Forum Posts
otrefanenko
Harvard GenEd 2025
Apr 23, 2025
In Thoughts from Learners
In his 2018 interview, Shneiderman draws on Lewis Mumford’s 1934 work Technics and Civilization to describe what Mumford called the “obstacle of animism”: our reflex to imbue new technologies with human or animal characteristics rather than treating them as tools in their own right. It reminds me of the conversations we often have after watching scary films about robots, about how human-like they seem. As Shneiderman explains, “every technology goes through an early thing in which the design is meant to mimic the human form, or animal forms.”
Mumford’s and Shneiderman’s retellings show that early designers sought to replicate life, from automata carved as dancing birds to mechanical figures in cathedral clocks, believing that human-like form would make machines more acceptable. Yet this very mimicry often sidelined simpler, more robust solutions, delaying progress until engineers outgrew the urge to “play human” and focused instead on functionality. So, as he warned, attempts to “mimic human form or action” delay successful technologies.
Today we can see the same pattern in chatbots with overly “friendly” avatars, voice assistants that insist on “small talk” (why do Siri and ChatGPT need to ask me how my day is going?), and robots built with cartoonish faces – efforts that do little to improve reliability or transparency. By treating AI as a quasi-partner rather than a statistical tool, we end up prioritizing persona over performance, echoing Shneiderman’s warning that animistic design hampers the true potential of our creations.
Personally, it also seems like a kind of propaganda, aimed at both the developers and the consumers. For the developers, it helps justify what they are creating and makes it feel more real, creating the illusion that they have actually built intellect. For the consumers, it makes them feel closer and more attached to the robots by portraying them as more humane – it nudges people to say “Hi” before asking for help, or “please” and “thank you.” I’m not sure what the goal of this is or what the end result will be, but it’s still a very interesting (and a little bit disturbing) thing to witness.
Shneiderman insists that only “when you transcend that, and you now think of the way you build tools that empower people, do you really get the powerful technologies.” By abandoning the impulse to anthropomorphize and instead focusing on clear interfaces, explainability, and predictable behavior, we unlock AI’s real promise: augmenting human creativity, improving decision‐making, and building systems we can understand, trust, and control.
otrefanenko
Harvard GenEd 2025
Apr 15, 2025
In Thoughts from Learners
Reading through both interview transcripts got me thinking: what if our search for extraterrestrial signals reveals more about our own stubborn limitations than about any aliens out there?
On the one hand, there’s the hopeful view among many of us that finding a signal (those elusive “frequency” or “time compression” bursts we expect from our own technology) could unify us as “Earthlings” and break down our differences. Then, during the discussion with Jill Tarter, Professor Goodman half-jokingly said, “Clearly that’s alien technology” when talking about curious anomalies like ‘Oumuamua, and mentioned that Avi Loeb even wrote a paper suggesting we take these oddities seriously. That moment got me thinking: if we do spot something that’s unmistakably out of the ordinary, could our deep-rooted scientific assumptions and cultural biases actually cause us to dismiss or even distort the truth, just to keep things familiar?
This question is hard to answer because it forces us to face our own resistance to change. And with change, one can never be certain whether it will turn out to be positive or negative. While Tarter’s words inspire a vision of what might happen if we embrace the discovery and evolve as one people, Loeb’s perspective introduces a dose of reality: our methods and traditions might blind us to the truly transformative. And, as with the example of a car passing by on the street, there are too many details we might not think of when exploring something new (maybe the framework we plan to apply works perfectly for Earth but not for the Sun, and we don’t even know it). If our instruments are tuned to see only what we expect, then even a genuine alien signal might get swept aside as just another anomaly to be “explained” away. It feels like the real challenge isn’t just finding extraterrestrial life; it’s whether we’re brave enough to let that discovery shake up everything we thought we knew about ourselves.
I think this is a problem not just for astrophysics, but for science in general. I also don’t know whether it can ever be fully eliminated.