In his 2018 interview, Shneiderman draws on Lewis Mumford’s 1934 work Technics and Civilization to describe what Mumford called the “obstacle of animism” – our reflex to imbue new technologies with human or animal characteristics rather than treating them as tools in their own right. It reminds me of the conversations we often have after watching scary films about robots, remarking on how similar to humans they seem. As Shneiderman explains, “every technology goes through an early thing in which the design is meant to mimic the human form, or animal forms.”
Mumford’s and Shneiderman’s retellings show that early designers sought to replicate life, from automata carved as dancing birds to mechanical figures in cathedral clocks, believing that human-like form would make machines more acceptable. Yet this very mimicry often sidelined simpler, more robust solutions, delaying progress until engineers outgrew the urge to “play human” and focused instead on functionality. So, as he warned, attempts to “mimic human form or action” would delay successful technologies.
Today, we can see the same pattern in chatbots with overly “friendly” avatars, voice assistants that insist on “small talk” (why do Siri and ChatGPT need to ask me how my day is going?), and robots built with cartoonish faces – efforts that do little to improve reliability or transparency. By treating AI as a quasi-partner rather than a statistical tool, we end up prioritizing persona over performance, echoing Shneiderman’s warning that animistic design hampers the true potential of our creations.
Personally, it also seems to me that this is a kind of propaganda – aimed at both the developers and the consumers. For developers, it justifies what they are building and makes it feel more real, creating the illusion that they have actually created intelligence. For consumers, it works by making them feel closer and more attached to the robots, portraying the machines as more humane. It nudges people to say “Hi” before asking for help, or “please” and “thank you.” I’m not sure what the goal of this is or what the end result will be, but it’s still a very interesting (and slightly disturbing) thing to witness.
Shneiderman insists that only “when you transcend that, and you now think of the way you build tools that empower people, do you really get the powerful technologies.” By abandoning the impulse to anthropomorphize and instead focusing on clear interfaces, explainability, and predictable behavior, we unlock AI’s real promise: augmenting human creativity, improving decision‐making, and building systems we can understand, trust, and control.