One prediction Ben Shneiderman made in 2018 that really stuck with me was his argument that computers, and especially AI, should be treated strictly as tools, never as partners or collaborators. He warned that calling machines our “partners” would distort how we think about them, leading us to trust them too much or expect too much from them. Honestly, that prediction has aged really well.
Fast forward to now, and he was spot on about the language problem. People constantly say things like “ChatGPT said this” or “AI figured that out,” as if a person were making decisions rather than a program matching patterns. We talk to AI tools as though they understand us, even though deep down we know they’re just crunching numbers and spitting out probabilities. It’s easy to see how this creates confusion, and not just for casual users: even people building or deploying AI systems slip into it. Shneiderman was right that how we talk about AI shapes how we think about it.
That said, I think he may have underestimated how real these tools would come to feel. Sure, they’re still just software, but when something responds to your writing in real time or turns your words into images, it feels different from using a hammer or a telescope. Even if it’s technically a tool, it interacts in a way that’s far more social. So I get why people slip into calling it a partner; it’s not accurate, but it’s understandable.
So overall, I’d say Shneiderman was right to push back on the hype and to remind us to be careful about how we frame AI. At the same time, the way people use AI today blurs the tool-versus-partner line more than he probably expected. The language he warned us about isn’t just hype; it also reflects how these tools feel in everyday use.