Ben Shneiderman’s 2018 conversation on artificial intelligence was an interesting portal into the past, especially in its skepticism of AI hype and its focus on responsible design. I think his insistence that we view AI not as partners but as tools is still a powerful counter to the language often used today, where machines are anthropomorphized as “thinking” or “knowing.” As AI systems are increasingly embedded in daily life, from medical diagnostics to content generation, his emphasis on comprehensibility, predictability, and controllability feels more urgent than ever.
The example of Google Flu Trends, which initially showed promise but ultimately failed due to shifting algorithms and a lack of transparency, illustrates the very real dangers of algorithmic overconfidence. That said, some of Shneiderman’s views now feel limited in light of how quickly AI has advanced. Today’s generative AI models, like GPT-4, are far more interactive and capable than anything imagined in 2018. While they are still tools at their core, the way we engage with them when co-writing, brainstorming, or coding does resemble a kind of partnership, even if only metaphorically. The challenge is not in rejecting the metaphor entirely, but in using it carefully and with nuance.
Overall, the video’s central argument that language matters and that AI’s real power lies in augmenting rather than replacing humans has stood the test of time. It reminds us that the most important questions in AI are not just technical but ethical, linguistic, and human-centered. As we move forward, we should heed Shneiderman’s call for transparency and accountability, while also remaining open to the evolving ways people and machines can meaningfully collaborate.