One of the most surprising and interesting things for me from this interview with Ben Shneiderman was his concern about language. He flagged the use of the word “partnership” when referring to AI software as an important issue to him. He explained that machines should never be seen or described as partners or collaborators, but rather as “tools.” Especially amidst rising conversations surrounding AI, the language we use has a great impact on how we understand AI’s roles and applications. I also liked his explanation of why we should not say a tool “mimics” human actions (e.g., a telescope doesn’t mimic the eye, and an iPod doesn’t mimic a musician). He made it clear that the goal of good design should be to build tools that empower people, rather than machines that mimic or replace humans.
If I were conducting the interview, I would have pressed further on these concerns about language. How can people be most precise and thoughtful in their language throughout the design process? Does the issue start with the goal of the project? How does the language the media uses shape public perception of AI? Especially with fears of machines already displacing jobs, and with an abundance of apocalyptic sci-fi stories of robots taking over, I think the boom in conversation surrounding AI can head in a scary direction. I would also ask, more specifically: why is it dangerous for companies to use human metaphors to signal that their machines are successful? How does that shape our societal perception of a machine’s capabilities and limitations? Many things influence people’s perception of AI, whether their education, generation, or area of work; however, it is especially interesting to consider how much of our understanding started with the way companies advertised their products or software. Another important question would be: with the increasing prominence of AI in various fields, how should the language we use vary across those applications? The range of and variation between applications can cause further confusion, or even fear, surrounding the rise of AI. Thinking about AI as a tool is an important first step, but what specific terminology is best in legal or policy decisions, as opposed to medical care or the financial sector?
I think these concerns are highly important - and the book he referenced, Technics and Civilization, covers them at length. One point your comment picks up on, which Mumford really draws out in his book, is the role of various political and socioeconomic institutions in shaping the unconscious norms we bring to emerging technology. How different sectors should approach language is a tricky question, and I suppose it walks a fine line between what language is "comfortable" for that area and what is genuinely accurate.