Link to interview -
While not necessarily surprising, the commentary about the "computer-generated" nitrogen fertilizer problem piqued my interest the most (along with the importance of language when talking about these problems). The analogy between the misuse of algorithms and the improper design of airbags feels particularly important to the conversation about the role of AI in consequential decisions. I believe humans have an obligation to use AI responsibly, as a means to supplement our own knowledge and abilities, where we have the final say, rather than as an end in itself, as in the fertilizer example. We must focus on creating the best, most ethical designs for applications that have real-life consequences, especially in the age of tools like ChatGPT.
I would love to ask about the implications for prediction of the more powerful generative AI models that have been built recently (ChatGPT, DALL-E, etc.). Many of these applications have been used mostly for entertainment or exploration. As more people explore the abilities of these new technologies and share them with the world, how does Ben Shneiderman think this use will shift how humans use technology to predict things in the real world? Because the underlying technology of many of these generative systems is AI predicting good answers versus bad answers, I also wonder about Shneiderman's thoughts on whether "better" generative models (whether that is measured in more realistic output or more truthful output) will affect the trust we put in using AI to predict outcomes.
Very interesting question of how AI will come to change our world and aid us in important decision-making processes. I think this aid should remain very limited for now, however, as we have already seen how badly it can turn out. Just this week I heard of an example in which a judge in India used ChatGPT to look up a law to determine whether he would send a suspect to jail. Legal processes like these have long been championed as something that should be influenced only by peers who have empathy and can grasp the situation better than a computer. This touches on the "partner vs. tool" idea that Shneiderman raised, where it has to be asked: if we now trust AI to make court decisions that have long been thought trustworthy only in the minds of one's peers, do we now consider AI a partner and not a tool? Very interesting post!