While not necessarily surprising, the commentary about the "computer-generated" nitrogen fertilizer problem piqued my interest the most (along with the importance of language when talking about these problems). The analogy between the misuse of algorithms and the improper design of airbags feels particularly important to the conversation about the role of AI in consequential decisions. I believe humans have an obligation to use AI responsibly as a means to supplement our own knowledge and abilities, where we retain the final say, rather than treating it as an end in itself, as in the fertilizer example. We must focus on creating the best, most ethical designs for applications that have real-life consequences, especially in the age of tools like ChatGPT.
I would love to ask about the implications for prediction of the more powerful generative AI models that have been built recently (ChatGPT, DALL-E, etc.). Many of these applications have been used mostly for entertainment or exploration. As more people explore the abilities of these new technologies and share them with the world, however, I wonder how Ben Shneiderman thinks this use will shift how humans use technology to predict things in the real world. Because the underlying technology of many of these generative systems is AI distinguishing good answers from bad ones, I also wonder about Shneiderman's thoughts on whether "better" generative models (whether that is measured by more realistic output or more truthful output) will affect the trust we place in AI to predict outcomes.