Ben Shneiderman predicted that what goes on behind the scenes in machine learning should be made transparent and monitored through diagnostics. I think this idea was on point, because today's AI models often explicitly lay out their reasoning when generating responses. For example, before ChatGPT outputs an answer, it sometimes restates the task as it understands it and then works through possible ways to respond. This visibility makes these models more transparent and controllable for developers working to improve their accuracy and safety. In addition, responses often come with follow-up warnings and cautionary messages intended to encourage safe use of the output.
However, parts of this prediction were not exactly correct. Even though users can sometimes see the “thought process” of these AI models, much of what happens internally remains unknown. Using ChatGPT as an example again, it often gives a response directly, without explaining its reasoning unless the user asks for it as a follow-up. As for the diagnostics part of the prediction, that is not really visible on the user end. Some models may expose such diagnostics to developers, but this transparency is largely absent for the average user. Much of machine learning today remains a “black box,” and people are generally unaware of what computers are doing behind the scenes. Overall, Shneiderman’s prediction was not entirely on point, but it did capture an accurate trend: developers are steadily working to make these models more understandable and reliable.