In his interview, Professor Ben Shneiderman offered a much more nuanced perspective on artificial intelligence and on what he called the 'magic pixie dust' illusions commonly associated with AI and machine learning, and with what these tools offer researchers and casual users alike. Central to his argument was the idea of algorithmic hubris: the assumption that an ML model performing optimally today will continue to work well in the future, despite the dynamic and rapidly changing environment it operates in. Supplying models with more data and increasing their complexity does not guarantee that they will keep producing equally good, or better, predictions; in fact, overfitting to new information in response to environmental change can leave us worse off. This is exactly what he described in the Google Flu Trends project, where the algorithm failed despite early success because of changes in users' search behavior and in the particulars of Google's own search engine. Those failures had real, negative implications for public health outcomes, and they imply that any field or decision-making process built on AI/ML carries an increased vulnerability to failure and to harming the very people it is meant to serve.
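To make the hubris point concrete for myself, I put together a toy sketch (my own illustration, not something from the interview): a regression model fit while search volume tracks flu incidence closely, then scored again after search behavior shifts. The variable names and the specific numbers are invented purely for illustration.

```python
# Toy sketch of "algorithmic hubris" under distribution shift, loosely
# inspired by the Google Flu Trends story. All numbers and names here are
# invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Old regime: search volume tracks true flu incidence closely.
flu_old = rng.uniform(1, 10, size=500)
searches_old = 3.0 * flu_old + rng.normal(0.0, 1.0, size=500)

model = LinearRegression()
model.fit(searches_old.reshape(-1, 1), flu_old)
print("R^2, old regime:", model.score(searches_old.reshape(-1, 1), flu_old))

# New regime: media coverage inflates searches independently of incidence,
# so the relationship the model learned no longer holds.
flu_new = rng.uniform(1, 10, size=500)
searches_new = 3.0 * flu_new + rng.normal(8.0, 4.0, size=500)
print("R^2, after shift:", model.score(searches_new.reshape(-1, 1), flu_new))
# The second score drops sharply (it can even go negative), even though the
# model fit its original training data nearly perfectly.
```

The point is not the specific numbers but that nothing inside the model signals that the world has changed; only ongoing human oversight catches that kind of failure.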
I found this discussion to be a bit of a wake-up call, being the somewhat blind fan of AI that I am, because it highlights the structural limitations of AI: many tools and applications essentially operate as black boxes, and the lack of human oversight over the results they produce is eroding public trust in them. I found myself agreeing with Professor Shneiderman's assertion that AI should be treated as a supplemental tool and never as a partner, so that we avoid confusing machine capability with human cognition. The latter is critical and irreplaceable, especially in settings and jobs that require empathetic interaction with humans and other living creatures, or in clinical, legal, and policymaking decisions that call for understanding the needs of patients, clients, and constituents within their broader sociocultural contexts. As Professor Shneiderman argued, and as I now believe, we should shift our focus away from anthropomorphizing AI tools and toward designing more accountable and explainable systems; that way, we can preserve the benefits of AI/ML without degrading the importance of human intellect and instinct.