The discussion with Ben Shneiderman was surprising in general: I wasn't expecting, but came to appreciate, the importance he placed on specificity of language when discussing AI, its existence as a tool and not a partner, and "the future of the future." However, the most surprising thing I heard was about algorithmic hubris and the often undue trust placed in algorithms to "get it right." I can see this hubris contributing to a problem that Shneiderman later mentioned: technological tools perpetuating racism, antisemitism, sexism, etc. (see the example below, where Black natural hairstyles come up in response to a search for 'unprofessional hair for work').
Those prejudiced or stereotyped responses may be reified if we take the algorithm to be objective, right, or true, when in reality there are ultimately humans, with their human biases, behind it all. To stave off this hubris and the dangerous unquestioning attitude toward AI, Shneiderman suggests studying past failures and how they came to be. This approach seems to me to match the important "Evaluate Accuracy" and "Make Changes" cycle of the modern Predictive Systems Framework, if on a slightly more meta level.