If I had conducted the interview with Prof. Shneiderman, I would have spent much more time discussing the technical details of what goes into modern AI systems. Throughout the interview, he discussed AI in a very abstract sense, generalizing across dozens of different approaches. For the points he was trying to make, this was a reasonable choice - if you're mainly interested in how humans will interact with AI, it might not matter whether ReLU neurons are used or not.
However, when discussing questions like "what is good data" or "when will the predictions of AI be accurate", the details of the implementation suddenly become very relevant. While some architectures (such as support vector machines) are quite sensitive to bad data, others (such as the GPT-3 network) are trained on so much data that a handful of bad examples has almost no impact on them. A similar story applies when discussing machine learning as a "black box" - while certain types of ML have this problem, others, such as decision trees, are very transparent.
All in all, if I were conducting the interview, I would have asked Prof. Shneiderman to discuss the specific architectures and approaches he was referring to, though I understand why he chose not to mention these details.
I agree, @Gavin Uberti , that Ben Shneiderman kept his remarks at the right level for this interview. I also agree that sometimes, non-experts (not Ben!) make sweeping generalizations w/o understanding technical differences that *do* matter—so, good point!