In the interview with Ben Shneiderman, I learned a lot about AI: not only the components of AI and machine learning, but also the current limits of the technology, which more or less dictate its viability in certain predictive situations. I am not an expert in this field, so it was very interesting to learn the basic components and distinctions of AI, especially the differences between machine learning methods and statistical methods. Before watching this video, I had a difficult time distinguishing the two methods from one another, which inhibited my understanding of artificial intelligence. What I learned from the interview in this context was more interesting than surprising, but it gave me a better understanding of these distinctions. In the interview, two Harvard undergraduate students worked alongside Shneiderman to highlight these differences. I was surprised that one of the students characterized machine learning as the ability to examine many different factors deeply and draw strongly connected correlations, whereas statistical methods, the student remarked, yield more surface-level correlations. I had always understood that statistical methods could also allow for a deeper understanding of connections rooted in quantitative knowledge, but from this video it became clearer that there are both quantitative and qualitative aspects to AI that allow for better connections between idea or theory conception and predictive modeling. Another interesting idea brought up in the interview was that one must be wary of making misguided assumptions about the power of AI. Instead, we need to realize that there isn't "magic pixie dust" creating these connections and that "mindless acceptance of their potency is extremely dangerous."
One last surprising thing I learned from this video is that AI can often be less accurate in its predictions than statistical methods because of the complexity of AI components. Shneiderman also highlights the danger of putting too much trust in AI's predictions, a worrying habit that many people have. This idea was the most surprising to me because, before watching this interview, I felt that AI would be able to predict more accurately than humans evaluating statistical models; yet experts in this field seem to say that while the technology is a valuable tool, the knowledge and predictions it produces should be taken with some skepticism, as with the problems Google Flu Trends encountered in 2009.
Saying "AI can often be less accurate" than traditional methods, while true, obscures just how diverse AI and machine-learning techniques can be. It might be better to frame this as "different predictive methods work better or worse on certain problems, whether AI or non-AI." For example, while neural networks and search have come to dominate Go and chess, they fail badly when applied to partial-information games (poker AIs, for instance, usually rely on counterfactual regret minimization techniques).
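To make the counterfactual-regret idea concrete, here is a minimal sketch of its core building block, regret matching, applied in self-play to rock-paper-scissors. This is an illustrative toy, not the method discussed in the interview, and all names in it are my own; real poker CFR solvers extend this update across game trees with hidden information.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def get_strategy(regrets):
    """Regret matching: mix actions in proportion to positive regret."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total > 0:
        return [p / total for p in positives]
    return [1.0 / ACTIONS] * ACTIONS  # no positive regret: play uniformly

def sample(strategy, rng):
    """Draw one action from a probability vector."""
    r, cum = rng.random(), 0.0
    for i, p in enumerate(strategy):
        cum += p
        if r < cum:
            return i
    return ACTIONS - 1

def train(iterations=50000, seed=1):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strat_sums = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [get_strategy(r) for r in regrets]
        moves = [sample(s, rng) for s in strats]
        for p in range(2):
            me, opp = moves[p], moves[1 - p]
            got = payoff(me, opp)
            for a in range(ACTIONS):
                # Regret: what action a would have earned minus what we got.
                regrets[p][a] += payoff(a, opp) - got
                strat_sums[p][a] += strats[p][a]
    # The time-averaged strategy converges toward equilibrium play.
    return [[s / sum(ss) for s in ss] for ss in strat_sums]

avg = train()
print(avg[0])  # each probability should be close to 1/3, the RPS equilibrium
```

The point of the toy: neither player "learns a pattern" in the machine-learning sense; each simply accumulates regret for actions it didn't take, and the average of its play drifts toward the game-theoretic optimum, which is why this family of methods handles hidden-information games that search and supervised learning handle poorly.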