The part of this interview that I believe will stick with me in a year is the contrast between how the traditional scientific method moves from data to prediction and how machine learning/AI accomplishes the same task. Humans work through intermediate steps: data to rules, rules to theories, theories to explanations, and finally predictions, whereas algorithms go straight from datasets to predictions with what can only be described as a "black box" process between the two. This latter approach struck me as fundamentally un-human until Dr. Hall mentioned that it is essentially how human infants learn (using the example of the dog). I found it fascinating that somewhere along the way, our developing brains add steps between experience (data) and judgments (predictions).
It is also this aspect of the interview that I found most salient concerning the future. The data-to-prediction method is directly at odds with both the scientific method and the way humans create meaning from experience, that is, how they use data to form judgments and predictions in everyday life. I can't contemplate the prospect of machine learning/AI serving as a tool to aid humanity without fearing that the convenience and expedience of widespread data-to-prediction capabilities will erode or displace the more traditional multi-step method. I believe (and I don't think this is an uncommon stance) that the ideal role for AI is as a helper to human progress and innovation (think of a coding sidekick) rather than a wholesale replacement for it (like AI-generated art). Again, the stark difference between the way humans arrive at predictions and the way machine learning/AI systems do makes me believe that the latter arrangement, supplanting rather than helping, might become all too common.