I found Professor Goodman's conversation with David Laibson particularly interesting because behavioral economics, and the aspiration to "deterministic" models of human behavior, fascinates me. Their discussion of psychology especially stood out to me. As PredictionX frequently notes, AI has removed multiple steps from the historical "Padua rainbow."
With AI, we can move directly from data to predictions, bypassing the rule, theory, and explanation steps. This may allow us to solve increasingly complex problems, but it could also have negative consequences. When we set AI loose on a problem, we don't always understand its thought process. As Laibson points out, we may miss important flaws in how an AI reaches a conclusion, as when facial recognition systems exhibit racial biases. As we integrate AI into more and more aspects of our lives, it may be crucial to invest in understanding its internal processes.