I found Professor Goodman's conversation with David Laibson particularly interesting because behavioral economics, and the aspiration to "deterministic" models of human behavior, fascinate me. What stood out most, though, was their discussion of the "psychology." As PredictionX frequently notes, AI has removed multiple steps from the historical "Padua rainbow."
With AI, we can move directly from data to predictions, bypassing the rule, theory, and explanation steps. This may allow us to solve increasingly complex problems, but it could also have negative consequences. When we set AI loose on a problem, we don't always understand its thought process. As Laibson points out, we may miss important flaws in how the AI reaches a conclusion, as when facial recognition systems exhibit racial bias. As we embed AI in more and more aspects of our lives, it may be crucial to invest in understanding its internal processes.
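One concrete place to start, even before we can open the black box, is auditing a model's outputs across groups. Here is a minimal sketch of that idea in Python; the data, the group attribute, and the choice of model are all synthetic placeholders I made up for illustration, not anything from the course:

```python
# Minimal fairness-audit sketch: compare a black-box model's error
# rates across demographic groups. All data here is synthetic; in
# practice X, y, and `group` would come from a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features, labels, and a group attribute (hypothetical)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
group = rng.integers(0, 2, size=1000)  # stand-in for a demographic label

# Train a "black-box" predictor (any classifier would do here)
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Audit: does accuracy differ between the groups?
for g in (0, 1):
    mask = group == g
    acc = (pred[mask] == y[mask]).mean()
    print(f"group {g}: accuracy = {acc:.3f}  (n = {mask.sum()})")
```

An audit like this won't explain why a facial recognition model errs more often for one group, but it can at least surface that the flaw exists before the system is deployed.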
I think you bring up a really interesting point about how well we understand the way artificial intelligence and machine learning work. Although we may not be able to see every step of the prediction-making process, I do think it is possible to program a machine to learn in a way that we can also understand, for example by having it look for specific evidence or by teaching it how to interpret certain data. Even though there are flaws in how AI reaches its conclusions, I believe we can improve it by improving the way we, as humans, implement it.
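The idea of a machine learning "in a way we can also understand" is often called interpretable machine learning. As a rough sketch of what that can look like (my own illustration, using scikit-learn's built-in iris dataset as a stand-in), a shallow decision tree trades some predictive power for rules a human can actually read:

```python
# Minimal sketch of choosing an interpretable model on purpose:
# a shallow decision tree whose learned rules can be printed and read.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)

# The model's "thought process" is inspectable as explicit if/then rules
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Deliberately picking models like this, where the learned rules are visible, is one way we, as humans, can implement AI more carefully.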