Something that surprised me when I watched Professor Goodman’s conversation with Ned Hall was their discussion of how we practically use models to promote understanding and make predictions. Dr. Hall brings up an example from his undergraduate years, when he predicted various quantities using a model that combined elements of classical and quantum mechanics, even though he knew the underlying theory was inconsistent and his model was therefore “wrong.” This reminded me of a phrase I’ve heard many times throughout my time at Harvard, often attributed to the statistician George Box: “all models are wrong, but some are useful.” The idea that a model’s value comes more from its utility than its accuracy is an interesting one in the context of the two professors’ opinions about machine learning. To what extent are we willing to use “incorrect” models if there is still something to gain from them?
I really enjoyed the point Dr. Hall made about why humans do science: not just to make predictions, but also to understand the world. If I had conducted this interview, I would have additionally asked Dr. Hall about philosophical ideas related to causal inference. When we make predictions, we are trying to infer something about the future (or some other unknown) from what we know now. When we investigate causality, we are attempting to establish a direct link between two phenomena. I would be very interested in hearing how modern statistical ideas about causality have developed from their philosophical underpinnings, and in the philosophical distinction between making a prediction and establishing a causal relationship. Why do so many people conflate or confuse the two in practice?
Here's the link to the interview: https://www.labxchange.org/library/pathway/lx-pathway:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954/items/lx-pb:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954:lx_simulation:8bf7271d?source=%2Flibrary%2Fclusters%2Flx-cluster%3AModernPrediction