Something that surprised me in the conversation Prof. Goodman had with Susan Murphy and Brendan Meade was the discussion of how humanity’s access to vast quantities of data and unparalleled computing power has changed the way “science” is done. I found it fascinating that Dr. Meade described Earth scientists studying earthquakes as trying to remain “humble,” in the sense that they don’t claim to know all of the exact theoretical physics a priori. In past discussions of simulation in our course, we’ve emphasized the importance of the models on which these simulations are based, and how these models both directly reflect and parametrize the complex processes occurring in their settings. The idea that current science and future simulations could be based on recurring patterns identified before theoretical models and equations are established is an interesting one.
If I could add a question to the conversation between the three scientists, I would ask what they see as the possible shortcomings of this new “data science” approach to prediction. The group spoke briefly about how some of the “features” identified by data science algorithms are not entirely interpretable or communicable at the human scale or in human language. Is it possible that phenomena like this compound into more and more abstract representations of the world or setting at hand? How do they manage the risk of relying on algorithms that may miss the forest for the trees? And is it the case that, as long as the outcome is “accurate,” we trust these models and predictions even if we do not know their exact machinery?