Something I found very interesting and will definitely remember a year from now is the discussion about "ad hoc" additions to a model. If we were concerned only with the accuracy and utility of our models, it would not really matter whether the rules were narrow patches that apply to particular situations rather than fundamental, general principles. However, there are certain observed laws of nature that we as humans feel the need to understand more deeply, even when we can already reliably predict how they will behave. This made me think about the difference between being able to reliably predict outcomes and fully understanding why those outcomes occur (if that can ever be achieved). It also made me think about some of the mathematical models I have worked on, where I fit parameters to data without ever really considering the underlying reasons those numbers work best, as the sketch below tries to show.
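To make that last point concrete, here is a minimal sketch of the kind of fit I mean. The data and the polynomial form are invented purely for illustration, not taken from any real model I worked on; the point is only that the fitted numbers predict well while meaning nothing in themselves.

```python
import numpy as np

# Hypothetical measurements: the data-generating curve below is made up
# purely for illustration.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = 2.0 * np.sin(t) + 0.3 * t + rng.normal(0.0, 0.1, t.size)

# Fit a degree-5 polynomial. The six coefficients reproduce the data
# fairly well, yet none of them corresponds to anything "physical";
# they are simply the numbers that happen to work best.
coeffs = np.polyfit(t, y, deg=5)
y_hat = np.polyval(coeffs, t)

print("fitted coefficients:", coeffs)
print("mean absolute error:", np.abs(y - y_hat).mean())
```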
I think the discussion about machine learning altering our perception of science will affect both my future and society's. As machine learning and computing become more powerful, we may be able to use machine learning to accurately predict the outcomes of processes we don't fully comprehend. Unlike in science as we currently practice it, in machine learning it is much harder to interpret how conclusions are drawn. This is because of how these systems operate: they take in data and pass it through the layers of an optimized neural network, each of which performs its own transformation. It is very hard for humans to work out what is being computed at each step, even when the model is highly accurate, as the small sketch below suggests. Using artificial intelligence may allow us to make better predictions and scientific discoveries, but without understanding the underlying theories we won't have the same conception of science. I wonder whether this will really constitute forward progress in science, or whether we will simply end up relying on technologies we don't fully comprehend.
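As a toy illustration of the opacity I mean (a sketch using random, untrained weights rather than any real trained model), even in a network this small the intermediate values are easy to print but hard to assign any human-readable meaning:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=4)                                # a hypothetical 4-feature input

W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)  # layer 1 weights and biases
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)  # layer 2 weights and biases

h = np.maximum(0, W1 @ x + b1)  # hidden activations: what do these numbers "mean"?
y = W2 @ h + b2                 # the prediction we would actually use

print("hidden layer:", h)       # fully inspectable, yet not interpretable
print("prediction:", y)
```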