The interview with Professor Laibson was very insightful, and the most surprising takeaway was how far off our current climate prediction models could be. From previous classes I learned that the Intergovernmental Panel on Climate Change (IPCC) projects that continued use of fossil fuels would result in emissions of about 25 to 30 gigatons of carbon dioxide, pushing atmospheric concentrations over 1,000 parts per million before the end of this century; this could melt almost all of the ice caps and cause global sea levels to rise approximately 70 meters (about 230 feet). The interview showed me that predictions like this can be misleading: climate prediction models are built on historical data and trends, so they may not capture the full range of potential outcomes, including extreme climate events, simply because extreme events are rare and may not be well represented in the historical record.
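To see why a short historical record can understate extremes, here is a toy simulation; this is entirely my own illustration, and the distribution and numbers are made up rather than taken from the interview or the IPCC. Drawing hypothetical "annual extreme event" magnitudes from a heavy-tailed distribution shows that the worst event observed in a few decades of data badly understates what the full distribution can produce.

```python
# Toy illustration (not from the interview or the IPCC): with a short
# historical record, the largest event you have observed can badly
# understate the largest event that can actually occur.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical heavy-tailed "annual extreme event" magnitudes (Pareto-like).
true_events = rng.pareto(a=2.0, size=1_000_000)

# Compare the worst event seen in short records vs. a very long one.
for record_len in (50, 150, 1_000_000):
    sample = true_events[:record_len]
    print(f"worst event in a {record_len:>9,}-year record: {sample.max():8.1f}")
```

Running this, the 50-year maximum is typically far smaller than the full-sample maximum, which is the same reason a model fit only to recorded history can miss the tail outcomes the interview warned about.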
In the interview, Professor Laibson shares his view on machine learning and argues that everyone, especially students, should learn about it, and should also recognize that the models we create with machine learning can be unintentionally biased by our own assumptions and opinions. People generally want to use ML for social good, but they still run into difficulties. An example from the ChatGPT discussion we had two weeks ago showed that some AI models built with ML develop and exhibit patterns that were never intentionally embedded in them. I also know that some individuals would use these tools for harm. So my question is: given the ethical implications of machine learning algorithms, how can we ensure that they are used in a responsible and equitable way? Do we need an independent body to oversee all the work being done through machine learning?
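As a concrete sketch of how a model can pick up a pattern nobody intentionally put there, here is a minimal, made-up example; the loan-approval setting, the feature names (`group`, `zip_score`, `skill`), and all the numbers are my own assumptions, not anything from the interview or the class discussion. The model is never shown the sensitive attribute, yet its predictions still differ across groups because a correlated proxy feature carries the historical bias through.

```python
# Minimal sketch of unintended bias via a proxy variable.
# Everything here (setting, features, numbers) is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# 'group' is a sensitive attribute the modeler never feeds to the model,
# but 'zip_score' is historically correlated with it (a proxy).
group = rng.integers(0, 2, size=n)              # 0 or 1
zip_score = group + rng.normal(0, 0.5, size=n)  # proxy for group
skill = rng.normal(0, 1, size=n)                # what we actually want to measure

# Historical labels were biased: group 1 was approved less often than
# skill alone would justify, so the training data encodes the unfairness.
approved = (skill - 0.8 * group + rng.normal(0, 0.5, size=n)) > 0

# The model only ever sees 'skill' and 'zip_score' -- no sensitive attribute.
X = np.column_stack([skill, zip_score])
model = LogisticRegression().fit(X, approved)

# Yet predicted approval rates still differ by group, via the proxy.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

The point of the sketch is that no one wrote "treat group 1 differently" anywhere in the code; the disparity emerges from the training data alone, which is exactly the kind of unintentional bias the question above is asking how to oversee.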