I had the pleasure of taking David Laibson's Ec10 course four years ago, so it was great to hear his words of wisdom once again. The most surprising thing I heard him say in this interview with Prof. Goodman came during their discussion of machine learning. It was striking how quickly Prof. Laibson stressed the importance of the training process behind these models, pointing out that they will inevitably be biased in some way by the data they are trained on. In 2024, this is one of the biggest hurdles facing LLMs. Google had to pull back its AI over alleged left-wing political bias, and other AIs are being trained in the opposite direction. David Laibson put his impressive predictive ability on display by identifying this flaw so early. He calls it a "fantasy" that we are "somehow escaping human error" by using machine learning, and he is absolutely right. ChatGPT does not have all the answers, and it will in fact refuse to answer many questions for this exact reason.