Behavioral Economics
In the interview with Professor Laibson, the discussion turned to the possible implications of AI, given that machine learning predictions are never fully accurate and may have biases built in. However, human intellect has always had to navigate bias and uncertainty in its own theories and predictions, even if we are imperfect at understanding these complexities. I would like to ask Professors Laibson and Goodman: given that humans are themselves somewhat self-correcting predictive systems, should we be as worried about the uncertainty and bias of machine learning, considering that our minds are trained to constantly double-check predictions, including AI predictions?