In his interview on Behavioral Economics, David Laibson said he would never expect our understanding of physics to reach the point of accurately predicting anything and everything; because of the uncertainty principle and a degree of randomness in the universe, we can never know everything. My question to him would have been about AI trained on previous events as a tool to inform economic policymakers: are those past events too historically flawed and biased for this to ever be a feasible option for contemporary society? Will AI always have to be used hand in hand with human oversight, or is there a way that Behavioral Economics can inform policymakers and possibly correct for human bias?
I find this to be a very interesting question. Though we are now generally more conscious of bias in the data we use to train models, today's data can still produce flawed and biased AI, just as historical data can. Just how much should we account for human bias in order to be responsible when consulting AI to make decisions or predictions?