In his talk with Alyssa Goodman, David Laibson mentions at 25:26 that we need to be wary of the perils of ML by educating people about what exactly machine learning can and can't do. He then goes on to discuss the consequences of a world in which we blindly trust AI, but doesn't elaborate on his earlier point. What, in his opinion, are the real limitations of ML? How far can we take it before its usefulness runs out? Will it ever completely replace Alyssa's rational prediction-making approach of skipping over the rule/theory and going straight from data to prediction?