The most surprising thing I learned is that we will never truly be able to remove human biases from the way we train machine learning models. This was surprising because we tend to assume technology is entirely objective, yet it is built by humans, and their biases carry over into the data and design choices. These flaws can show up in ML software that does facial recognition, for example, where bias baked into the training data or the model itself can cause serious misidentifications. Removing these biases is also challenging because people often don't realize they are being biased in the first place, which underscores the need for appropriate guardrails around ML, a point Laibson emphasizes. ML is a powerful form of prediction, but only when it is used within the correct scope.
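To make the training-data point concrete, here is a small, hypothetical sketch (synthetic data and made-up group sizes, not anything from the course or a real system) of how underrepresenting one group in the training set can produce uneven error rates, using scikit-learn:

```python
# Hypothetical illustration only: synthetic "groups" and invented sample sizes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, signal_dim):
    """Binary task where the class signal lives in a group-specific feature."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 5))
    X[:, signal_dim] += 2.0 * y  # class signal appears only in this feature
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(2000, signal_dim=0)
Xb, yb = make_group(40, signal_dim=1)
model = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]),
                                              np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets: good accuracy for A, near chance for B.
Xa_test, ya_test = make_group(1000, signal_dim=0)
Xb_test, yb_test = make_group(1000, signal_dim=1)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```

The gap here does not come from a bug in the model; it comes from the data someone chose to collect, which is exactly the kind of human decision that guardrails are meant to catch.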