I watched Prof. Alyssa's conversation with Prof. David Laibson, titled "Behavioral Economics." In this video, the pair discuss the more "human" side of economics, Behavioral Economics, in the context of its relationship with general prediction, reception and rationality, climate change, and more. Of particular interest to me was the section titled "What can machine learning help us predict?", in which Alyssa presents a 21st-century version (shown below) of the Padua Rainbow, itself a concept explaining how predictions are formed from observations.
Using Google Maps as an example, she states that machine learning models, in the context of creating predictions, have eliminated three steps within the rainbow: (rule) making claims about how phenomena behave, (theory) proposing a set of relationships that make claims about the cause of certain phenomena, and (explanation) answering the questions ‘Why?’ and ‘How?’. While we had already discussed this in class, the graphic helped me hook onto the implications of machine learning much better, and to understand why the psychology of how machines learn is becoming ever more important to fine-tuning what 21st-century predictions mean.
If I had the chance to go back, I would ask follow-up questions about David's reasoning as to why he believes that both Padua Rainbows can exist concurrently in intellectual environments, in his words "working in parallel directions, sometime supporting each other." For context, Alyssa first poses a question asking why we can't effectively build ML models, similar to Google Maps, to make more economically focused predictions like the stock market. David responds that we can't for two reasons: first, the stock market aggregates so much data that it effectively acts as a random walk, and second, we don't have enough big data to effectively train a model. He follows by stating that though ML will begin to take a role in prediction in the 21st century, he doesn't think it will ever 'crowd out' the historical approach we see represented in the 20th-century Padua Rainbow. He's careful to note that more and more human decision making is based on machine learning.

I found this point of particular interest given the rise of generative ML models like ChatGPT, which I argue are at odds with the traditional prediction/decision-making process. In practice, we are beginning to see cases arise, in the specific context he gives of the intellectual environment, where ML models are used as a tool to replace decision making rather than support it. In particular, I would ask whether ML models acting as a replacement for decision making would change his view on whether they are supportive of human-based prediction in educational environments. I would also be interested in knowing whether reliance on machine learning changes how we view irrational decision making, given that machine learning models, especially in educational environments, are themselves a product of irrational human decision making.
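To make the random-walk point concrete for myself, here is a minimal sketch (my own illustration, not something from the video) of why a series whose changes are pure noise resists prediction: on simulated data, a forecast that simply repeats today's value does as well as, and here better than, one that tries to extrapolate yesterday's movement.

```python
# My own illustrative sketch (not from the video): if price changes are
# independent noise, past returns carry no information about future ones,
# so the best forecast of tomorrow's price is simply today's price.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a "price" as a random walk: each day's change is independent noise.
returns = rng.normal(loc=0.0, scale=1.0, size=10_000)
prices = 100 + np.cumsum(returns)

# Naive forecast: tomorrow's price = today's price.
naive_error = prices[1:] - prices[:-1]

# "Momentum" forecast: tomorrow's price = today's price + today's change
# (trying to extrapolate a trend that is not actually there).
momentum_error = prices[2:] - (prices[1:-1] + returns[1:-1])

print("naive RMSE:   ", np.sqrt(np.mean(naive_error**2)))
print("momentum RMSE:", np.sqrt(np.mean(momentum_error**2)))
# On a true random walk the momentum forecast is no better (here, worse),
# which is the sense in which the series is effectively unpredictable.
```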
Other references: Paper explaining how generative language models are built and a few cases of outsourced execution; Article hypothesising how genAI/ML models will impact human behaviour.
Yeah, I think this is really interesting! Particularly the fact that Padua Rainbows exist in intellectual environments. Also, the fact that stock markets are so random that they are unpredictable really surprised me! I'm not really an econ person, so I thought that was eye-opening.