
Forum Posts

Hunter Amos
Harvard GenEd 2023
Apr 12, 2023
In Health
In Sir David Spiegelhalter's conversation with Prof. Goodman, the pair discuss climate change simulation and the need to consolidate the efforts of different groups to make more accurate predictions. At the beginning of their conversation, they focus on how every model is imperfect in its own way due to differences in assumptions, resources, and expertise. It follows that having multiple groups working on simulations and models could provide a range of models, and the discrepancies between them could be amalgamated by a "super judge" to provide a better understanding of the problem. More specifically, Alyssa suggests that there should be a CERN (the European Organization for Nuclear Research, an intergovernmental organization that operates the largest particle physics laboratory in the world) for climate change, where all the groups doing big simulations and models of climate change could consolidate their efforts. The reason is that if you look at the IPCC report, multiple groups contribute their best estimates of what's going to happen, and statistically, people essentially just average those models and draw a big band of uncertainty around them. However, since every model has its imperfections, there are unforeseeable or unavoidable factors that can affect the outcomes. Therefore, David argues that a multiplicity of independent approaches, in conjunction with collaboration, is more valuable. He suggests that there should be multiple independent groups doing the same thing, which may produce different estimates with non-overlapping intervals. That way, if one of them had been doing it right, it would have had a narrow interval, and the others would have had wider ones. This can help us better understand the limitations of our models and avoid giving the public the false impression that we know everything.

I found myself super engaged in their conversation, as I enjoyed the focus on the nuance between collaboration on solutions vs. a variety of solutions, which are inherently similar but not the same. Overall, my takeaway was that though collaboration can be good, we may see a larger benefit when collaboration adds a variety of methods/tests (in this case, of climate change) to approach a solution; i.e., make sure many actors are employed toward answering the same question, but take advantage of collaboration by using contrasting methods with contrasting assumptions to see if the results agree.

On the note of assumptions and collaboration, I noticed that this talk centered on what collaboration looks like at a higher level, talking more about collaboration among institutions than about how that information is disseminated. This made me think about an article I read a while ago in Economic Impact, which concluded that "Governments cannot do it alone. Neither can the private sector, nor philanthropists, nor civil society. We need an economic transformation to end nature loss by 2030, reach net-zero emissions around 2050, and build resilience to the unavoidable impacts of climate change. All this must be done while developing sustainably and eradicating poverty." The article further specifies what collaboration should look like, saying that campaigns must be led by those most affected and be backed by research institutions and private stakeholders.
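To make the aggregation idea concrete, here is a minimal Python sketch of the two ideas above: averaging several groups' estimates into one band of uncertainty, and checking whether independent groups' intervals overlap. The group names and numbers are made up for illustration; they are not from the conversation or from the IPCC.

```python
# A minimal sketch of the two aggregation ideas discussed above:
# (1) averaging several models' estimates into one band of uncertainty, and
# (2) checking whether independent groups' intervals overlap.
# All names and numbers are hypothetical, purely for illustration.
import numpy as np

# Hypothetical best estimates (e.g., warming in degrees C) and the
# half-width of each group's stated uncertainty interval.
estimates = {"group_A": (2.7, 0.3), "group_B": (3.1, 0.4), "group_C": (2.2, 0.2)}

means = np.array([m for m, _ in estimates.values()])
halfwidths = np.array([h for _, h in estimates.values()])

# (1) "Just average the models": pooled mean, plus a band wide enough to
# cover every group's own interval.
pooled_mean = means.mean()
lo, hi = (means - halfwidths).min(), (means + halfwidths).max()
print(f"pooled estimate: {pooled_mean:.2f}, combined band: ({lo:.2f}, {hi:.2f})")

# (2) Spiegelhalter's point: if independent groups report intervals that do
# not overlap at all, at most one of them can be well calibrated.
intervals = sorted(zip(means - halfwidths, means + halfwidths))
non_overlapping = all(
    nxt_lo > prev_hi
    for (_, prev_hi), (nxt_lo, _) in zip(intervals, intervals[1:])
)
print("non-overlapping intervals:", non_overlapping)
```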
In light of the conversation between Alyssa and David, I would ask both of them what collaboration would look like between those most affected by climate change and institutions, given their ideas about communication between research institutions. More specifically, I would ask the two on whom they think the burden of communication/accessibility falls: on the institutions, to produce research that can be leveraged by the public, or on the organizers, to do the same? Following this, do you think private corporations benefiting from climate change have a responsibility to collaborate with these two groups, and in what capacity?
Hunter Amos
Harvard GenEd 2023
Mar 28, 2023
In Wealth
I watched Prof. Alyssa Goodman's conversation with Prof. David Laibson, titled "Behavioral Economics." In this video, the pair discuss the more "human" side of economics -- behavioral economics -- in the context of its relationship with general prediction, reception and rationality, climate change, and more. Of particular interest to me was the section titled "What can machine learning help us predict?", in which Alyssa presents a 21st-century version (shown below) of the Padua Rainbow, itself a concept explaining how predictions are formed from observations. Using Google Maps as an example, she states that machine learning models, in the context of creating predictions, have eliminated three steps within the rainbow: (rule) making claims about how phenomena behave, (theory) proposing a set of relationships that make claims about the cause of certain phenomena, and (explanation) answering the questions 'Why?' and 'How?'. While we had already discussed this in class, the graphic helped me hook onto the implications of machine learning much better and understand why the psychology of how machines learn is becoming ever more important to fine-tuning what 21st-century predictions mean.

If I had the chance to go back, I would ask follow-up questions about David's reasoning as to why he believes that both Padua Rainbows can exist concurrently in intellectual environments, in his words "working in parallel directions, sometimes supporting each other." For context, Alyssa first poses a question asking why we can't effectively build ML models similar to Google Maps to answer more economically focused predictions like the stock market. To Alyssa's question, David responds that we can't for two reasons: first, that the stock market aggregates so much data that it effectively acts as a random walk, and second, that we don't have enough big data to effectively train a model. He follows by stating that though ML will begin to take a role in prediction in the 21st century, he doesn't think it will ever 'crowd out' the historical approach we see represented in the 20th-century Padua Rainbow. He is careful to note that more and more human decision making is based on machine learning.

I found this point of particular interest given the rise of generative ML models like ChatGPT, which I argue are at odds with the traditional prediction/decision-making process. In practice, we are beginning to see cases arise where -- in the specific context he gives of the intellectual environment -- ML models are used as a tool to replace decision making rather than support it as a basis. In particular, I would ask whether ML models as a replacement for decision making would change his view on whether they are supportive of human-based predictions in educational environments. I would also be interested in knowing whether reliance on machine learning changes how we view irrational decision making, given that machine learning models, especially in educational environments, are themselves a product of irrational human decision making.

Other references: Paper explaining how generative language models are built and a few cases on outsourced execution; Article hypothesising how genAI/ML models will impact human behaviour
[Image: Behavioural Economics, Machine Learning, and Education]
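As a quick illustration of David's random-walk point (my own sketch in Python, not anything from the video): if prices really behave like a random walk, then past moves carry essentially no signal for a model to learn from. The series below is simulated, not real market data.

```python
# A minimal sketch of why a random-walk market is hard to predict:
# simulate i.i.d. up/down moves and check whether yesterday's move
# says anything about today's. All data here is simulated.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.choice([-1.0, 1.0], size=10_000)  # hypothetical daily moves
prices = np.cumsum(returns)                     # the resulting "price" path

# Simplest possible "prediction": correlation between consecutive moves.
# For a random walk this should be close to zero, i.e., no usable signal.
corr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"lag-1 autocorrelation of returns: {corr:.3f}")
```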
