
Forum Posts

Grayson Kemplin
Harvard GenEd 2023
Apr 16, 2023
In Wealth
Prof. Goodman's interview with Prof. Laibson (linked here) was both a very interesting and an odd experience, considering I am taking both of their classes simultaneously. It is especially remarkable to see concepts from two distinct fields/classes interlinking, and to watch material I've learned about in very different contexts come to hint at some larger intellectual umbrella. One point that I found interesting from the interview itself was an idea that I think was encapsulated both by Prof. Laibson's discussion of the fatter bell curve of climate change and by Prof. Goodman's suggestion that economics is a field with a lot of "air resistance" (in that the forces that cause error in economics are a lot more prevalent): the larger question of how to make sufficient progress and build useful models in environments with lots of error, and especially of how to communicate that work to the public. It's not an issue I've recognized so distinctly in economics in the past, and it makes me wonder why economics is treated as highly as it is by the public when it carries the same kind of error as weather or climate science, fields that seem to be constantly lambasted for their errors.

One topic that I would have explored further within the interview is the parallel drawn between Aristotelean simplification and rational choice theory. Although I largely agree with the overall idea, I think that some of the distinctions between them serve important roles in understanding how our thinking about predictive models has developed. As much as I recognize rational choice theory's flaws, I will still come to its defense somewhat and push back on the idea that it has a track record as bad as Aristotle's persistent misinformation on everything except basic biology. It still does a surprisingly good job at explaining [if simplistically] how aggregate decisions arise from individuals, why cooperation can be born out of highly competitive systems, and what making a decision is on a basic level. It's not perfect ground, but I think it's still compatible with behavioral theories in fields like behavioral game theory in ways that are productive, whereas something like Aristotelean physics is completely outmoded by modern approaches. This raises the question: can we develop more nuanced classifications for "wrong" models that pick up on distinctions like these while still drawing out the fundamental and valuable points made by the parallel?
Grayson Kemplin
Harvard GenEd 2023
Apr 10, 2023
In The Future of the Future
After watching the Artificial Intelligence interview with Prof. Shneiderman (and watching some of his lecture at the Radcliffe Institute), one thing I was struck by was the attention to the importance of visual presentation methods for the success of various kinds of technology. Although it seems somewhat obvious in retrospect, I never quite understood the extent to which computers and smartphones present massive amounts of information through simple features of their GUI. This was even more apparent to me after listening to Prof. Shneiderman's accomplishments during the Radcliffe talk; it amazes me how much planning, study, and design goes into features like blue hyperlinks and touchscreen keyboards. This poses a unique challenge to AI designers, as it is not clear how best to present AI visually, since AI currently lacks self-explanatory power.

One part of the talk that made me extremely excited, and that I wish had lasted longer, was Prof. Shneiderman's reference to Technics and Civilization by Lewis Mumford. I binge-read this book the summer before my freshman year, and I put it in a very small canon of works I consider quintessential masterpieces. Although I find myself interested in economics and attempt to approach issues like technological progress under a strictly social-scientific framework, Mumford forces us to realize on a fundamental level that our interactions with technology are determined by the sociocultural dynamics and myths we adopt. Whilst I think Prof. Shneiderman's use of the book to highlight the obstacle of animism was effective, I believe there is much more in it that could contribute to the discussion of AI. The reason language is so important [as Prof. Shneiderman carefully highlights] is the need to manage (and in some cases combat) the myths that form around technology. For instance, I've found this point particularly important when discussing the labor-market doomsaying claim that automation and AI will displace the majority of human work. We as a species have been making variations of this exact claim, like clockwork, at the advent of every new technology, and on an empirical level the probability that AI will displace a majority of the labor market is so low we may as well treat it as zero. And yet many [intelligent] people continue to buy into the narrative. Why? It's a complicated question, but if Mumford is to be believed, it is a narrative that is subconsciously derived from the framework our socioeconomic institutions create. Perhaps it is a manifestation of the fear of displacement that the managerial class in outmoded sectors will face, one that is then converted into a language more relatable to laborers in general? In any case, one could spend a long time discussing Mumford's work, and I would have enjoyed more references to it.
Grayson Kemplin
Harvard GenEd 2023
Mar 26, 2023
In Earth
One of the more intriguing parts of Tim Palmer's interview (audio linked here, transcript here) was his caveat on using a more adaptive mesh approach to modeling weather patterns. Much of the first 30 minutes of the interview discusses the issue of modeling physical behavior using grids, where grids made of smaller squares (higher resolution) can capture more detail but are increasingly difficult to model and compute [more information here]. Prof. Goodman references a technique called adaptive mesh refinement, which essentially allows for more resolution in important areas at the cost of less resolution in other areas (explained in more detail in the prior source), and speculates on whether the technique would be helpful for modeling regions like the Bay Area, which experience large fluctuations in weather conditions. However, Palmer pushes back on this notion by asking where the additional detail would really go: would it be on the Bay Area itself, or in a larger region outside the Bay Area that lets one forecast further out in time? The point there seems to be essentially that every grid square is somewhat crucial for accurate meteorological predictions, which makes a tool that is crucial in other fields largely unhelpful in weather modeling.

If I had been part of the interviewing process, the one point I would perhaps have continued is the discussion of Palmer's idea of the climatic Turing Test, which assesses models by whether or not their output can "pass" as real-world data under the inspection of a trained expert [the study that references this is here]. In the case of weather, models can do so handily; in the case of climate, we are still quite far. When listening to that part of the conversation, I was struck by how it mirrored in many ways the historical transformation of the overall concept of prediction, which started out rooted in a humanistic approach but slowly developed towards the more distanced approach we know today. In other ways, it represents the still-present rationalist/empiricist [or justificationist/critical rationalist] conflicts within our approach to modern science, as well as the lingering question of how predictive machine models can contrast with (or complement) an explanatory theory.
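To make the adaptive-mesh-refinement idea above a bit more concrete, here is a minimal sketch, not anything from the interview or from Palmer's actual models: a coarse grid over a unit domain is recursively subdivided only where cells intersect a hypothetical region of interest standing in for the Bay Area, while the rest of the domain stays coarse. The names refine and near_bay_area, the box coordinates, and the refinement depth are all invented for illustration.

```python
# A toy quadtree version of adaptive mesh refinement: refine only the cells
# that touch a (hypothetical) region of interest, leave the rest coarse.

def refine(cell, interesting, max_depth, depth=0):
    """Return the leaf cells of an adaptively refined quadtree; cell = (x, y, size)."""
    x, y, size = cell
    if depth == max_depth or not interesting(x, y, size):
        return [cell]                                   # leave this cell coarse
    half = size / 2
    children = [(x, y, half), (x + half, y, half),
                (x, y + half, half), (x + half, y + half, half)]
    leaves = []
    for child in children:                              # subdivide and recurse
        leaves.extend(refine(child, interesting, max_depth, depth + 1))
    return leaves

# Hypothetical "region of interest": a small box in the lower-left corner of a unit domain.
def near_bay_area(x, y, size):
    return x < 0.25 and y < 0.25                        # does the cell touch the box?

cells = refine((0.0, 0.0, 1.0), near_bay_area, max_depth=4)
print(f"{len(cells)} adaptive cells vs. {4 ** 4} cells on a uniformly fine grid")
```

Running this gives a handful of cells instead of the 256 a uniformly fine grid would need, which is precisely the trade-off Palmer questions: the saved resolution has to come from somewhere, and in weather modeling there may be no region whose detail can safely be sacrificed.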
