Forum Posts

vincentli
Harvard GenEd 2021
Apr 27, 2021
In Thoughts from Learners
In his interview, Stuart Firestein talks about his course at Columbia called Ignorance. I thought this was an interesting title because it emphasizes that we as humans don’t know much about the world: the more we learn, the more we realize there is to learn. This attitude of humility leads us to question past scientific findings and authorities in pursuit of more accurate statements about the world. It is also interesting that Firestein critiques science education in schools, which often focuses on reading and memorizing facts rather than on developing a questioning mindset and an awareness of how much we don’t know. My question for Firestein is: how do we take intrinsic human ignorance into account when quantifying uncertainty? Do humans tend to be overconfident in their estimates of uncertainty? Or, as people realize how much they don’t actually know, will they overcorrect? Perhaps a collection of crowd-sourced estimates of uncertainty could reduce the uncertainty in those estimates themselves.
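One way to explore that last idea is a quick simulation. The sketch below is a toy model (all numbers are assumptions, not data): each forecaster reports the true probability plus independent noise, and the spread of the crowd’s average estimate shrinks roughly as 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)

true_prob = 0.30      # the (unknown) probability being estimated
n_forecasters = 50    # hypothetical crowd size
n_trials = 10_000     # repeated simulations to measure spread

# Each forecaster reports the true probability plus independent noise,
# clipped so the report stays a valid probability.
estimates = np.clip(
    true_prob + rng.normal(0.0, 0.10, size=(n_trials, n_forecasters)),
    0.0, 1.0,
)

single_spread = estimates[:, 0].std()        # spread of one forecaster
crowd_spread = estimates.mean(axis=1).std()  # spread of the crowd average

print(f"single forecaster std: {single_spread:.3f}")  # ~0.10
print(f"crowd average std:     {crowd_spread:.3f}")   # ~0.10 / sqrt(50)
```

Of course, this only works if the forecasters’ errors are independent and unbiased; if everyone shares the same overconfidence, averaging will not wash it out.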
vincentli
Harvard GenEd 2021
Apr 22, 2021
In Thoughts from Learners
In the interview with statistician Susan Murphy and earth scientist Brendan Meade, I was intrigued by the difference between detecting current events and predicting future ones, along a spectrum from frequent to rare events. For the prediction of rare events such as major earthquakes, Meade uses the much larger number of small earthquakes to better understand earthquake physics and to provide more data relevant to large ones. At the other end of the spectrum, Murphy detects frequent events, such as when you are stressed; she notes, however, that detection is much easier than predicting future stress. As a result, work in mobile health is experimenting with various intervention plans to help predict and influence future health conditions such as stress.

A question I had for Meade: given modern data on the millimeter-scale movements of tectonic plates and on smaller earthquakes, how accurate and reliable are simulations of large earthquakes at predicting the location and timing of future major ones? While he mentions that a clear geodetic precursor for earthquakes has not yet been identified, can simulation help identify one?

Another question, for both Murphy and Meade: how important is the interpretability of model predictions in mobile health and in forecasting earthquakes and other natural disasters? If accuracy and fit are high but the features are not interpretable, is that sufficient, given appropriate responses and interventions? Or should people have some understanding of why the model predicts they are at risk of harm? Society seems able to shape this attitude: whether people “blindly” trust models as long as the results are good, or want some understanding of, and “control” over, interventions.
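Meade’s strategy of using many small earthquakes to constrain rare large ones is in the spirit of the empirical Gutenberg-Richter relation, log10 N(M) = a − b·M, which ties magnitude to frequency. A minimal sketch with made-up catalog counts (not real data) shows the extrapolation:

```python
import numpy as np

# Hypothetical catalog: counts of earthquakes at or above each magnitude
# observed over a 10-year window (illustrative numbers only).
magnitudes = np.array([3.0, 3.5, 4.0, 4.5, 5.0, 5.5])
counts = np.array([12000, 3800, 1150, 360, 120, 40])

# Gutenberg-Richter: log10 N(M) = a - b * M, fit by least squares.
slope, a = np.polyfit(magnitudes, np.log10(counts), 1)
b = -slope

# Extrapolate the expected number of M >= 7.0 events in the same window.
expected_m7 = 10 ** (a - b * 7.0)
print(f"fitted b-value: {b:.2f}")
print(f"expected M >= 7 events per decade: {expected_m7:.2f}")
```

Even a good rate extrapolation like this answers “how often” but not “where and when,” which is exactly the gap a geodetic precursor, if one exists, would fill.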
vincentli
Harvard GenEd 2021
Apr 13, 2021
In Space
In her interview, astronomer Jill Tarter discusses looking for extraterrestrial signals and techno-signatures without knowing exactly what to expect. As a result, astronomers must pay attention to a wide range of frequencies, from 1 to 10 GHz. In particular, they look for signals that cannot occur in nature: for example, transmitters can compress radio signals into bands too narrow to arise naturally, and optical lasers can emit at unnaturally pure frequencies. I found this concept of looking for something without knowing what it will look like interesting because there are several layers of uncertainty involved, and it is unclear how to measure each one.

A question I would ask on this point of quantifying uncertainty: can historical data help identify extraterrestrial signals? In particular, could historical data provide a range of expected “natural” signals, so that any signal deviating sufficiently from that norm would indicate a candidate extraterrestrial signal? It seems that as we learn more science and collect more observations, the bar for a signal to qualify as “extraterrestrial” rises. Indeed, Tarter mentions that an extraterrestrial explanation is often the “last resort,” invoked when science and other methods fail. UFO sightings, for example, were recorded throughout history because people had no explanation for them, yet today we may wish to debunk some of them. Even if we declared a signal extraterrestrial today based on our current understanding of science, might it too be debunked as we learn more?
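The “deviates a certain amount from the expected norm” idea can be phrased as simple anomaly detection. Here is a minimal sketch, with an entirely hypothetical distribution of natural signal bandwidths, that flags any signal far narrower than nature is assumed to produce:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historical data: bandwidths (Hz) of known natural radio
# sources, which are broad compared with engineered transmitters.
natural_bandwidths = rng.lognormal(mean=8.0, sigma=1.0, size=5000)

log_bw = np.log(natural_bandwidths)
mu, sigma = log_bw.mean(), log_bw.std()

def is_candidate_signal(bandwidth_hz, n_sigma=5.0):
    """Flag a signal whose bandwidth is implausibly narrow for nature."""
    z = (np.log(bandwidth_hz) - mu) / sigma
    return z < -n_sigma  # far below the natural distribution

print(is_candidate_signal(1.0))     # ~1 Hz: suspiciously narrow -> True
print(is_candidate_signal(3000.0))  # typical natural width -> False
```

The catch, echoing the post, is that the “natural” distribution itself is uncertain and keeps shifting as we learn more, so yesterday’s anomaly may turn out to be tomorrow’s known astrophysics.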
vincentli
Harvard GenEd 2021
Apr 06, 2021
In Thoughts from Learners
In his interview, behavioral economist David Laibson emphasizes the importance of understanding estimates of uncertainty not just as point estimates but as distributions over possibilities. In particular, these distributions may have tail events (i.e., low-probability events) that occur more often than the models expect. My question: if the tails of the distribution are fatter (i.e., extreme events are more likely) than expected, why aren’t the models corrected so that the tails become more accurate? For example, tail events in the stock market, such as crashes or bubbles, may be more likely than a symmetric Normal distribution of events would suggest. Or is the notion of a tail event an example of bias or of small sample size? Because we observe that some very low-probability event occurred, we may think it should have a higher probability, but our sample may be too small to compare the empirically observed frequentist probability against the theoretical probability given by the model.
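To make the fat-tail point concrete, compare the probability of a 5-sigma drop under a Normal model with a fat-tailed Student-t model (the degrees of freedom here are chosen purely for illustration):

```python
from scipy import stats

# Probability of a daily return worse than -5 standard deviations
# under a Normal model versus a fat-tailed Student-t (df = 3).
p_normal = stats.norm.cdf(-5.0)
p_t = stats.t.cdf(-5.0, df=3)

print(f"Normal tail probability:     {p_normal:.2e}")  # ~2.9e-07
print(f"Student-t (df=3) tail prob.: {p_t:.2e}")       # ~7.7e-03
```

The t model puts roughly four orders of magnitude more probability on the same extreme move. The small-sample point cuts here too: with only a few thousand daily observations, an event of probability ~1e-7 cannot be estimated empirically at all, so choosing between these tails is partly a modeling judgment rather than a data-driven correction.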
vincentli
Harvard GenEd 2021
Apr 05, 2021
In Thoughts from Learners
In his interview, psychologist Dan Gilbert describes how humans do not have a great understanding of uncertainty: they usually cannot differentiate between a 2% and a 5% chance of an event happening. This is similar to the observation that people cannot easily comprehend large numbers, such as the difference between 1 billion and 1 trillion. As a result, appealing to statistics of uncertainty may not be the best motivator for human action (e.g., toward counteracting climate change). Instead, Gilbert states that the best way to convince the general public to take action is to tell them directly what they should do. To better understand the incentives behind such direct commands, we can consider the connection between prediction and happiness. Gilbert claims that even if you could perfectly predict the future, you would not always know how much you would like it. This suggests that rather than sharing statistics of uncertainty, understanding what makes people happy, and convincing them that certain actions will maximize their happiness, can help motivate concrete action on problems such as climate change.
vincentli
Harvard GenEd 2021
Apr 01, 2021
In Artificial Intelligence
In his interview, Ben Shneiderman emphasizes the importance of viewing machines as tools for humans rather than as partners. He contends that machines are meant to empower humans, not replace them. Moreover, he suggests that if you don’t understand what a complicated machine is doing, you should make it stop. I’m curious, though, how he would respond to the following question: should the machine be stopped even if it is working to improve people’s lives (i.e., maximizing utility in the sense of social welfare)? Do the ends justify the means? While we may understand the machine as it continues to run, this is not guaranteed, and the machine may transcend human intelligence.

A natural follow-up question: even if we keep stopping complicated machines when we don’t understand them, is “superintelligence” inevitable? And if so, how should we prepare for it? Viewing machines as tools does not seem to promote a respectful relationship between humans and superintelligent machines. In his book Superintelligence, philosopher Nick Bostrom discusses ways to reduce the threat of AI that is malicious to humans. He proposes two categories of approaches: capability control (limiting what AI can do) and motivation selection (encouraging AI to put human interests first). Stopping machines that become too complicated or powerful to be tools falls under capability control, but I wonder whether, despite our best efforts, some breakthrough might still lead to budding superintelligence. After all, being too strict about stopping complicated machines would greatly hinder technological progress, which society does not seem willing to sacrifice. Motivation selection thus seems the more promising approach. Perhaps any view of AI must combine both, treating machines as tools while allowing for the (potentially inevitable) possibility that they will one day become our partners and peers.
vincentli
Harvard GenEd 2021
Mar 30, 2021
In Earth
I would want to follow up the interview with Dan Kammen with this question: armed with interdisciplinary knowledge about climate change, what tangible steps can people take right now to help address it? Kammen mentions that reducing your carbon footprint can involve buying clean energy, driving less, and changing your diet, but for many people these remain far-removed, abstract goals. I would have asked Kammen to elaborate on these points and offer concrete action steps, such as “take one no-driving day per week” or “contact your local utility to learn where your energy comes from and/or switch to a more renewable provider.” This would give people a direct call to action to make change right now.
vincentli
Harvard GenEd 2021
Mar 30, 2021
In Earth
In her interview, Gina McCarthy offers a refreshing perspective on the balance between truth and action in addressing societal challenges such as climate change. She states that it is not ideal for the public to mistakenly attribute extreme weather events solely to climate change, even if that viewpoint galvanizes them to act. It is important that the public understands the science and statistics of climate change, for example, the fact that climate change makes extreme weather events more common, not that such events could never occur without it. This would reduce oversimplified, alarmist worst-case views and deter politicians from misconstruing scientific reports to fit their own agendas. Moreover, a better public understanding of statistics and science would foster greater trust in, and appreciation of, scientific accuracy in policymaking, leading to more rational decisions.
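McCarthy’s distinction, more common rather than newly possible, can be illustrated with a toy shifted-distribution calculation (all numbers below are made up):

```python
from scipy import stats

# Toy model: summer high temperatures as a Normal distribution (deg C).
baseline = stats.norm(loc=30.0, scale=3.0)  # pre-warming climate
shifted = stats.norm(loc=31.5, scale=3.0)   # climate with +1.5 C mean shift

threshold = 38.0  # "extreme heat" cutoff

p_before = baseline.sf(threshold)  # sf(x) = P(T > x), exceedance probability
p_after = shifted.sf(threshold)

print(f"P(extreme) before: {p_before:.4f}")
print(f"P(extreme) after:  {p_after:.4f}")
print(f"frequency multiplier: {p_after / p_before:.1f}x")
```

Extreme heat is possible in both climates; what the mean shift changes is how often the threshold is crossed, here by roughly a factor of four.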