Forum Posts

Pedro Duarte Moreira
Harvard GenEd 2021
Apr 27, 2021
In The Future of the Future
When watching the interview with Stuart Firestein, I would have liked to ask him what he believes the future of and with brain-machine interfaces will be. It is something that really scares me because of how easily we could screw up in the process and face horrible repercussions. Some things that come to mind are microchip cybersecurity (imagine having your body hacked); how having information fed directly to our brains could undermine human agency, since we would be more integrated into the "algorithm" and perhaps have even less choice in what we consume; and having something in your brain collecting information about you without your knowledge (data mining). These things already happen today with conventional computers and their interface with humans and human information, so it is reasonable to assume they could also happen with implanted microchips. So, back to my question: I would have liked to ask him what he believes the potential of brain-machine interfaces is, and, supposing a computer could be implanted directly in the brain, how we could go about making sure the worst doesn't happen.
Pedro Duarte Moreira
Harvard GenEd 2021
Apr 27, 2021
In The Future of the Future
When watching the interview with Stuart Firestein, the thing that surprised me the most was when he said that throughout civilized human history, the most advanced technology of the time has always been compared to the brain. I find that absolutely fascinating. Beyond the fact that the observation is extremely interesting by itself, because of how "obvious" it feels once you hear it for the first time and because of its elegant simplicity, I think it also highlights how self-important we are as a species. First, there is the fact that we use ourselves as a benchmark for innovation, while there are other, perhaps more accurate, benchmarks out in nature (one of the main principles behind biomimicry). Aside from that, there is also the idea that those two things are even comparable, which grossly undervalues how complex our brain really is and overvalues how complex our man-made systems are. It is laughable that we ever compared our brain to clocks or hydraulics, but our descendants might be the ones laughing at us in the future for putting the "neural" in neural networks.
Pedro Duarte Moreira
Harvard GenEd 2021
Apr 22, 2021
In Thoughts from Learners
When watching Meade and Murphy's interview, one of the things that stood out to me was Meade saying that the math we use might simply be "outdated," or have fulfilled most of its potential, meaning we need entirely new mathematical tools to properly model and understand the systems we are dealing with. What is even more worrying, according to him, is that theoretical mathematicians are no longer as far ahead of experimental science as they used to be. Now, in great part because of machine learning, we get a "result" to some problem without fully understanding the mathematics that goes into producing that result. I hope we are smart enough to figure something out, and not get stuck with the same problems forever.
Pedro Duarte Moreira
Harvard GenEd 2021
Apr 22, 2021
In Earth
When watching Meade and Murphy's interview, one of the things that stood out to me was Meade saying that we do not have adequate algorithms, or essentially rules, to predict earthquakes accurately. I would have wanted to ask him whether he believes that is because our current mathematical tools (namely differential equations) don't satisfy the properties of the systems we are studying, because there is simply not enough pre-earthquake data yet to properly draw conclusions from, or a combination of both. Building on that question, I would have wanted to ask him how we could hope to solve that problem, and what concrete steps forward the scientific community (in general, or specifically the portion that studies earthquakes) should take.
Pedro Duarte Moreira
Harvard GenEd 2021
Apr 13, 2021
In Thoughts from Learners
During Avi Loeb’s interview, he mentions that he believes scientists should be more transparent about their findings, making information public before it is fully confirmed, with the obvious disclaimer that it is not a consolidated finding yet. I would have liked to ask him how transparent scientists should be when addressing the public – not in an ideal scenario, but in our current world. Should scientists say that almost all models are wrong, and that we can never be sure of anything, due to the epistemological nature of science? I believe most educated people will recognize that even if the models are not 100% right, the rigor required by scientific research makes those findings or principles more correct than common sense or some random thing they happen to believe in. The problem would arise with “deniers” of all kinds, who would likely think that if science admits it is “inherently” wrong, then they are as correct as the scientists, which is just plain wrong. It’s an obvious fallacy, but that doesn’t mean people won’t think like that, and it doesn’t mean that the actions stemming from that worldview aren’t harmful. So, I would have wanted to ask him how the scientific community should tackle the issue of transparency, so as not to decrease the credibility of science in the eyes of the general public.
Pedro Duarte Moreira
Harvard GenEd 2021
Apr 13, 2021
In Thoughts from Learners
When listening to Avi Loeb’s interview, I thought his idea that blue-skies research is much more powerful than practical research was very interesting. It’s something I’ve heard before, but I hadn’t really thought about it in a long time, as I am dedicating myself to the more practical side of things (engineering). I think it makes a lot of sense if we look at it over a longer timeframe, that is, at the returns that a certain line of research will give us from now to an indefinite future. While over the next 5, 10, or even 20 years most of the changes we see will come from practical research, blue-skies research will drive much more powerful changes over a longer horizon, say 50 years in the future. For this comparison, I am considering research going on right now and its impact – Tesla’s self-driving cars versus research at the LHC, for example. It makes even more sense if we question the assumption that everything that applied in the past will also apply in the future. Practical research generally tries to use the scientific principles we already have to solve a certain problem, while blue-skies research is more focused on finding new principles or confirming the ones we already have. Thus, it is blue-skies research that gives us the huge paradigm shifts that let us develop GPS technology, for example. It’s just a shame that most blue-skies researchers likely won’t see the full repercussions of their findings during their lifetime.
Pedro Duarte Moreira
Harvard GenEd 2021
Apr 06, 2021
In Wealth
In David Laibson's interview, he mentions the idea of Occam's razor in assessing how likely a theory or explanation is to be correct. I thought it was interesting because our models have a tendency to get progressively more complicated over time, as we introduce more and more subtleties into them. That got me wondering: is Occam's razor really a valid way to "interpret" the world, or is it just a tool to keep us from tripping over ourselves? Many times in "nature," simpler != true, but it is a fact that the more convoluted a theory or explanation is, the less likely it is that we can adequately understand and use it. I don't know, just something I thought about while watching the lecture.
Pedro Duarte Moreira
Harvard GenEd 2021
Apr 06, 2021
In Wealth
When I watched David Laibson's interview, there were two main questions in the back of my head that I wish I could have asked him. First, whether behavioral economic models introduce any layer of complexity beyond a traditional economic model. This could be through the addition of inputs, or by making the models nonlinear instead of linear (these are only examples, as I have little to no idea how any models beyond supply-demand curves work), or any other modelling change that makes the model harder to use. Then, supposing there are indeed some drawbacks to using a behavioral economic model, when does it make sense to use one, and when does it not? If we are modelling climate change and want an economic input, which might have some feedback with both the climate itself and the output of our model (the prediction), would it be better to use a simpler, traditional model and account for the uncertainty, or to go "all out" with a behavioral model, and likely have less uncertainty but more complexity? Of course, that will depend on what the model is being used for (as whether a model is adequate depends on its objective), but is there some clear "transition" point? If there are economics concentrators who could answer my questions (at least at a conjectural level), or who think they do not make any sense, please say so.
Pedro Duarte Moreira
Harvard GenEd 2021
Apr 01, 2021
In Artificial Intelligence
If offered the opportunity, I would have loved to ask Shneiderman how our (yours, actually) education system would have to adapt to accommodate living in a world with AI. Many schools already have computer labs and teach some form of data visualization, basic statistics, and how not to be fooled by propaganda (that is part of the Brazilian curriculum, at least in theory). However, the ethical and technical "dilemmas" or concerns around AI seem a little different. There is the question of privacy, of accountability, of how powerful these systems really are, of whether they could substitute for humans, and of how they really work (beyond the level of a magic black box that can learn better than humans, which is very wrong), along with many other misconceptions that appear in the media, on social media, or in casual conversation. So, I would like to ask him how he believes the education system could do a better job of creating people who are able to navigate a world where AI is a common tool.
Pedro Duarte Moreira
Harvard GenEd 2021
Apr 01, 2021
In Artificial Intelligence
In Ben Shneiderman's interview, two things really piqued my interest: the semantics of the discussion, and the question of accountability and responsibility. I will focus on the former, but I want to state that I also thought the idea of AI accountability was incredibly interesting and relevant, as it is a complicated, subtle, but deeply impactful question. On to the linguistics and semantics themselves: I thought it was a really interesting thing to consider, as words relating to AI have simply lost their meaning, as generally happens when there is buzz around them. "Synergize," "life coach," "empowerment," and all sorts of other buzzwords have lost their meaning after being banalized, and the popularization of AI has had the same effect. Computers and algorithms becoming "partners" rather than tools, and machines "learning" instead of predicting, are misrepresentations that inflate the role and power of AI. Shneiderman probably has this attention to language both as a result of his humanistic family environment and of his work in "risk prevention," a field where language can be heavily judicialized and semantics are a very relevant aspect. I already try to pay attention to my choice of words (mainly the technical ones) when I am writing or speaking, but from now on I will pay even more attention, and really think about what the words I am using mean colloquially, what they mean in a rigorous sense, and whether they properly reflect the message I am trying to convey.
Pedro Duarte Moreira
Harvard GenEd 2021
Mar 30, 2021
In Earth
I wish I could ask all three of them this question. One point that was never fully addressed in any of the interviews was how corruption and overall government inefficiency (mostly in the administrative sector, at all levels) affect efforts to slow or mitigate climate change. This question is extremely relevant to me because while we heard a lot of success stories in the interviews, we did not really hear about the places where people are not doing enough (at least not in depth); and Brazil is a country known (at least by Brazilians themselves) for its immense ability to not do things properly. So my two-part-ish question would go something along these lines: are there any success stories coming from places with historically bad public policy, whether because of corruption, inefficiency, or incompetence from policymakers? And if there are, what seemed to be the trigger? The overall population, specialists, some great politician, the private sector, the "third sector" or "voluntary sector" (NGOs or other non-profits), or some combination of these? I would likely ask this question of Dr. Dan Kammen, as from his interview it seemed that he has a lot of experience examining other places' responses or solutions to climate change. Here is the link.
Pedro Duarte Moreira
Harvard GenEd 2021
Mar 30, 2021
In Earth
One of the most interesting bits of information presented to us was that scientists seem to underestimate uncertainty by around 50%, as mentioned in the Dan Kammen interview. This caught me a little off guard and at the same time made total sense. Essentially, the scientific method's rigor is what makes it our prime framework for obtaining or refining knowledge. Because of that, we could expect either that scientists are their own worst critics and would therefore overestimate the sources of error or uncertainty (by being harsh in evaluating their own models, data collection, or simulation computing power), or that they themselves would fall under the "myth of scientism," intensified by the emotional attachment they have to their hypotheses and data. It seems that the latter is true, with the caveat that the inaccuracy in uncertainty estimation among scientists likely falls along a curve, with some overestimating and some underestimating uncertainty, the latter being the overall trend. I also believe there might be something to be said about scientific journal publishers demanding statistically significant results, and how underestimating uncertainty might factor into that. Either way, very interesting data, and something I should think about myself whenever I am designing something. Here is the link.