
Forum Comments

Response to Genetics, Global Health, and Epidemiology with Dr. Immaculata De Vivo and Dr. Peter Kraft
In Health
Desmond Cudjoe
Harvard GenEd 2023
Apr 13, 2023
Although I watched the interview with Susan Murphy and Brandon Meade, I thought it would be interesting to watch your video and see what you did, since we were put in the same group. I enjoyed reading your reflection because of the way you engage with the topic of uncertainty in biomedical research and reflect on its implications for precision medicine. You did an excellent job of summarizing the key points made by Dr. De Vivo and Dr. Kraft and offering thoughtful insights. Your question about patient privacy and the security of patient data, a complex issue that requires careful consideration in medical research, is a very interesting one, and I'm curious to know a possible answer to it.

In the interview with Susan Murphy and Brandon Meade, Susan talks about their studies to predict stress in individuals, especially those who are trying to break a habit. To determine when stress would set in, they give participants wearables as detection devices. But since privacy is always an issue in these kinds of studies, they tend not to track the locations of these individuals; they would only know whether the person is at home or at work, without knowing the exact location of the workplace or the house of the participant. She also mentions that some of the data they receive from individuals is intentionally taken off their phones immediately in order to protect them. So I think patient privacy is indeed a very important factor in research studies, but it can often lead to inaccuracies, as you mentioned earlier.
Let's talk about AI.
In The Future of the Future
Desmond Cudjoe
Harvard GenEd 2023
Apr 04, 2023
After reading the articles, I became less optimistic and more skeptical about ChatGPT. These articles were very insightful, and I now have a better understanding of how AI chatbots operate and how they're created. Interestingly, I came across a post a week ago on one of the college's pages, and it was about ChatGPT. Steve Pinker, a professor of psychology at Harvard, said that "we should worry about ChatGPT, but we should not panic about ChatGPT, because it's an AI model that's been trained with half a trillion words of text, and it's been adapted such that it takes questions and in return takes its best guess at stringing words together to make a plausible answer. We could worry about disinformation produced on a mass scale, but we will have to develop defenses and skepticism. We also have to worry about taking the output of the chatbot too seriously, because it doesn't know anything; it doesn't have a factual database, and it doesn't have any goals toward telling the truth. It just has a goal of continuing the conversation, and it strings words together, often without any regard to whether they correspond to anything in the world, so it can generate a lot of flapdoodle quickly. If people rely on it as an authoritative source on the factual state of the world, they will often go wrong." I was most surprised by the fact that these AI models can develop patterns and exhibit features that were not intentionally embedded as part of their design, so it makes me question what could come out of these unusual behaviors and patterns. Could it help us solve most of the problems we face today with limited human aid, or could it attempt to wipe out human existence? I'm also fascinated by the biases existing in AI models.
A video I watched on YouTube covered how ChatGPT is politically biased. The study found that the chatbot is:

* Against the death penalty
* Pro-abortion
* For a minimum wage
* For regulation of corporations
* For legalization of marijuana
* Pro gay marriage, immigration, sexual liberation, and environmental regulations, and for higher taxes on the rich

According to the article posted after the study, ChatGPT thinks "corporations are exploiting developing countries, free markets should be constrained, that the government should subsidize cultural enterprises such as museums, that those who refuse to work should be entitled to benefits, military funding should be reduced, that abstract art is valuable, and that religion is dispensable for moral behavior". For now, I would assume that these opinions and biases were unintentionally introduced during the training process of the AI chatbot, but I'm worried about how they can affect generations. Unintentional biases can develop over time as a result of exposure to and familiarity with certain things or people, so as people, and especially students, keep using these kinds of software, could it alter how they perceive things in a negative way? Should AI chatbots like this be banned, just as Italy banned ChatGPT?
Gina McCarthy Climate Change Interview
In Earth
Desmond Cudjoe
Harvard GenEd 2023