Reading the articles made me less optimistic and more skeptical about ChatGPT. They were very insightful, and I now have a better understanding of how AI chatbots operate and how they're created.

Interestingly, I came across a post a week ago on one of the college's pages about ChatGPT. Steven Pinker, a professor of psychology at Harvard, said that "we should worry about ChatGPT, but we should not panic about ChatGPT, because it's an AI model that's been trained with half a trillion words of text, and it's been adapted such that it takes questions and in return takes its best guess at stringing words together to make a plausible answer. We could worry about disinformation produced on a mass scale, but we will have to develop defenses and skepticism. We also have to worry about taking the output of the chatbot too seriously, because it doesn't know anything; it doesn't have a factual database, and it doesn't have any goals toward telling the truth. It just has a goal of continuing the conversation, and it strings words together often without any regard to whether they correspond to anything in the world, so it can generate a lot of flapdoodle quickly. If people rely on it as an authoritative source on the factual state of the world, they will often go wrong."

I was most surprised by the fact that these AI models can develop patterns and exhibit behaviors that were never intentionally built into them, which makes me question what could come out of these unusual behaviors and patterns. Could they help us solve most of the problems we face today with limited human aid, or could they attempt to wipe out human existence?
I'm also fascinated by the biases that exist in AI models. A video I watched on YouTube covered how ChatGPT is politically biased; the study it described found that the chatbot is:
* Against the death penalty
* In favor of abortion
* In favor of a minimum wage
* In favor of regulating corporations
* In favor of legalizing marijuana
* In favor of gay marriage, immigration, sexual liberation, environmental regulations, and higher taxes on the rich
According to the article posted after the study, ChatGPT thinks "corporations are exploiting developing countries, free markets should be constrained, that the government should subsidize cultural enterprises such as museums, that those who refuse to work should be entitled to benefits, that military funding should be reduced, that abstract art is valuable, and that religion is dispensable for moral behavior."
For now, I would assume that these opinions and biases were introduced unintentionally during the chatbot's training process, but I'm worried about how they could affect future generations. Unintentional biases can develop over time through exposure and familiarity with certain things or people, so as people, and especially students, keep using these tools, could it alter how they perceive things in a negative way? Should AI chatbots like this be banned, just as Italy banned ChatGPT?