Like many others, I have been prompted by the discussions surrounding ChatGPT to look more closely at AI and the implications of its current design. In particular, after reading "What Makes AI Chatbots go Wrong?", I became interested in some of the psychology behind the different levels of trust and distrust that people have in these systems. That led me to an academic article, "Anthropomorphism in AI", which details the social consequences of the way we talk about AI in research and in everyday use. Ben Shneiderman frequently notes in his conversation that the language we use when discussing AI matters, and this article emphasizes that point even further. It's important that we avoid portraying the software as having emotions or free will, because we need to maintain a clear distinction between a human brain and an AI system. The real-world implication is recognizing that biases in AI responses originate with the developers and the data the AI is trained on, rather than being beliefs the system formed by actually living in the world.