Ben Shneiderman's explanation of the purpose of machines, categorizing them as tools rather than partners, reminded me of a story my grandmother used to tell me growing up about being taught to drive a car. As Shneiderman and my great-grandfather would agree, a car is a tool that mimics the function our legs serve for us. Where my great-grandfather differs, however, is in his famous cautionary warning to my grandmother: "this car is a killing machine." That moment between my great-grandfather and my grandmother mirrors the way we can begin to view the rapid adoption of AI in our daily lives.

Asking ChatGPT for a dinner recipe will not kill you the way a car can, but its use and its outputs still carry consequences. Getting too drawn into the allure and mimicry that AI brings to the table can make you forget that this technology is still just a tool. It is the same reason drivers who get too comfortable rounding the familiar turn near home tend to get into the most accidents. Even self-driving cars get into accidents, because they only mimic human cognition and decision-making. Beyond the car analogy, I have read that lawyers have used ChatGPT in the court of law, only to find that it "hallucinated" fake cases that could not be cited. This, again, is because it is a tool, not yet able to tap into the cognition, logic, and decision-making of a sentient human being.

Shneiderman's assessment that AI is just a tool remains correct, but the optimism shown by the two students who joined him should speak to us all. At what point does a tool become so good that it is no longer a tool at all, a machine of mimicry and consequence that ascends into its own being, lifting humanity above its daily troubles?