In the interview with Ben Shneiderman, I was particularly surprised by his confident, repeated assertion that computers are just tools, no more intelligent than wooden pencils. Perhaps because of movies, or because of my own research into the advances made in machine learning, it seems to me that humans will keep pushing machines to the limit in their ability to learn and apply knowledge, and I was under the impression that machines would someday be able to grow and evolve on their own, much as human brains do. After hearing Shneiderman's interview, however, I am tempted to wonder whether this obsession with making machines ever more human-like is misguided given their possibly limited potential, and whether there would be a clear, obvious point at which the advancement of AI had hit a ceiling.
And, yes, my (AI-enhanced) car has a name. Humans just want to be friendly...
I also thought that was really interesting, and I was likewise surprised by how much he focused on that point. If I had to guess, his fixation on it is (in one way) the antithesis of our fixation on personalizing things: kids draw the sun with a face, people give their cars personalities, and ancient civilizations represented inanimate objects and natural phenomena as deities with human traits. He wants us to rest assured that we are dealing with algorithms, code, machines performing operations, and only that. Another thing he could be doing is, as you said, trying to steer us away from the goal of developing "cybernetic humans," which is probably unachievable.

Even if it were achievable, we would face a plethora of ethical, legal, and perhaps even logistical problems. If machines reached a consciousness similar to ours, would they deserve the same rights we have? How much could we legally change their code? Could we no longer "own" AI, because ownership would be a form of slavery? How would we deal with another "intelligent lifeform" inhabiting Earth alongside us (the last time that happened, it did not end well for the Neanderthals, as some theories suggest)? It would probably be better to avoid this problem altogether and leave human things for humans to do, using AI only as a tool when needed (since not every problem requires AI).