In his interview, Ben Shneiderman emphasizes the importance of viewing machines as tools for humans, rather than partners. He contends that machines are meant to empower humans, not replace them. Moreover, he argues that if you don’t understand what a complicated machine is doing, you should make it stop. However, I’m curious how he would respond to the following question: should the machine be stopped even if it is working to improve people’s lives (i.e., maximizing utility in the sense of social welfare)? Do the ends justify the means? While we may understand the machine as it continues to run, this is not guaranteed, and the machine may eventually transcend human intelligence. A natural follow-up question is: even if we continue stopping complicated machines when we don’t understand them, is “superintelligence” inevitable? And if so, how can we prepare for it? The view of machines as mere tools does not seem to promote a respectful relationship between humans and superintelligent machines.
In his book Superintelligence, philosopher Nick Bostrom discusses ways to reduce the threat of AI that is malicious to humans. He proposes two categories of approaches: capability control (limiting what AI can do) and motivation selection (encouraging AI to prioritize human interests). Stopping machines that become too complicated or powerful to remain tools falls under capability control, but I wonder whether, despite our best efforts, some breakthrough might still lead to budding superintelligence. After all, I suspect that stopping complicated machines too strictly would greatly hinder technological progress, which society does not seem willing to sacrifice. Motivation selection thus seems the more promising approach. Perhaps any view of AI must take both approaches into account, treating machines as tools while acknowledging the (potentially inevitable) possibility that they will one day become our partners and peers.