One piece of this interview that I will remember a year from now is the way in which we use language to describe AI. Words like "partnership," "learn," and "know" are damaging in that they paint machines as human-like.
The idea that AI is a tool rather than something human-like is vitally important for society's future. Shneiderman argues that animism is an obstacle that delays the success of technologies, and cites a variety of past examples in which technologies (such as the bulldozer) were perceived as human-like in their features. Looking back, we certainly perceive such technologies merely as tools, yet I would argue that animism is a key aspect of the formation of new technologies, as it is natural for us to draw inspiration from our own bodies. I do agree, though, that we must stop perceiving the predictions of AI as magical or creative if we are to properly understand its effects and capabilities in the future. Viewing AI as a tool created by humans also places more (and very necessary) responsibility upon those who develop it.