A year from now, the thing I will remember most from this interview is the characterization of AI and other pieces of technology as tools, not partners. This distinction matters now more than ever, with language models that mimic human speech and ongoing attempts to build AI systems that mimic humans entirely. As this development continues, I believe keeping it in mind is essential to prevent any "allegiance" to these models. The question of naming becomes especially interesting when considering the possibility of AI consciousness: if we could determine that an AI is conscious, would it then be appropriate to call it a partner?
Similarly, the interview argues that we ought not to construct machines that mimic or replace humans, but instead build tools that empower people. Looking toward society's future, however, the current state of AI technology supports human replacement in many forms, not merely enhancement. I think the line between what we call "enhancement" and "replacement" may be blurry. If we grant an individual an enhancing tool that yields the productivity of ten other individuals in the same workplace, leading to the layoff of those ten workers, is that not a form of replacement? I believe the ideas of enhancement and of replacing or mimicking humans are deeply interrelated and, depending on the context of application, inseparable. For example, if we create an AI language and voice model that serves as a customer service point of contact for a company, it directly mimics a human; at the same time, it enhances the company's customer service branch. These kinds of replacements, whether framed as mimicking or enhancing, will only grow more pertinent as AI models develop.