A year from now, the most important thing I will remember from this interview is that we should view AI technology not as a replacement for or mimic of human actions and abilities, but as an enhancer of them. In other words, our focus should be on developing AI applications as tools for humans rather than as imitations of human capabilities. Given the current pace of innovation in the AI space, I argue that this point is crucial to keep in mind: many applications attempt to "copy" human abilities, and this may concentrate development effort in directions that are not optimal.
I believe the discussion of the varying degrees of concern we hold for different applications of AI will be the most relevant aspect of the interview going forward. As we have seen, governments are attempting to regulate AI; however, it is important to distinguish which AI applications actually warrant regulation and guidelines. I agree with Shneiderman's argument that we need not be as concerned about AI in applications where the consequences of failure are minor, but we should be concerned where those consequences could be severe and significantly damaging. Thus, I propose that regulation follow this line of thinking and focus on applications that could pose a danger if the technology fails to perform as intended.