I think what I will remember most from the interview is Shneiderman's discussion of AI ethics. It is very interesting to hear and think about where AI should be allowed to be used and what level of effectiveness it should be required to meet. For example, driverless cars should definitely not be allowed on the road if they crash even 1% of the time. I expect I will think about the ethics of AI very frequently, especially as it becomes more widespread and as I decide whether it is accurate enough for my own personal use.
I believe that over-reliance on AI, and overlooking its potential failures, will have a large impact on my own life and on society's future. Shneiderman discusses the danger of mindlessly accepting AI as accurate and foolproof. It is very easy to see AI as a perfectly reliable resource that has more access to information than we do and therefore knows more. As AI continues to improve, people will likely rely on it much more. This could become a problem for me and for others if we trust AI to accomplish an important task it is incapable of doing properly. For example, people should not ride in driverless vehicles until we know they are at least as safe as cars today. It is essential that we understand AI's limits and strengths so that we can use it well.