I watched the AI interview with Ben Shneiderman and the two undergraduates (https://www.labxchange.org/library/pathway/lx-pathway:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954/items/lx-pb:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954:lx_simulation:997b23d6?source=%2Flibrary%2Fclusters%2Flx-cluster%3AModernPrediction&fullscreen=true). The most gripping part of the interview, and the one I will probably still remember in a year's time, was the statistic that there were 31 robot-related deaths in the US in a single year, and that only 2 of them were ever understood. I find this chilling at a moment when Boston Dynamics is building highly capable robots with obvious potential as weapons and AI systems are becoming increasingly autonomous (see the autonomous software-engineer AI Devin, for example). It makes me think that within a year, robot-related deaths could become a genuine societal concern.
I believe several themes from the interview will have profound effects on my future and society's future. The blanket categorization of all HCI innovations as "AI" may become relevant very soon, as regulations targeting AI begin to appear. Suppose, for example, regulators rule that AI cannot be trained on copyrighted sources such as the New York Times. What exactly would they define as "AI"? I suspect very few of them are well versed in linear algebra or machine learning. AI is already having, and will continue to have, many varied effects on society, even though it is not as prevalent today as many assume; I saw a study claiming that only 7% of companies' AI initiatives paid off in 2023.