If I had conducted the interview with Ben Shneiderman, I would have delved more deeply into the discussion of AI and the pace at which new innovations are developed. I found it insightful that Shneiderman believes new innovations are developed too rapidly and stressed the importance of building in protections; however, he did not give an example of what those protections might look like, so I would have asked him to describe a few. I would also like to learn more about how innovations in different industries might accommodate different consumer preferences. Innovations in mortgage lending likely affect all consumers in roughly the same way, but people may have very different levels of risk tolerance when it comes to, for example, medical devices. Further, what might the tradeoffs look like between the convenience and accessibility that new innovations can bring and the risk that those innovations will not work as intended? Is there a way to quantify the probability of an unintended outcome actually occurring? My final question would be how protections might accommodate people who are comfortable with different levels of risk.
Here’s a link to the interview with Ben Shneiderman!
Hi Emily, I think these are great questions to ask! I'm also interested in what sort of regulations or "protections" might be put in place to slow the development of AI so that it advances at a safer, more controlled, and more understandable pace. As for when these protections might be enacted, I personally think it'll take AI doing something negative or dangerous before people take these considerations seriously and decide to act. Humans tend to respond more strongly to negative stimuli than to positive ones, so it's likely we won't make a change until AI actually starts causing problems.