Ben Shneiderman uses the Google Flu example to describe his fear that humans will overestimate AI's capabilities and overlook its flaws, especially when it gives us incorrect data that we end up believing. This risk is extremely relevant today. Because AI is improving so rapidly, we often get excited and over-trust it. For example, we frequently put our full faith in ChatGPT even though it sometimes gives us incorrect or entirely fabricated information; people have reported that it invents sources and mixes up basic facts. From a day-to-day perspective, this may not seem like a big deal. However, the more we trust AI, the more we put it in charge of consequential decisions. We may, for instance, trust AI-generated data in medical procedures or even implement autonomous decision-making systems in weapons. In these situations, one mistake can cost a life.

Shneiderman's worries are highly relevant and often discussed today. Many agree that we must be cautious about what we trust AI to do and keep human checkers at every step. Still, there is no perfect solution to this problem. As AI innovation speeds along, we must remember that nothing, not even AI, is perfect.
Watch the full interview here: https://www.labxchange.org/library/pathway/lx-pathway:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954/items/lx-pb:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954:lx_simulation:997b23d6?source=/library/clusters/lx-cluster:ModernPrediction