I watched the Ben Shneiderman interview, recorded in 2018. You can access the video here: Shneiderman interview. Beyond offering predictions about the future of AI, Ben also explored the relationship between humans and machines, the field known as HCI (Human-Computer Interaction). Throughout the interview, he raised concerns about the overconfidence people can place in algorithms that are not trained on real-life variables, citing Google Flu Trends as an example. This was an on-point case: a project that used selected search data to try to predict flu outbreaks, but failed because of its over-reliance on correlations. The initial results were encouraging, indicating that search queries could forecast outbreaks before they were reported in hospitals. Other indicators, such as tissue sales at Safeway, also showed potential. This generated enthusiasm for the possibility of allocating public health resources more effectively. However, Google Flu Trends did not keep expanding its database as user behavior shifted. Over time, the tool's predictions became inaccurate, leading to poor resource allocation, and Google eventually took the site down.
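To make the failure mode concrete, here is a minimal, hypothetical sketch of what "over-reliance on correlations" looks like: a predictor is fit once on a period where search volume tracks illness, and then user behavior shifts (say, more searching driven by news coverage), so the old correlation quietly stops holding. None of the numbers, variable names, or modeling choices below come from Google Flu Trends; this is only an illustration of the general drift problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training period: weekly flu cases, and search-query volume that genuinely tracks them.
true_cases_train = rng.poisson(lam=100, size=52).astype(float)
queries_train = 1.5 * true_cases_train + rng.normal(0, 5, size=52)

# Fit a simple linear model (least squares) on that one period and never update it.
slope, intercept = np.polyfit(queries_train, true_cases_train, deg=1)

# Later period: user behavior shifts (e.g., media coverage inflates searching),
# so queries no longer relate to illness the same way.
true_cases_later = rng.poisson(lam=100, size=52).astype(float)
queries_later = 1.5 * true_cases_later + 80 + rng.normal(0, 5, size=52)

# The frozen model keeps extrapolating the old correlation.
pred_train = slope * queries_train + intercept
pred_later = slope * queries_later + intercept

print(f"Mean absolute error, training period: "
      f"{np.mean(np.abs(pred_train - true_cases_train)):.1f} cases/week")
print(f"Mean absolute error, after behavior shift: "
      f"{np.mean(np.abs(pred_later - true_cases_later)):.1f} cases/week")
```

Under these assumptions, the error after the shift is far larger than during training, even though nothing about the underlying disease changed; only the relationship between the proxy signal and reality did.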
This is just one example of overconfidence in algorithms. In the seven years since this interview was recorded, AI and LLMs have changed dramatically. Now, AIs can read medical exams, write thousands of lines of code, and solve complex math problems. I also believe the Google Flu case shows us more than overconfidence in our systems and in the intelligence of computers: the exponential race to develop the most capable LLMs has become the target of the decade. Google Flu Trends, as Shneiderman pointed out, failed to keep pace with the data available to it. Yet, day after day, we see new systems predicting and exploring topics in seconds, processing thousands of gigabytes of data. What will happen when we overtrust these new AIs again?