The most interesting thing I heard in the Professor Shneiderman interview was the discussion of hubris surrounding AI in the context of the Google Flu fiasco. It seems to be a classic example of computer algorithms being fallible in the AI world, and it fascinated me because I am so unfamiliar with anything to do with the computer science world. After a quick search, it appears that two of the major problems with Google Flu Trends were that spurious correlations in the algorithm's training data threw off the results, and that most people who search for flu-related things online have little medical training and therefore aren't very good at determining whether they have flu symptoms in the first place. That unreliable information becomes the inputs to a poorly tuned algorithm, which then generates unreliable results.

The case study highlights some of the failings of relying purely on AI for prediction. It is dangerous for people to take the results of machine learning as gospel without further verification, and the implementation of computer "fairy dust" algorithms makes it very easy to do just that. Instead of thinking of computers as magical machines, we should think of them as tools that assist humans who are ultimately in charge of the predictive process. I wonder how that balance can be struck in a way that optimizes the power of computation without bulldozing the importance of some type of human understanding of the inputs and outputs of these complex equations.
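To make the "spurious correlation" point concrete, here is a minimal toy sketch (not the actual Google Flu Trends model; all data and feature names are simulated and hypothetical). A simple linear model is trained on two search-term signals, one genuinely tied to flu activity and one that only coincidentally tracks it during the training period. The model looks accurate on the data it was trained on and then falls apart afterward.

```python
# Illustrative sketch only: simulated data, hypothetical features,
# not Google's actual methodology.
import numpy as np

rng = np.random.default_rng(0)
weeks = 200

# "True" flu activity we want to predict (a seasonal wave plus noise).
flu = 10 + 5 * np.sin(np.arange(weeks) * 2 * np.pi / 52) + rng.normal(0, 0.5, weeks)

# One search term genuinely related to flu, and one that happens to
# track flu closely during the first 100 weeks but drifts away later.
related = flu + rng.normal(0, 1.0, weeks)
spurious = np.where(np.arange(weeks) < 100,
                    flu + rng.normal(0, 0.3, weeks),   # looks highly predictive early on
                    rng.normal(10, 3.0, weeks))        # unrelated afterward

X = np.column_stack([related, spurious, np.ones(weeks)])

# Fit a linear model on the first 100 weeks only (the "past seasons").
coef, *_ = np.linalg.lstsq(X[:100], flu[:100], rcond=None)
pred = X @ coef

train_err = np.mean(np.abs(pred[:100] - flu[:100]))
future_err = np.mean(np.abs(pred[100:] - flu[100:]))
print(f"mean error, training period: {train_err:.2f}")
print(f"mean error, later period:    {future_err:.2f}")  # typically much larger
```

Because the coincidental signal fits the training period so well, the model leans on it heavily, and once that coincidence breaks down the predictions do too, which is the "garbage in, garbage out" problem in miniature.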
Agreed, @Cici Williams, we as humans need to LEARN how to shape these tools.