In the video with Ben Shneiderman, he discussed Google Flu Trends, which I had never heard of before. In 2009, Google began predicting flu outbreaks in cities around the country by analyzing individuals' search data (e.g., searches for tissues, flu symptoms, etc.). At first this seemed to be a decent early predictor of outbreaks; unfortunately, over time it produced increasingly inaccurate predictions, which in turn led public health officials to allocate resources poorly. Four years after its inception, Google shut down the Flu Trends website. With this example, Shneiderman illustrated the algorithmic hubris of those at Google (and elsewhere) who assumed that Google Flu Trends would remain a reliable long-term predictor of flu outbreaks. This was a fascinating segment of the video, and I am glad I got to learn about the failure of Google Flu Trends from Ben Shneiderman. https://www.labxchange.org/library/pathway/lx-pathway:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954/items/lx-pb:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954:lx_simulation:997b23d6?source=%2Flibrary%2Fclusters%2Flx-cluster%3AModernPrediction
In the segment discussing John Snow and his role in epidemiology, Professor Goodman remarked that some scholars do not consider Snow the father of epidemiology because his study design lacked a control. In response, Megan Murray said that she judges studies by how effective they are, illustrating her admiration for Snow. Clearly, in John Snow's case, his now-famous study was effective and did work. When I heard this response, however, a question immediately came to mind that I would want to ask Megan Murray: would she be critical of the lack of a control in John Snow's cholera study if the study had not been so effective? She seemed to take a very results-based approach in the video, and I would be interested to hear more about that.