In his interview, Ben Shneiderman discussed many concepts that lie along the spectrum between theory and data. His main point was that AI and machine learning work only because humans build programs grounded in their own theories. This is why he warned that if an AI system ever reaches a point where humans no longer understand the theory behind it, it should be discarded, especially when its function involves serious matters such as medical surgery, transportation, or construction.
However, Ben acknowledges that some AI systems are black boxes, and fully understanding the inner workings of such a system is not strictly necessary when the stakes are low, as with a chess-playing robot.
One example of a shortcoming on the theory side of the spectrum was the attempt to predict the onset of a flu epidemic from Google searches. The project's central theoretical premise was that certain search terms correlate strongly with flu activity. That assumption, however, did not fully account for the complexities of online search behavior, and the prediction failed because the underlying theory was incorrect.
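To make that premise concrete, here is a minimal sketch in Python, not the actual Google Flu Trends model, of a predictor built on the theory that flu incidence is a linear function of a few search-term frequencies. All data is synthetic, and the "hidden factor" stands in for the search-behavior complexities the theory ignored.

```python
# Minimal illustrative sketch (synthetic data, not the real Google Flu Trends model):
# a linear model that predicts flu incidence purely from search-term frequencies.
import numpy as np

rng = np.random.default_rng(0)

# Weekly frequencies of three hypothetical flu-related search terms.
searches = rng.random((52, 3))

# Pretend the "true" flu incidence also depends on factors the search data
# never sees (media coverage, changes in how people search, ...).
hidden_factor = rng.normal(scale=0.5, size=52)
flu_incidence = searches @ np.array([2.0, 1.0, 0.5]) + hidden_factor

# Fit the theory "flu activity is a linear function of search volume".
coeffs, *_ = np.linalg.lstsq(searches, flu_incidence, rcond=None)
predicted = searches @ coeffs

# The residual captures everything the theory leaves out; when search
# behavior shifts, this error grows and the predictions quietly degrade.
print("mean absolute error:", np.mean(np.abs(flu_incidence - predicted)))
```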
On the other hand, machine learning models at the data-driven end of the spectrum (those with "black box" algorithms) have shortcomings of their own. By relying solely on data without understanding the algorithm, one may assume that more data is always better. However, if the data is not carefully selected with human theory in mind, it can end up biased, leading to poor results.
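The sketch below, again using synthetic and purely illustrative data, shows how this can happen: a model is fit to a conveniently collected but unrepresentative slice of the data, looks fine on that slice, and then fails on the population it is meant to describe.

```python
# Minimal sketch of how unexamined data selection can bias a model.
# The data is synthetic; all numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# Full population: the true relationship between x and y is curved.
x = rng.uniform(0, 10, size=1000)
y = 0.3 * x**2 + rng.normal(scale=0.5, size=1000)

# "More data" is gathered only from one easy-to-reach slice (x < 2),
# where the relationship happens to look nearly linear.
mask = x < 2
x_biased, y_biased = x[mask], y[mask]

# Fit a straight line (with intercept) to the biased sample.
A = np.column_stack([x_biased, np.ones_like(x_biased)])
slope, intercept = np.linalg.lstsq(A, y_biased, rcond=None)[0]

# Evaluate on the full population: the biased fit extrapolates badly.
pred = slope * x + intercept
print("error on biased slice:", np.mean(np.abs(y_biased - (slope * x_biased + intercept))))
print("error on full population:", np.mean(np.abs(y - pred)))
```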
Therefore, based on Ben's interview, the best advice is to take a balanced approach that incorporates both data-driven and theory-driven methods into a program.

Source: https://www.labxchange.org/library/pathway/lx-pathway:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954/items/lx-pb:53ffe9d1-bc3b-4730-abb3-d95f5ab5f954:lx_simulation:997b23d6?source=%2Flibrary%2Fclusters%2Flx-cluster%3AModernPrediction