I chose to watch the Philosophy and Prediction Interview with Ned Hall, in which he and Alyssa Goodman discuss questions at the intersection of philosophy and science, such as how we can truly make a prediction or find "truth" in a chaotic universe. This interview helps contextualize what we have been discussing in class thus far. Creating new systems of prediction and refining those systems to get the most accurate results can have large consequences, and no matter the field, it is always important to step back and discuss the ethical implications of not just your results but also your processes. It's easy to get caught up in the potential beneficial outcomes, like being able to predict the weather or knowing whether or not you'll pass on genes for a certain disease. That excitement can lead you to forget to analyze the negative consequences, which is why I chose this interview.
One thing that I found very intriguing in this video was the discussion of how artificial intelligence is disrupting the process by which we form predictions. As per the attached image, which is taken from the video interview, there is a framework by which scientific predictions are made, starting with observing a phenomenon and ending with the explanation and resulting prediction. With AI, however, we can skip large parts of this process, since it can now be automated by computers. This is a great advancement that can make forming new predictions incredibly efficient, but, as I mentioned, it's important to put it into context. As Alyssa mentions, using AI as a shortcut is perfectly fine when predicting what time you'll arrive at the destination of your road trip, but not so good when making predictions about the future of the world without understanding what exactly the algorithm is doing. Another example they raise is Harvard deciding whom to give tenure -- you owe it to the professors, whether or not they receive it, to give them an explanation and a discussion of how you came to the conclusion; you would never simply let a computer that is handed data make a decision of such importance without understanding its process. Decisions of consequence also require a great deal of accountability. But this raises the question: what defines a decision of consequence? It is important in scientific practice and in everyday life to differentiate between what is deemed "important" and what is deemed "not important." There is no real formula for this, but it is necessary to consider the implications when following the predictive process and determining the role of AI in it. This concern will only become more pressing as artificial intelligence continues to develop.