I thought the discussion of how we convey uncertainty in biomedical research to others (whether to the public, doctors, or even other scientists) raised a very interesting question, as it makes us reevaluate our reliance on predictive models in medicine to act as the dictators of our future. I appreciated Dr. De Vivo's and Dr. Kraft's perspectives on this issue as scientists themselves, and I agree with their arguments that their research is inherently uncertain and that a predictive model cannot guarantee an outcome, given the numerous inputs needed to build such models in the first place. I also really enjoyed this discussion because my final research topic for the course is precision medicine, and I believe the uncertainty of such predictive models underscores one of the main obstacles the field faces. Although precision medicine is tasked with making treatments as specific and patient-centered as possible, it is interesting to consider whether such exactitude in the treatment process leads to further uncertainty about whether the results truly apply in every scenario. Even when making health predictions for a single patient, many factors go unaccounted for in these predictive systems, and a great deal of time-consuming, resource-intensive research is required to truly understand how each variable influences one's likelihood of developing a certain disease. Such a complex interaction of circumstances makes it seemingly impossible to be 100% certain about the onset of an illness in a patient. I think Dr. Kraft and Dr. De Vivo explained it best when they said that these prediction models are not meant to make decisions for the patient, but rather to provide additional, more precise information that can inform the patient's future actions.
One question I still had for Dr. De Vivo and Dr. Kraft concerned privacy and security for patient data. As someone with a background in cybersecurity and an interest in its applications to healthcare systems, I was wondering how de-identifying patient information can affect research that relies on patient data. For example, from previous conversations with statisticians and geneticists who work with patient information, I learned that some clinical studies cannot use certain datasets because, even after de-identification, some records were so unique that another individual could have re-identified them had they been published. However, if such information is not incorporated into the model, excluding that outlier data would introduce a large bias into the results. Therefore, I wonder how Dr. De Vivo and Dr. Kraft would balance patient privacy concerns against the accuracy of their own research if presented with such dilemmas in their own work.
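The re-identification risk described above is often framed in terms of k-anonymity: if a combination of quasi-identifiers (e.g., age bracket, ZIP prefix, diagnosis) is shared by fewer than k records, those records can potentially be singled out even after names and IDs are removed. As a rough, hypothetical sketch of that check (the column choices, example values, and threshold are my own assumptions, not anything from the interview):

```python
from collections import Counter

# Hypothetical de-identified records: (age bracket, ZIP prefix, diagnosis).
# Explicit identifiers (names, record IDs) have already been stripped.
records = [
    ("40-49", "021", "type 2 diabetes"),
    ("40-49", "021", "type 2 diabetes"),
    ("40-49", "021", "type 2 diabetes"),
    ("30-39", "021", "hypertension"),
    ("30-39", "021", "hypertension"),
    ("70-79", "945", "rare metabolic disorder"),  # unique combination
]

def risky_records(records, k=3):
    """Return records whose quasi-identifier combination appears
    fewer than k times, i.e., records that fail k-anonymity."""
    counts = Counter(records)
    return [r for r in records if counts[r] < k]

# The unique "rare metabolic disorder" record (and any combination
# shared by fewer than k people) is flagged as re-identifiable.
print(risky_records(records))
```

This also illustrates the trade-off in the question: the flagged records are exactly the unusual cases, so suppressing them to protect privacy removes the outliers a model would most need to see.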
Although I watched the interview with Susan Murphy and Brandon Meade, I thought it would be interesting to watch your video and see what you did, since we were put in the same group. I enjoyed reading your reflection because of the way you engage with the topic of uncertainty in biomedical research and reflect on its implications for precision medicine. You did an excellent job of summarizing the key points made by Dr. De Vivo and Dr. Kraft in the discussion and offering thoughtful insights.
I think your question about patient privacy and the security of patient data, a complex issue that requires careful consideration in medical research, is a very interesting one, and I'm very curious to know a possible answer to it. In the interview with Susan Murphy and Brandon Meade, Susan talks about their studies predicting stress in individuals, especially those who are trying to break a habit. To determine when stress would set in, they give participants wearables as detection devices, but since privacy is always an issue in these kinds of studies, they tend not to track participants' locations; instead, they only know whether a person is at home or at work, without knowing the exact location of the workplace or home. She also mentions that some of the data they receive from individuals is intentionally taken off their phones immediately in order to protect them. So I think patient privacy is indeed a very important factor in research studies, but it can often lead to inaccuracies, as you mentioned earlier.
I loved reading your reflection, especially since you tied it to your final project topic! I agree that health scenarios are unique to each individual, and it's definitely difficult to create a predictive system that captures enough trends and similarities to predict the likelihood of developing a disease. I found your question really insightful as well, because there's always a trade-off between protecting privacy and introducing bias, and removing the kind of data you mentioned would skew the dataset and make the model more biased. Models like these should especially help such unique patients, but they would actually work against them, since their data isn't reflected.