In Ben Shneiderman's interview with Prof. Goodman, Shneiderman raises potential ethical issues in Microsoft's algorithms, to which Goodman argues that these early technological advances are merely precursors to the actually operable technology. While I have no doubt that Tesla's current self-driving cars can be appropriately compared to the Apple Newton, I am concerned with the idea that the ends justify the means. As Grace indicated in her post, excessive reliance paired with intrinsically discriminatory AI, for example, raises concerns about unfair hiring practices, and as such the practice of refining these systems in public contexts could presage serious social harms. I felt that Ben appropriately linked the tools and instrumentality of AI to the creators behind them--primarily large corporations--who should first and foremost be held accountable, morally and financially, for the damages they cause. As more and more people begin to understand the implications of AI automation for neoliberal capitalism, I believe points like those made by Shneiderman will grow even more salient.
Interview here
Listening to Professor Church discuss the issues of personal genomics really opened my eyes to the intensely personal relationship we all have with our genetics, despite our remarkable ability to model them at large scales. If I had conducted the interview, I would have asked Professor Church how the nature vs. nurture debate has evolved in light of advances in gene sequencing and modeling. Further, I would love to better understand how feasible modeling future trends in genomes is, or could become, in the coming years. Interview here