
During April of 2023, in Harvard's GenEd 1112, we'll talk plenty about AI. In this post, I'll include a very recent NY Times series students can draw upon (especially "Part 2: How Does ChatGPT Really Work?"), and also a 2015 New Yorker piece in which Nick Bostrom, Oxford's notable philosopher of our tech future, took a grim view of humanity's AI-dominated future. These articles will supplement discussions, in concert with students' own contributed media, along with the conversation with Ben Shneiderman about AI available on LabXchange as part of the Prediction Project.
NYTimes Subscriber 5-Part AI Series, March 2023
Here's a great 5-part series in The New York Times:
1. How to Become an Expert on A.I.
2. How Does ChatGPT Really Work? Learning how a "large language model" operates.
3. What Makes A.I. Chatbots Go Wrong? The curious case of the hallucinating software.
4. What Google Bard Can Do (and What It Can't)
5. 10 Ways GPT-4 Is Impressive but Still Flawed
What Nick Bostrom Thought in 2015: Is AI Doomsday for Humans?

I've referenced this article previously, but I still think that one of the most interesting sources on the future of AI/ML is Noam Chomsky's op-ed in the NYT (link). While I am a strong believer in the potential of ChatGPT, I also recognize it is still too early to make sweeping claims about how it will impact society at large, and I think that reading this article is an excellent exercise in realizing why we shouldn't "over-predict" a phenomenon, even one with large ramifications for society. Ironically enough, Chomsky's explanation of why AI falls short of human intelligence strongly mirrors the fundamentals of this course; indeed, his paragraph on AI's inability to reason with explanatory frameworks feels as though it was taken straight from a writeup on the Padua Rainbow.
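To make Chomsky's contrast concrete, here is a minimal sketch of my own (not drawn from the op-ed or the Times series) of the kind of next-word statistics that, vastly scaled up, underlie a large language model: the program simply counts which word follows which and samples accordingly, with no explanatory framework anywhere in sight.

```python
from collections import Counter, defaultdict
import random

# A toy corpus; any text would do. (Purely illustrative.)
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which: pure pattern statistics,
# with no model of *why* one word follows another.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it was observed."""
    counts = following[word]
    if not counts:  # a dead end: this word was never seen mid-text
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
word = "the"
generated = [word]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))
```

ChatGPT's transformer networks are enormously more sophisticated than this toy bigram counter, but the philosophical contrast Chomsky draws, between predicting surface patterns and explaining them, applies to both.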
On the other hand, I will cast some mild criticism on Chomsky and suggest that his (rightful?) skepticism of grand claims runs so deep that he falls into the opposite error: a premature dismissal of AI/ML, along with some claims about its limitations that I find slightly hasty. While he is certainly right, at present, to draw a firm line between ML methods and human language acquisition, it seems a bit unfair to compare the efficacy of a technology still in its relatively early stages with a gargantuan bioelectric processor that has an evolutionary head start of hundreds of millennia. As Chomsky claims, we have no idea how the miracle of human language acquisition under limited information works, but this seems to cut against him too: because we don't know what sparks actual language acquisition, we can't yet claim AI/ML is incapable of it. Likewise, while it is certainly true that current neural network methods provide no clear explanation for the answers they produce, we have no way of determining that this will be a longstanding problem. My strongest objection is to his stance that (1) moral intelligence is a prerequisite of intelligent thinking and (2) AI faces some fundamental limitation on these questions. I think the (frankly optimistic) premise of (1) is sketchy at best, and that it is far too early to determine the accuracy of (2).
In short, recognize an unknown as an unknown: feel free to hedge what you think the likely outcomes are, but don't claim they're the definitive reality until the event is over.