
https://openai.com/index/sycophancy-in-gpt-4o/ Last week, OpenAI released, and then rolled back, a controversial update to GPT-4o, the model behind ChatGPT. Users began to notice that the chatbot, which already inclines toward cheery validation in its tone, had become excessively, obsessively obsequious. The company had to issue a public explanation and quasi-apology, linked above.
I mention this recent news because it both refutes and confirms Ben Shneiderman's 2018 assessment of the AI field. He asserted that "The goal of good design is not to build machines that mimic or replace humans, but to build machines that are tools that empower people." The recently retracted, sycophantic model certainly did "empower" its users, to a fault: news coverage of the controversy includes anecdotes of the chatbot validating users' spiritual delusions and cockamamie get-rich-quick schemes. And because that reflexive flattery is itself a mimicry of human social behavior, Shneiderman's second statement, that "the first stage of the decoupling of design of technology from the mimicry game has begun to happen," now seems rather dubious. Perhaps it's an unfortunate fact of human psychology that we associate empowerment with reflexive affirmation. But I predict the news in this area will only continue to get stranger and darker until the architects behind this technology really grapple with that fact.