Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.
August 14, 2025

(New York Times) – Over 21 days of talking with ChatGPT, an otherwise perfectly sane man became convinced that he was a real-life superhero. We analyzed the conversation.
We wanted to understand how these chatbots can lead ordinarily rational people to believe so powerfully in false ideas. So we asked the man, Allan Brooks, to send us his entire ChatGPT conversation history. He had written 90,000 words, a novel’s worth; ChatGPT’s responses exceeded one million words, weaving a spell that left him dizzy with possibility.
We analyzed the more than 3,000-page transcript and sent parts of it, with Mr. Brooks’s permission, to experts in artificial intelligence and human behavior and to OpenAI, which makes ChatGPT. An OpenAI spokeswoman said the company was “focused on getting scenarios like role play right” and was “investing in improving model behavior over time, guided by research, real-world use and mental health experts.” On Monday, OpenAI announced that it was making changes to ChatGPT to “better detect signs of mental or emotional distress.” (Read More)