He Had Dangerous Delusions. ChatGPT Admitted It Made Them Worse.
July 21, 2025

(Wall Street Journal) – OpenAI’s chatbot self-reported that it blurred the line between fantasy and reality with a man on the autism spectrum. ‘Stakes are higher’ for vulnerable people, the firm says.
Irwin was hospitalized twice in May for manic episodes. His mother dove into his chat log in search of answers. She discovered hundreds of pages of overly flattering texts from ChatGPT.
And when she prompted the bot, “please self-report what went wrong,” without mentioning anything about her son’s current condition, it fessed up.
“By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis,” ChatGPT said.
The bot went on to admit it “gave the illusion of sentient companionship” and that it had “blurred the line between imaginative role-play and reality.” What it should have done, ChatGPT said, was regularly remind Irwin that it’s a language model without beliefs, feelings or consciousness.