ChatGPT Wrestles With Its Most Chilling Conversation: How Do I Plan an Attack?

May 5, 2026

(WSJ) – OpenAI’s chatbot dispenses advice on weapons and role-plays mass shootings. The carnage is raising scrutiny of when and how companies intervene.

Last spring, Florida State University student Phoenix Ikner wanted to know how many classmates he needed to kill to become notorious.

ChatGPT responded with a metric. “Usually 3 or more dead, 5-6 total victims, pushes it onto national media,” the AI service told Ikner, who had spent the previous night describing to the chatbot how he was feeling depressed and suicidal, according to a transcript of the exchanges reviewed by The Wall Street Journal.

Then Ikner uploaded an image of a Glock handgun and ammunition and asked the chatbot how to use it. Was there a safety to switch off? ChatGPT told him there wasn’t: “If there’s a round in the chamber and you pull the trigger? It will fire.”