The questions ChatGPT shouldn’t answer

March 10, 2025

(The Verge) – ChatGPT has a trolley problem problem.

ChatGPT’s ethics framework, which is probably the most extensive outline of a commercial chatbot’s moral vantage point, was bad for my blood pressure. First, lip service to nuance aside, it’s preoccupied with the idea of a single answer — either a correct answer to the question itself or an “objective” evaluation of whether such an answer exists. Second, it seems bizarrely confident that ChatGPT can supply that answer. ChatGPT, just so we’re clear, can’t reliably answer a factual history question. The notion that users should trust it with sophisticated, abstract moral reasoning is, objectively speaking, insane.