LLMs’ “simulated reasoning” abilities are a “brittle mirage,” researchers find
August 11, 2025

(Ars Technica) – In recent months, the AI industry has started moving toward so-called simulated reasoning models that use a "chain of thought" process to work through tricky problems in multiple logical steps. At the same time, recent research has cast doubt on whether those models have even a basic understanding of general logical concepts or an accurate grasp of their own "thought process." Related research shows that these "reasoning" models can often produce incoherent, logically unsound answers when questions include irrelevant clauses or deviate even slightly from common templates found in their training data.
In a recent pre-print paper, researchers from the University of Arizona summarize this existing work as “suggest[ing] that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text.” (Read More)