How Chinese AI Chatbots Censor Themselves

February 27, 2026


(Wired) – Researchers from Stanford and Princeton found that Chinese AI models are more likely than their Western counterparts to dodge political questions or deliver inaccurate answers.

Pan and her colleagues’ findings suggest that manual interventions, rather than training data, played the larger role in how the AI models responded. Even when answering in English, for which the models’ training data would theoretically have included a wider variety of sources, the Chinese LLMs still showed more censorship in their answers.

Today, anyone can ask DeepSeek or Qwen a question about the Tiananmen Square Massacre and immediately see that censorship is happening, but it’s hard to tell how much it affects ordinary users or how to properly identify the source of the manipulation. That’s what makes this research important: it provides quantifiable, replicable evidence of the observable biases of Chinese LLMs.