Anthropic has new rules for a more dangerous AI landscape

August 18, 2025

(The Verge) – The AI startup’s new policy now specifically bans using Claude to help develop biological, chemical, radiological, or nuclear weapons.

Anthropic has updated the usage policy for its Claude AI chatbot in response to growing safety concerns. In addition to introducing stricter cybersecurity rules, the policy now names some of the most dangerous weapons that Claude must not be used to help develop.