Anthropic rejects Pentagon demand to loosen AI safeguards over surveillance fears
2026-02-27 - 09:02
Artificial intelligence company Anthropic announced Thursday it will not comply with a US Defense Department request to relax safeguards on its AI systems, citing ethical concerns about mass surveillance and fully autonomous weapons. CEO Dario Amodei stated the company opposes allowing its Claude AI model to be used for "mass domestic surveillance" or "fully autonomous weapons," warning that current AI systems lack the reliability needed for such applications.

Ethical Stance and National Security

Amodei acknowledged that AI can support national security but cautioned that large-scale, AI-driven surveillance could threaten civil liberties. He emphasized that operating autonomous weapons without human oversight would require safeguards that "don't exist today." The statement comes after weeks of negotiations between Anthropic and the Pentagon, with the Trump administration signaling it may take coercive action.

Government Pressure and Threats

The administration has threatened to invoke the Defense Production Act, which would compel the company to prioritize national defense needs, and has considered labeling Anthropic a "supply chain risk," a designation that would prevent Defense Department contractors from using its software. Axios reported that the Pentagon has initiated steps toward that designation and has asked Boeing and Lockheed Martin to document their reliance on Claude.

Pentagon Response

Pentagon spokesperson Sean Parnell denied the department intends to use AI for unlawful surveillance or fully autonomous weapons without human involvement. In a post on X, he stated the Pentagon seeks to use Anthropic's model for "all lawful purposes" and asserted that no private company should dictate operational decisions.

The confrontation highlights growing tensions between the defense establishment and tech companies over the ethical boundaries of artificial intelligence in military applications.