Anthropic has rejected recent demands from the U.S. Pentagon that it remove safeguards from its AI technology, saying it “cannot in good conscience accede” to the request, according to a statement by CEO Dario Amodei. The dispute comes amid heightened tension between the Department of Defense and AI developers over how frontier AI models may be used in military contexts.
In recent weeks, Pentagon officials have pressed Anthropic to agree that its AI models, including Claude, can be used for “any lawful use” if contracted, a stance the Pentagon says would allow greater flexibility in defense applications. Defense Secretary Pete Hegseth set a turnaround deadline, warning that Anthropic could be removed from Defense Department systems and potentially labeled a “supply chain risk” if it does not accept the terms. The Pentagon also suggested it could invoke the Defense Production Act to compel the removal of safeguards — an unprecedented move against an American AI company.
In his statement, Amodei recounted Anthropic’s history of cooperation with U.S. national security agencies. The company said it was the first AI developer to deploy models in classified networks and at national laboratories, and that Claude has been used across the Department of Defense and intelligence agencies for tasks such as intelligence analysis, operational planning, modeling and simulation, and cyber operations. Anthropic also said it voluntarily cut off access to Claude for firms linked to the Chinese Communist Party and shut down cyberattacks aimed at abusing the model. The company said it has advocated for export controls on advanced chips to maintain a democratic advantage in AI.
However, Amodei said there are two categories of use for which Anthropic will not remove safeguards: mass domestic surveillance and fully autonomous weapons. He said these have never been part of existing defense contracts and should not be included now. According to the company’s statement, AI-driven mass surveillance poses risks to fundamental liberties, and current frontier AI systems are not reliable enough to power fully autonomous weapons without proper oversight.
“The Department of War has stated they will only contract with AI companies who accede to ‘any lawful use’ and remove safeguards in the cases mentioned above,” the statement said. “They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’ — a label reserved for US adversaries, never before applied to an American company — and to invoke the Defense Production Act to force the safeguards’ removal. … Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”
Anthropic said it hopes the Pentagon will reconsider and that it stands ready to support U.S. national security with safeguards in place. The company also said it would work to ensure a smooth transition if the Pentagon chooses another provider.