The U.S. military has formally designated AI firm Anthropic a supply chain risk, the company announced Thursday — a move that could cut it off from military-related contracts. Anthropic is the only AI company deployed on the Pentagon’s classified networks.
The designation stems from a dispute between the Trump administration and Anthropic over guardrails the company seeks to impose on its Claude model. Anthropic wants explicit written prohibitions preventing the U.S. military from using Claude for mass surveillance of Americans or to power fully autonomous weapons. The Pentagon says it must be able to use Claude for “all lawful purposes” and maintains the uses Anthropic worries about are already illegal or restricted by Defense Department policy.
Defense Secretary Pete Hegseth said last week Anthropic would be cut off from government contracts and designated a supply chain risk, but the company only received formal notification this week, a senior Pentagon official confirmed to CBS News. Hegseth said the military will phase out Anthropic over six months; the official said the formal designation did not include a specific offboarding timeline.
The military used Claude in strikes on Iran that began last weekend, two sources previously told CBS News, though it is not clear exactly how the model was deployed.
Anthropic CEO Dario Amodei said the company does not believe the designation is legally sound and plans to challenge it in court. He added that most customers will be unaffected, arguing the designation should only impact uses of Claude directly tied to Defense Department contracts and does not prevent military contractors from using the technology for nonmilitary work.
The supply chain risk designation came after Amodei said he was still in talks with the Pentagon “to try to deescalate the situation.” Speaking to investors at a Morgan Stanley conference this week, he said the two sides “have much more in common than we have differences.” In a CBS News interview, Amodei said he wants to work with the military to protect U.S. national security but reiterated the company’s two red lines: no mass domestic surveillance and no fully autonomous weapons. He argued AI could give the government surveillance powers “contrary to American values,” and that AI is not precise enough to safely support weapons that select targets without human input.
The Pentagon’s position is that mass surveillance of Americans is already illegal and that fully autonomous weapons are already constrained by internal policy, making written restrictions unnecessary. Emil Michael, the Pentagon’s chief technology officer, told CBS News last week that while “at some level, you have to trust your military to do the right thing,” the department cannot agree “in writing to a company” to limits that could leave it unable to defend itself. The Pentagon offered a compromise: it would acknowledge in writing the existing laws and policies restricting surveillance and autonomous weapons. Anthropic called the offer inadequate, saying the legal wording could allow the military to disregard the guardrails.
Relations between the administration and Anthropic grew heated, with public criticism on both sides. Hegseth called Anthropic “sanctimonious,” Michael accused Amodei of having a “God-complex,” and President Trump labeled the company “radical left” and “woke.” The administration set a deadline for Anthropic to consent to military use of Claude for “all lawful purposes.” When no agreement was reached, Trump ordered federal agencies to immediately stop using Claude, while giving the Defense Department up to six months to phase the technology out. Rival OpenAI, meanwhile, announced it had reached a deal with the military.
A senior Pentagon official said the dispute is “about one fundamental principle: the military being able to use technology for all lawful purposes,” and that the military will not allow a vendor to “insert itself into the chain of command by restricting the lawful use of a critical capability.” Amodei called the administration’s action “retaliatory and punitive,” and in response to Trump said Anthropic’s actions were taken “for the sake of this country” and U.S. national security. “Disagreeing with the government is the most American thing in the world,” he added, calling Anthropic’s stance patriotic.