Updated on: February 26, 2026 / 11:36 PM EST / CBS News
The U.S. military’s partnership with AI firm Anthropic is on the brink of collapse as the company and a senior Pentagon official traded accusations ahead of a deadline to reach a deal.
The Pentagon has given Anthropic until Friday at 5:01 p.m. to allow the military to use the company’s Claude model for “all lawful purposes” or risk losing a $200 million Defense Department contract. Anthropic seeks explicit guardrails barring Claude from being used for mass surveillance of Americans or to conduct military operations autonomously.
Pentagon Chief Technology Officer Emil Michael told CBS News the department had "made some very good concessions," offering to put in writing acknowledgments of federal laws that restrict military surveillance of Americans and of longstanding Pentagon policies on autonomous weapons, and to invite Anthropic to join its AI ethics board. Michael said those uses are already barred by law and Pentagon policy and insisted the military does not use AI to power fully autonomous weapons.
“At some level, you have to trust your military to do the right thing,” Michael said, adding that the U.S. must also prepare for what adversaries such as China are doing and “we’ll never say that we’re not going to be able to defend ourselves in writing to a company.”
Anthropic responded that new contract language from the Pentagon “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons,” claiming the draft paired protective language with legalese that would allow safeguards to be disregarded. CEO Dario Amodei said the Pentagon’s threats to cut off contracts “do not change our position: we cannot in good conscience accede to their request” and expressed hope the department would reconsider.
Michael posted on X late Thursday calling Amodei a “liar” with a “God-complex,” accusing him of seeking personal control over U.S. military policy and risking national safety.
If no agreement is reached, the Pentagon plans to sever its partnership with Anthropic and designate the company a supply chain risk, and is reportedly considering invoking the Defense Production Act to compel compliance. Michael did not confirm the department would use the DPA, but said no vendor would remove software used by the department until alternatives are in place and that he is pursuing other AI partnerships.
Anthropic is currently the only AI company with its model deployed on the Pentagon’s classified networks through a partnership with Palantir, making the contract particularly consequential.
The dispute highlights a broader split among policymakers and tech firms over how to mitigate AI risks. Amodei has emphasized safety and transparency, arguing that “frontier AI systems are simply not reliable enough to power fully autonomous weapons” and that AI could stitch together scattered data into comprehensive surveillance dossiers. By contrast, the Trump administration and some defense officials warn that strict regulations could hinder innovation and weaken U.S. competitiveness, with Defense Secretary Pete Hegseth saying the military “will not employ AI models that won’t allow you to fight wars.”
Michael framed the disagreement as partly ideological, saying some fear the power of AI. He stressed the military’s intent to use AI lawfully and to treat it like other technologies, saying, “You can’t put the rules and the policies of the United States military and the government in the hands of one private company.”
