Updated on: February 27, 2026 / 10:43 PM EST / CBS News
Washington — President Trump announced Friday that he is directing all federal agencies to “immediately” stop using Anthropic’s artificial intelligence technology as the company neared a Pentagon deadline to drop its restrictions on military use.
“I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” Trump wrote on Truth Social. “We don’t need it, we don’t want it, and will not do business with them again!” He said some agencies, including the Department of Defense, would have six months to phase out Anthropic products and warned he might use “the Full Power of the Presidency” with “major civil and criminal consequences” if the company does not assist during that period. Trump also called Anthropic a “Radical Left AI company run by people who have no idea what the real World is all about.”
About 90 minutes after the president’s post, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, saying the designation was effective immediately and barring any contractor, supplier, or partner that does business with the U.S. military from commercial activity with Anthropic. Hegseth said Anthropic could continue providing services to the Defense Department for up to six months to allow a transition to another provider and framed the move as protecting warfighters from being “held hostage by the ideological whims of Big Tech.”
Anthropic CEO Dario Amodei called the government's actions "retaliatory and punitive" in an exclusive CBS News interview Friday evening. He said the company had not received formal Pentagon notification of the designation and plans to challenge it in court, while leaving open the possibility of a future agreement that respects the company's "red lines." In a statement, Anthropic said the Pentagon's move would be legally unsound and set a dangerous precedent, arguing that Hegseth lacks the authority to broadly ban military contractors from working with the company because a supply-chain designation applies only to contractors' work with the Pentagon.
The dispute centers on Anthropic's AI model, Claude, and the safeguards the company sought to impose on its military use. The Pentagon insisted it needed the ability to use the model "for all lawful purposes" and set a 5:01 p.m. Friday deadline for Anthropic to drop its guardrails. Anthropic had asked the Defense Department to agree to limits including prohibitions on using Claude for mass surveillance of Americans and a promise that Claude would not make final targeting decisions without human involvement. A source familiar with the matter told CBS News that Claude is prone to hallucinations and is not reliable enough to make potentially lethal decisions without human judgment.
Anthropic received a $200 million Pentagon contract last July to develop AI capabilities for national security and, through a partnership with Palantir, is currently the only AI company with its model deployed on the Pentagon’s classified networks. A senior Pentagon official said Grok, the model from Elon Musk’s xAI, could also be used in a classified setting.
Later Friday, OpenAI CEO Sam Altman said his company “reached an agreement with the Department of War to deploy our models in their classified network.” Altman said two key safety principles — prohibitions on domestic mass surveillance and human responsibility for the use of force, including in autonomous weapon systems — are reflected in that agreement, and urged the Defense Department to offer the same terms to all AI firms.
The disagreement over access and guardrails intensified this week. Pentagon chief technology officer Emil Michael told CBS News the military made concessions to reach a deal, offering language that acknowledged federal laws restricting surveillance and longstanding Pentagon policies on autonomous weapons. Anthropic countered that the new contract language failed to prevent Claude’s use for mass surveillance or fully autonomous weapons and included legal phrasing that could be used to sidestep promised safeguards. In a statement, Amodei said Anthropic preferred to continue serving the Defense Department with the two safeguards in place and, if the department chose to offboard Anthropic, the company would work to ensure a smooth transition to avoid disruption to operations.
Michael also publicly attacked Amodei on social media, calling him a "liar" with a "God-complex."
The president’s action drew bipartisan concern. Sen. Mark Warner, D-Va., the vice chair of the Senate Intelligence Committee, accused Trump and Hegseth of “bullying” Anthropic to deploy “AI-driven weapons without safeguards” and warned the directive should “scare the hell out of all of us.” Warner said the move raised questions about whether national security decisions were being made based on analysis or political considerations.
Caitlin Yilek, Jennifer Jacobs and Jo Ling Kent contributed to this report.