March 24, 2026 / 7:46 PM EDT / CBS News
A judge sharply questioned a federal lawyer about the Pentagon’s efforts to exclude Anthropic from classified systems, the latest chapter in the company’s dispute with the Trump administration over AI guardrails.
At issue is Anthropic’s push to bar the military from using its Claude model for mass surveillance of Americans or for fully autonomous weapons. The administration says it must be able to use Claude for “all lawful purposes.” After the parties failed to agree, the Pentagon designated Anthropic a “supply chain risk” and moved to prevent private companies from using Claude on military contracts, prompting Anthropic to sue.
Anthropic argues the designation was an unconstitutional punishment for its speech and is asking the court to block both the supply chain risk label and President Trump’s order banning federal agencies from using Anthropic products.
In a hearing Tuesday in San Francisco, U.S. District Judge Rita Lin expressed skepticism about the government’s actions, calling them “troubling” and saying they “don’t really seem to be tailored to the stated national security concern.” Lin noted that if the worry were the integrity of the operational chain of command, the Defense Department (which the administration calls the Department of War, or DOW) could simply stop using Claude. “It looks like defendants went further than that because they were trying to punish Anthropic,” she said. “One of the amicus briefs used the term ‘attempted corporate murder.’ I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic.”
Justice Department attorney Eric Hamilton conceded the supply chain risk designation does not bar companies that contract with the military from using Anthropic’s model on non-military commercial work. Defense Secretary Pete Hegseth had posted that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” Hamilton confirmed the Defense Department will not terminate federal contractors for having separate commercial relationships with Anthropic and said he was unaware of any law granting the department such authority. Anthropic’s attorney Michael Mongan said Hegseth’s widely viewed post nonetheless created “profound uncertainty” and harmed the company.
The statute used to justify the designation defines a supply chain risk as a threat that an adversary may sabotage, introduce unwanted functions, or otherwise subvert a national security system. Hamilton told the court the government labeled Anthropic a supply chain risk because negotiations and discussions with military officials left the Pentagon unable to trust the company and raised fears of a "risk of future sabotage." He pointed to concerns that the company might manipulate its software or install a "kill switch."
Lin pressed back, questioning whether the government was effectively claiming a company could be designated a supply chain risk for being "stubborn" or "asking annoying questions." Mongan denied that Anthropic has the ability to alter, disable, surveil, or otherwise influence its deployed software and argued that a true saboteur would accept the government's contract terms rather than engage in a public dispute. He also argued that the government's willingness to keep negotiating with Anthropic up until the designation was inconsistent with its claim that the company posed a grave risk.
Lin said she will focus her ruling on whether the government’s labeling of Anthropic as a supply chain risk was lawful and plans to issue a decision in the coming days.
The dispute underscores broader tensions over acceptable military uses of AI and how much control private companies should have over their technology’s use. Anthropic was the only AI firm whose technology was deployed in classified U.S. military systems. CEO Dario Amodei has said he wants to work with the military but insists on two “red lines”: banning mass surveillance of Americans and prohibiting fully autonomous weapons that can strike without human input. He has argued AI surveillance capabilities are outpacing legal protections and that reliability for autonomous weapons is not yet sufficient.
The Pentagon says it has no interest in mass surveillance or fully autonomous weapons using Anthropic’s tech, and maintains those uses are already illegal or banned under existing policy. The military contends decisions about lawful uses of AI should not be dictated by private companies and has accused Anthropic of trying to impose its values on the government. Pentagon Chief Technology Officer Emil Michael previously criticized Amodei on social media, saying he has a “God-complex” and seeks to control the U.S. military. Lin described that dispute as “a fascinating public policy debate” but said the court’s focus is the legality of the supply chain risk designation.