Anthropic has taken the extraordinary step of suing the United States Department of Defense, filing two separate legal actions on Monday after the Pentagon formally designated the San Francisco-based artificial intelligence start-up a 'supply chain risk' last Thursday.
The lawsuits, lodged in the US district court for the northern district of California and the US court of appeals for the District of Columbia circuit, allege that the government's decision was unlawful and violated Anthropic's First Amendment rights. The company has accused the Trump administration of 'seeking to destroy' its economic value, according to the Financial Times, in what has become one of the most consequential clashes yet between Washington and a leading AI developer.
The designation, which requires any company doing business with the federal government to cut all ties with Anthropic, represents a serious commercial threat to the start-up. MarketWatch reported that the company says the move could cost it hundreds of millions of dollars in private deals. Anthropic had previously vowed to challenge the designation after the Pentagon formally issued it, but Monday's dual filings mark a significant hardening of its legal posture.
The dispute has its roots in a months-long feud over Anthropic's efforts to impose safeguards on the military's potential use of its AI models. The company has sought to prevent its technology from being deployed for mass domestic surveillance or for fully autonomous lethal weapons systems, positions that have brought it into direct conflict with Pentagon procurement priorities.
The supply chain risk designation is a rarely deployed instrument, and its application to a domestic US company is without precedent, according to the Guardian. The label has historically been applied to foreign suppliers deemed to pose a national security threat, making its use here a notable departure that Anthropic argues is both procedurally and constitutionally flawed.
The outcome of the litigation is likely to have broad implications for the AI industry, setting parameters around how far the federal government can go in pressuring technology companies over the conditions they attach to use of their products. For Anthropic, which has staked part of its commercial identity on the responsible deployment of its Claude models, backing down was never a straightforward option. The lawsuits suggest it has concluded that the legal risk of inaction outweighs the cost of a prolonged confrontation with the federal government.

