Appeals Court Keeps Pentagon's Supply-Chain Risk Label on Anthropic—for Now

Main Takeaway
A DC appeals court refuses to pause the Pentagon’s unprecedented “supply-chain risk” tag on Anthropic, even as a California judge has blocked the ban from taking effect.
Summary
What the courts just decided
A federal appeals court in Washington, DC, on Wednesday denied Anthropic’s emergency motion to lift the Pentagon’s “supply-chain risk” designation, letting the label remain in force while litigation continues. The three-judge panel ruled that Anthropic “has not satisfied the stringent requirements” for an immediate stay. That decision clashes with a March 26 order by U.S. District Judge Rita Lin in San Francisco, who called the move “Orwellian” and issued a nationwide injunction barring the government from cutting off Anthropic’s federal contracts. The dueling opinions leave the AI company in legal limbo: the label exists, but, at least for the moment, it cannot be enforced against the firm or its partners.
Why the Pentagon slapped the label on Anthropic
The Department of Defense notified Anthropic on March 3 that it was formally branded a supply-chain risk—the first time such a tag has been applied to a U.S. company, according to BBC and Politico. Officials cited the company’s refusal to grant unfettered military use of its Claude model, particularly Anthropic’s explicit ban on deploying Claude for mass surveillance or autonomous weapons. Defense Secretary Pete Hegseth escalated the dispute on social media, warning contractors to sever ties “effective immediately.” The Pentagon’s March letters also instructed prime contractors to scrub Anthropic software from their systems within 90 days, a step Mayer Brown notes could cascade through every tier of the defense supply chain.
The constitutional argument that won in California
Judge Lin found the government’s action likely violated Anthropic’s Fifth Amendment right to due process and First Amendment right to speak through its usage policies. In a 28-page ruling, she wrote that “nothing supports the Orwellian notion that disagreeing with the government turns a company into a national-security threat.” Courthouse News reports that the judge emphasized the lack of individualized findings against Anthropic and the absence of any procedure for the company to contest the label. Because the designation was issued without notice or hearing, Lin concluded the firm faced irreparable harm from the immediate freeze on federal work.
The DC circuit’s different calculus
The appeals panel did not reach the constitutional merits. Instead, it applied the traditional four-factor test for emergency stays and decided Anthropic failed to show it would suffer irreparable injury in the short term. Bloomberg notes the judges highlighted that the lower-court injunction already protects the company from immediate contract terminations, so the risk label itself does not presently inflict concrete harm. Wired adds the court gave the government seven additional days to appeal the California injunction, setting up a potential circuit split that could land the case at the Supreme Court.
Fallout for the defense tech ecosystem
With the label hanging over it, Anthropic’s cloud partners—Amazon Web Services and Google Cloud—must decide whether to treat Claude as restricted software. FedScoop reports that prime contractors such as Lockheed Martin and Raytheon have already begun auditing codebases for Anthropic models, even though Judge Lin’s order shields them from immediate termination. Smaller AI vendors now see a chilling effect: fearing similar retaliation, several startups have quietly removed usage-policy language that bars military applications, according to industry attorneys cited by Mayer Brown. Venture investors are also adding “Pentagon risk” clauses to term sheets, potentially steering capital away from safety-focused labs.
What happens next
The Justice Department has until mid-April to appeal the California injunction; if it does, the Ninth Circuit will weigh whether to keep Judge Lin’s order in place. Meanwhile, Anthropic’s broader constitutional challenge is scheduled for a district-court bench trial this summer. Legal scholars interviewed by NPR expect the case to clarify how much control the government has over private AI models that merely transit federal systems. A definitive ruling could set precedent for other frontier labs like OpenAI and Anthropic rival xAI, whose models also operate under usage policies that limit defense applications.
Broader stakes for national-security AI
At its core, the dispute is about who sets the rules for military use of general-purpose AI. The Pentagon argues that any restriction on its ability to deploy commercial models creates a strategic vulnerability vis-à-vis China. Anthropic counters that unchecked military deployment could accelerate arms races and erode democratic oversight. The standoff echoes Cold War-era battles over cryptography export controls, but with far larger commercial implications because today’s models power everything from logistics software to battlefield chatbots. How the courts resolve this tension will likely influence pending congressional efforts to create a federal AI licensing regime.
Key Points
The Pentagon labeled Anthropic a supply-chain risk on March 3—the first such tag for a U.S. firm—after Anthropic barred military use of Claude for surveillance or autonomous weapons.
A California federal judge blocked the ban on March 26, ruling the move likely unconstitutional; a DC appeals court on April 8 refused to lift the label itself.
The conflicting court orders leave the designation in legal limbo: it exists but cannot currently be enforced against federal contractors.
Prime contractors have begun auditing systems for Anthropic models, and smaller AI startups are quietly removing usage-policy restrictions to avoid similar retaliation.
The case could reach the Supreme Court and set precedent for government control over private AI model usage policies.
FAQs
What is a “supply-chain risk” designation?
It is a formal Pentagon tag that bars federal contractors and subcontractors from using a company’s products or services, effectively blacklisting the firm from the defense ecosystem.
Why did the Pentagon apply the label to Anthropic?
Defense officials cited Anthropic’s refusal to remove contractual language that prohibits military use of Claude for mass surveillance or autonomous weapons systems.
Can defense contractors still use Claude right now?
Yes. A California judge’s nationwide injunction prevents enforcement of the ban, so contractors may continue using Claude while the litigation proceeds.
What happens if the ban is ultimately enforced?
Federal contractors would have 90 days to remove Claude from their systems, and Anthropic could lose hundreds of millions in government-adjacent revenue streams.
Does the designation affect Anthropic’s civilian customers?
No. The designation applies only to the defense supply chain; civilian and enterprise customers are unaffected.
Could other AI companies face the same treatment?
Yes. If the Pentagon prevails, any lab with restrictive usage policies—such as OpenAI or Google—could be similarly targeted.