(Bloomberg) — The Pentagon said it has formally notified Anthropic PBC that it’s determined the company and its products pose a risk to the US supply chain, according to a senior defense official, escalating a dispute over artificial intelligence safeguards.
“DOW officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately,” the official told Bloomberg News on Thursday, using an acronym for the Department of War, the name that Defense Secretary Pete Hegseth now favors for the Department of Defense.
Spokespeople for Anthropic had no immediate comment. The defense official didn’t say when or by what means the Pentagon informed the company.
Anthropic has previously vowed to challenge in court any supply-chain risk designation by the Pentagon.
The Pentagon’s finding threatens to disrupt both the company and the military, which has relied heavily on Anthropic’s software. Until recently, Anthropic provided the only AI system that could operate in the Pentagon’s classified cloud. Its Claude Gov tool has become a favored option among defense personnel for its ease of use.
“It’s a good capability” and removing it is “going to be painful for all involved,” said Lauren Kahn, a senior research analyst at Georgetown University’s Center for Security and Emerging Technology.
Anthropic Chief Executive Officer Dario Amodei had been negotiating for weeks with Emil Michael, under-secretary of defense for research and engineering, to hammer out a contract governing the Pentagon’s access to Anthropic’s technology.
But talks broke down last week after the startup demanded assurances that its AI wouldn’t be used for mass surveillance of Americans or autonomous weapons deployment. Hegseth then declared Friday in a post on X that Anthropic posed a supply-chain risk, a designation typically reserved for US adversaries.
It wasn’t immediately clear what authority the Pentagon was using to classify the company as a supply-chain threat. In its statement last week responding to Hegseth’s social-media post, Anthropic indicated that it expected the move to eventually be carried out via Section 3252 of Title 10 of the US Code, the law governing the armed forces.
“From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes,” the defense official said Thursday. “The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.”
The move comes as the US military relies on Claude in its Iran campaign, where American armed forces are turning to a range of AI tools to manage enormous amounts of data quickly for their operations.
Maven Smart System, produced by Palantir Technologies Inc. and widely used by military operators in the Middle East, counts Anthropic’s Claude among the large language models installed on it, according to people familiar with the matter. Those people said Claude is working well and has become central both to US operations against Iran and to accelerating Maven’s AI efforts.
Now valued at $380 billion, Anthropic is on track to generate almost $20 billion in annualized revenue based on current performance, more than double its run rate from late last year. The Pentagon dispute, however, has muddied the company’s outlook.
Any long-term impact of the Pentagon’s declaration on Anthropic’s sales to enterprise customers, long its core business, remains to be seen. In the meantime, the company is gaining traction with everyday users: Anthropic’s main app recently topped Apple Inc.’s download charts, reflecting a surge of support.
–With assistance from Maggie Eastland, Jen Judson and Shirin Ghaffary.