The Pentagon has issued an urgent directive ordering all U.S. military leaders to remove Anthropic’s artificial intelligence (AI) products from their systems within 180 days, according to internal memos obtained by CBS News. The memo was released on March 6, the same day the Defense Department formally designated Anthropic a supply chain risk.
The document, signed by Chief Information Officer Kirsten Davies, states that Anthropic’s AI “presents an unacceptable supply chain risk for use in all Department of Defense (DoD) systems and networks.” It lays out stringent steps military commanders must take to eliminate Anthropic’s products from sensitive areas, including nuclear weapons, ballistic missile defense, and cyber warfare.
The notice also mandates that any company doing business with the Pentagon stop using Anthropic products on work related to DoD contracts within the same 180-day window. A senior Pentagon official confirmed the memo’s authenticity.
This move marks a significant escalation in the dispute between the Trump administration and Anthropic, which has drawn two “red lines” barring U.S. military use of its AI models for mass surveillance or fully autonomous operation. Anthropic CEO Dario Amodei contends that crossing those red lines would violate American values. Anthropic’s Claude model is currently used by the U.S. military in operations against Iran.
Anthropic responded with a lawsuit, arguing that the Pentagon’s decision amounts to illegal retaliation for the company’s constitutionally protected speech. The White House countered that it will not allow such restrictions on national security operations.
One of Anthropic’s largest competitors, OpenAI, recently signed its own deal with the Pentagon, suggesting the dispute may be specific to Anthropic rather than a sign of a broader retreat from AI in the defense sector. Anthropic’s AI remains deployed on classified DoD systems, and talks between the two parties broke down last month.
The military’s main application of Claude involves analyzing vast volumes of intelligence reports: synthesizing patterns, summarizing findings, and quickly surfacing pertinent information. This capability has reportedly enhanced the military’s strike efficiency, with a reported strike rate of 70% across more than a thousand potential targets processed daily. Human analysts remain in the loop, but the AI significantly shortens analysis times.
In this climate, Anthropic’s struggle to balance its stated values against national security demands presents complex challenges for AI companies and policymakers alike.