
U.S. Military Ordered to Remove Anthropic AI from Systems


The Pentagon has issued an urgent directive ordering all U.S. military leaders to remove Anthropic’s artificial intelligence (AI) products from their systems within 180 days, according to internal memos obtained by CBS News. The memo was released on March 6, the same day the Defense Department formally designated Anthropic a supply chain risk.

The document, signed by Chief Information Officer Kirsten Davies, details that Anthropic’s AI “presents an unacceptable supply chain risk for use in all Department of Defense (DoD) systems and networks.” It outlines stringent steps military commanders must take to eliminate Anthropic’s products from sensitive areas including nuclear weapons, ballistic missile defense, cyber warfare, and more.

The notice mandates that any company engaging with the Pentagon must cease using all Anthropic products on work related to DoD contracts within 180 days. A senior Pentagon official confirmed the memo’s authenticity.

This move marks a significant escalation in the dispute between the Trump administration and Anthropic, which had previously sought two “red lines” barring U.S. military use of its AI models for mass surveillance or autonomous operation. Anthropic CEO Dario Amodei contends that crossing these red lines would violate American values. Anthropic’s Claude model is currently used by the U.S. military in operations against Iran.

Anthropic responded with a lawsuit, claiming the Pentagon’s decision amounted to illegal retaliation against the company’s protected speech under the Constitution. The White House countered by stating it will not allow such restrictions on national security operations.

One of Anthropic’s largest competitors, OpenAI, recently signed its own deal with the Pentagon, leaving it unclear whether this move is an isolated incident or a sign of broader changes in how the defense sector uses AI. Anthropic’s AI remains deployed on classified DoD systems, and talks between the two parties broke down last month.

Anthropic’s main application for Claude involves analyzing vast intelligence reports—synthesizing patterns, summarizing findings, and quickly identifying pertinent information. This capability has reportedly enhanced the military’s strike efficiency, with a reported strike rate of 70% across the more than one thousand potential targets processed each day. Human analysts remain involved in the process, but the AI significantly speeds up analysis.

In this climate, Anthropic’s struggle to balance its stated values against national security demands illustrates the complex challenges facing AI companies and policymakers alike.

News Desk
