Pentagon Labels Anthropic AI a Supply-Chain Risk, Restricting Use in U.S. Military Projects

The U.S. Pentagon has officially designated artificial intelligence company Anthropic as a “supply-chain risk,” a move that immediately restricts government contractors from using the company’s AI technology in projects tied to the U.S. military. The decision follows months of tension between the Defense Department and Anthropic over limits the AI developer placed on how its technology can be used in military operations.

According to sources familiar with the situation, Anthropic's flagship AI model, Claude, has been used to support various defense-related activities, including intelligence analysis and operational planning. One source indicated the technology may have been used in connection with military operations involving Iran, though the full extent of its deployment remains unclear.

The new designation effectively prevents contractors working with the Pentagon from integrating Anthropic’s AI tools into defense systems or military workflows. While the government has not disclosed the full scope of the restriction, the “supply-chain risk” label is significant because it has historically been applied to foreign technology companies considered potential security threats. Similar actions were previously taken against Chinese telecommunications giant Huawei, which was removed from U.S. defense supply chains.

Anthropic had previously been among the earliest American AI firms to collaborate with U.S. national security agencies. However, the relationship has become strained due to disagreements over the ethical safeguards the company enforces on its AI systems. Anthropic has maintained strict policies prohibiting the use of its Claude AI for autonomous weapons development or large-scale surveillance within the United States.

The Pentagon has reportedly pushed back against these restrictions, arguing that military use of artificial intelligence should be permitted when it complies with existing U.S. laws and defense guidelines. This policy clash ultimately contributed to the recent decision to designate Anthropic as a supply-chain risk.

Anthropic’s AI technology is also embedded in defense-related platforms such as Palantir’s Maven Smart System, software used by military organizations for intelligence processing and weapons targeting. The platform reportedly incorporates workflows and prompts built on Claude.

Neither Anthropic nor the Defense Department—referred to by the Trump administration as the Department of War—immediately responded to requests for comment regarding the designation. The move highlights growing tensions between AI companies and governments over how advanced artificial intelligence should be used in national security and warfare.
