The Pentagon has officially designated artificial intelligence company Anthropic as a “supply-chain risk,” a move that immediately restricts government contractors from using the company’s AI technology in projects tied to the U.S. military. The decision follows months of tension between the Defense Department and Anthropic over limits the AI developer placed on how its technology can be used in military operations.
According to sources familiar with the situation, Anthropic’s flagship AI model, Claude, has been used to support various defense-related activities, including intelligence analysis and operational planning. One source indicated the technology may have been used in connection with military operations involving Iran, though the full details of its deployment remain unclear.
The new designation effectively prevents contractors working with the Pentagon from integrating Anthropic’s AI tools into defense systems or military workflows. While the government has not disclosed the full scope of the restriction, the “supply-chain risk” label is significant because it has historically been applied to foreign technology companies considered potential security threats. Similar actions were previously taken against Chinese telecommunications giant Huawei, which was removed from U.S. defense supply chains.
Anthropic had previously been among the earliest American AI firms to collaborate with U.S. national security agencies. However, the relationship has become strained due to disagreements over the ethical safeguards the company enforces on its AI systems. Anthropic has maintained strict policies prohibiting the use of its Claude AI for autonomous weapons development or large-scale surveillance within the United States.
The Pentagon has reportedly pushed back against these restrictions, arguing that military use of artificial intelligence should be permitted when it complies with existing U.S. laws and defense guidelines. This policy clash ultimately contributed to the recent decision to designate Anthropic as a supply-chain risk.
Anthropic’s AI technology is also embedded in defense-related platforms such as Palantir’s Maven Smart System, a software platform used by military organizations for intelligence processing and weapons targeting. The platform reportedly incorporates workflows and prompts built on Claude.
Neither Anthropic nor the Defense Department—referred to by the Trump administration as the Department of War—immediately responded to requests for comment regarding the designation. The move highlights growing tensions between AI companies and governments over how advanced artificial intelligence should be used in national security and warfare.