US Military Used Claude AI in Iran Strikes Hours After Trump's Ban

Quick Digest
The US military deployed Anthropic's Claude AI in Iran airstrikes for intelligence and target identification, reportedly just hours after President Donald Trump ordered a government-wide ban on the company's technology due to a dispute over military AI safeguards. This controversial move highlights a growing conflict between tech ethics and national defense needs.

Key Highlights

  • President Trump ordered a ban on Anthropic AI for federal agencies.
  • US military used Anthropic's Claude AI in Iran strikes for intelligence assessments.
  • The use reportedly occurred just hours after Trump's directive.
  • The dispute stemmed from Anthropic's refusal to grant unrestricted AI access to the Pentagon.
  • Defense Secretary Pete Hegseth designated Anthropic a 'supply chain risk'.
  • Claude AI was also used previously in the operation to capture Venezuela's Nicolás Maduro.
In a significant and rapidly unfolding development, the United States military reportedly utilized artificial intelligence tools from Anthropic, specifically its Claude AI, during airstrikes on Iran. This deployment occurred mere hours after President Donald Trump issued a directive on Friday, February 27, 2026, ordering all federal agencies to immediately cease using technology from the AI startup Anthropic. The Mint report, published on March 1, 2026, details a striking contradiction between presidential policy and military operational reality. According to a Wall Street Journal report cited by Mint and other sources, commands worldwide, including U.S. Central Command in the Middle East, leveraged Anthropic's Claude AI for critical tasks such as intelligence assessments, target identification, and simulating battle scenarios during the Iran attack.

President Trump's stringent order came amid an escalating standoff between his administration and Anthropic. The core of the dispute was Anthropic's refusal to grant the Pentagon unrestricted access to its Claude AI model for 'all lawful purposes'. Anthropic, a company founded by former OpenAI employees with a strong focus on AI safety, insisted on maintaining ethical guardrails. Specifically, the company objected to its AI systems being used for mass domestic surveillance of American citizens and for fully autonomous weapons systems that operate without human involvement in final targeting decisions.

In a post on Truth Social, President Trump vehemently criticized Anthropic, labeling the company 'leftwing nut jobs' and 'woke.' He asserted that their 'selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.' His directive called for an immediate cessation of Anthropic technology use across all federal agencies, though he allowed a six-month phase-out period for departments, such as the Department of War, that were already reliant on its products.
Further escalating the pressure, Defense Secretary Pete Hegseth formally designated Anthropic a 'supply chain risk to national security' shortly after Trump's announcement. This designation is typically reserved for foreign adversaries, such as China's Huawei, and effectively bars any contractor, supplier, or partner doing business with the U.S. military from conducting commercial activity with Anthropic. Hegseth explicitly stated that America's warfighters would 'never be held hostage by the ideological whims of Big Tech.'

Despite these directives, the military's continued reliance on Claude AI was evident in the Iran strikes. This highlights the deep integration of Anthropic's technology within U.S. defense and intelligence operations: Claude was the first frontier AI model deployed on the U.S. government's classified networks, beginning in June 2024, and has been actively used by agencies such as the CIA and NSA for intelligence analysis, operational planning, and cyber operations. The reports also indicate that Claude's role in high-profile missions, such as the Iran strikes, is among the reasons the administration acknowledged the need for a six-month phase-out period. Prior to the Iran operation, Anthropic's AI had also been utilized by the Pentagon during the operation to capture former Venezuelan president Nicolás Maduro, underscoring its established role in sensitive military missions.

The strikes on Iran themselves were a major geopolitical event. One report indicates that Iran's Supreme Leader Ayatollah Ali Khamenei was killed in the strikes, with state media confirming his death shortly after Trump's announcement. This would mark a significant escalation in US-Iran tensions and a profound shift in regional power dynamics. Trump reportedly hailed the strike as an act of justice, while Israel's Prime Minister Benjamin Netanyahu called on the Iranian people to rise up.
Iran, in response, declared a mourning period and vowed retaliation.

This entire episode raises critical questions about the ethics of AI in warfare, the balance between technological innovation and national security concerns, and the autonomy of military operations versus political directives. The swift transition from a presidential ban to operational use demonstrates the complex and often contradictory realities of integrating advanced AI into sensitive national defense frameworks, particularly when ethical considerations clash with perceived strategic necessities. The dispute also sets a precedent for how governments will manage relationships with private AI developers whose technologies have become indispensable to national security, yet who seek to impose ethical boundaries on their use.

Frequently Asked Questions

What was Donald Trump's directive regarding Anthropic AI?

On February 27, 2026, President Donald Trump ordered all US federal agencies to immediately cease using technology from Anthropic. This directive, issued via Truth Social, stemmed from a dispute over Anthropic's refusal to grant the Pentagon unrestricted access to its Claude AI model for military applications.

How was Anthropic's Claude AI used by the US military in Iran?

Despite Trump's directive, the US military, specifically U.S. Central Command, reportedly used Anthropic's Claude AI during airstrikes on Iran. The AI was employed for intelligence assessments, target identification, and simulating battle scenarios.

What was the reason for the conflict between Anthropic and the US government?

The conflict arose because Anthropic refused to allow the Pentagon unrestricted use of its Claude AI, particularly objecting to its deployment for mass domestic surveillance of Americans or for fully autonomous weapons systems. Anthropic sought to maintain ethical safeguards on its technology.

What are the broader implications of AI use in military operations?

The incident highlights profound implications for AI in warfare, including ethical concerns regarding autonomous weapons and surveillance, the autonomy of military operations versus political oversight, and the complex relationship between governments and private tech companies whose innovations are crucial for national security.

What is the significance of the 'hours later' timeline in this news?

The 'hours later' timeline is highly significant as it underscores an immediate and public contradiction between a presidential order and actual military actions. It reveals the deep integration of Anthropic's AI into defense systems and the practical challenges of implementing a sudden 'ban' on critical technology.

Read Full Story on Quick Digest