The U.S. military’s use of Anthropic’s Claude AI in an operation against Iran has sparked controversy, especially since it occurred only hours after the Trump administration labeled the company a national security risk. The timing and operational context have drawn significant attention, raising questions about how deeply commercial AI is embedded in defense systems and how quickly those ties can be cut.
Operation Epic Fury Details
On February 28, 2026, American and Israeli forces launched Operation Epic Fury, striking critical Iranian infrastructure, including strategic military and nuclear sites. According to sources including The Wall Street Journal and Axios, the operation was supported by Anthropic’s Claude AI, which was used for intelligence gathering, target identification, and simulations.
The strikes came only hours after the Trump administration officially categorized Anthropic as a security risk and announced plans to sever the company’s defense ties. Defense officials explained, however, that disengaging from Claude immediately was not feasible, given its integral role within certain classified networks.
Anthropic’s Restrictions and Pentagon Pushback
The deployment of Claude in military operations has been contentious because of Anthropic’s strict usage policies, which prohibit using the model for autonomous weapons and for mass surveillance on U.S. soil, restrictions that cut against core elements of the Pentagon’s operational strategy. The resulting friction between Anthropic and U.S. defense authorities intensified after an earlier operation in Venezuela in January 2026.
Despite the Pentagon’s insistence on unrestricted usage, Anthropic CEO Dario Amodei reiterated the importance of maintaining these ethical boundaries. The Department of Defense responded by designating Anthropic a supply-chain risk while allowing a six-month transition period to find alternatives, an acknowledgment of how deeply Claude is integrated into current systems.
Future Implications and Industry Reactions
In response to the Pentagon’s actions, OpenAI announced an agreement with the U.S. Department of War to deploy its AI models under similar ethical conditions, emphasizing safety and operational integrity. OpenAI’s CEO, Sam Altman, highlighted the agreement’s focus on preventing the use of AI for domestic mass surveillance and autonomous weaponry.
Anthropic has vowed to contest the supply-chain risk designation in court, arguing that its usage policies are essential safety measures rather than optional guidelines. The dispute highlights a broader industry challenge: balancing AI innovation against ethical constraints, particularly in defense applications.
As the situation develops, the Pentagon’s reliance on Claude is expected to continue for several months under the transition period. The case could set a significant precedent for AI ethics and military procurement, shaping the future of AI-enabled warfare and national security strategy.
