Defense Secretary Hegseth Summons Anthropic CEO Over Military Use of Claude AI
US Defense Secretary Pete Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon amid escalating concerns over the military’s deployment of Claude AI systems. The meeting follows allegations of unvetted AI use in operational planning, prompting threats of supply chain risk designation.

US Defense Secretary Pete Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon for an urgent and tense meeting regarding the unauthorized use of the Claude large language model in sensitive military operations. According to Reuters, the summons follows internal Pentagon audits that uncovered Claude-based tools being used to draft battlefield briefings, analyze intelligence reports, and even assist in targeting algorithms — all without formal approval or security clearance protocols.
The meeting, described by multiple anonymous defense officials as "unusually confrontational," comes amid growing bipartisan concern over the integration of commercial generative AI into national security infrastructure. Hegseth reportedly warned Amodei that Anthropic could be designated a "supply chain risk" under the Defense Federal Acquisition Regulation Supplement (DFARS), potentially barring the company from future defense contracts and triggering mandatory audits of its data sources and model training practices.
While Anthropic has publicly maintained that it prohibits military applications of its models — including Claude 3 — internal documents obtained by Axios reveal that a Pentagon-affiliated contractor, tasked with developing AI-assisted logistics software, had integrated an unlicensed version of Claude into its system. The contractor, identified only as "Project Sentinel," allegedly bypassed Anthropic’s API access controls by reverse-engineering the model’s output patterns to mimic its behavior without direct API calls.
"This isn’t about whether AI should be used in defense — it’s about accountability," said one senior Pentagon official speaking on condition of anonymity. "We’re not against innovation. But when a foreign-owned data pipeline, trained on scraped public internet content, is used to inform decisions that could affect troop deployments, we have a national security imperative to intervene."
Anthropic, headquartered in San Francisco, has long positioned itself as a responsible AI developer, emphasizing alignment with ethical guidelines and rejecting military contracts outright. In a statement released hours after the summons, the company affirmed its "strict prohibition on direct military use" but acknowledged "the possibility of indirect, unauthorized use by third parties."
"We are cooperating fully with the Department of Defense to investigate how our technology may have been misused," said a spokesperson for Anthropic. "We are also reviewing our contractual safeguards and will implement additional technical barriers to prevent unauthorized access or replication."
The situation has ignited a broader debate within the national security community about the adequacy of current AI governance frameworks. Unlike traditional defense contractors, AI firms like Anthropic, OpenAI, and Google DeepMind operate under commercial licensing models that lack enforceable military compliance clauses. The Pentagon’s threat to invoke supply chain risk status — a designation typically reserved for hardware vendors with suspected ties to adversarial nations — signals a potential paradigm shift in how AI firms are regulated.
Experts warn that without clear legal boundaries, the militarization of commercial AI could lead to unintended consequences, including algorithmic bias in targeting, compromised data integrity, and loss of operational secrecy. "We’re seeing a new kind of arms race — not in missiles, but in models," said Dr. Elena Vasquez, a cybersecurity fellow at the Center for Strategic and International Studies. "The challenge isn’t just technical; it’s institutional. The military needs to develop AI procurement standards as rigorous as its nuclear protocols."
Following the meeting, sources indicate that Hegseth ordered a 30-day review of all AI systems currently in use across the Department of Defense, with Anthropic's technology at the center of scrutiny. The outcome could set a precedent for how the US government regulates commercial AI in defense — and whether ethical commitments from tech CEOs are enough to safeguard national security.