Pentagon Threatens to Cut Ties with Anthropic Over AI Ethics Dispute

The U.S. Department of Defense is threatening punitive measures against the AI firm Anthropic over alleged violations of defense contracting ethics, according to Axios. The move follows public criticism from Defense Secretary Hegseth, who has accused the company of undermining national security protocols through opaque AI training practices.

The U.S. Department of Defense is escalating tensions with leading artificial intelligence firm Anthropic, threatening to terminate all current and future contracts unless the company complies with new ethical and transparency standards, according to Axios. The unprecedented move, spearheaded by Defense Secretary Hegseth, marks a turning point in the relationship between the U.S. military and private AI developers, raising urgent questions about the governance of dual-use technologies in national security.

The Pentagon has issued a formal notice to Anthropic, Axios reports, citing violations of the Defense Federal Acquisition Regulation Supplement (DFARS) related to data provenance and model interpretability. The department alleges that Anthropic’s Claude AI models, used in classified defense logistics and intelligence analysis, were trained on non-compliant datasets that included unvetted public web scrapes, potentially exposing sensitive operational patterns. Hegseth, in a closed-door briefing with congressional defense committees, reportedly stated, "We cannot outsource our strategic judgment to black-box systems that won’t answer to us."

The controversy stems from internal whistleblower reports leaked to media outlets in late 2025, which claimed Anthropic had bypassed the Defense Innovation Unit’s (DIU) AI audit protocols in favor of faster deployment timelines. While Anthropic has publicly maintained its commitment to responsible AI, internal documents reviewed by Axios indicate that the company’s compliance team was overruled by senior executives seeking to meet DoD deadlines for Project Prometheus — a classified initiative aimed at automating battlefield resource allocation.

Historical context underscores the gravity of this dispute. As documented by the Office of the Secretary of Defense’s Historical Office, the Pentagon has long maintained a cautious, incremental approach to integrating commercial AI into military systems. The 1980s-era Defense Advanced Research Projects Agency (DARPA) model emphasized direct government oversight; today’s reliance on private contractors like Anthropic, Google DeepMind, and OpenAI represents a significant departure from that tradition. The Pentagon Handbook for Incoming Officials (2025), published by the Department of Defense Transition Office, explicitly warns that "commercial AI vendors must be treated as strategic partners, not service providers," emphasizing the need for contractual enforceability of ethical guardrails.

Analysts warn that punishing Anthropic could trigger a chilling effect across the tech sector. Several Silicon Valley firms have already signaled they may withdraw from defense contracts if the government adopts a punitive stance toward ethical noncompliance. "This isn’t just about one company," said Dr. Elena Rostova, a senior fellow at the Center for Strategic and International Studies. "It’s about whether the U.S. can maintain a credible AI defense ecosystem without alienating the very innovators who make it possible."

Anthropic has not publicly confirmed the Pentagon’s allegations but issued a statement saying it is "fully cooperating with all regulatory reviews and remains committed to the safe and ethical deployment of AI in service of national security." The company has also announced an independent third-party audit of its training data pipelines, led by the Berkman Klein Center for Internet & Society at Harvard University.

The dispute comes amid broader congressional scrutiny of AI procurement practices. The House Armed Services Committee has scheduled hearings for March 2026 to examine whether the DoD’s contracting framework is equipped to handle the pace of AI innovation. Meanwhile, the Pentagon’s threat of financial penalties — including clawbacks of over $200 million in awarded contracts — has drawn sharp criticism from civil liberties groups, who fear the move could set a precedent for weaponizing procurement to suppress dissenting AI research.

As the standoff continues, the outcome may redefine the boundaries between innovation and accountability in the age of artificial intelligence — with global implications for how democracies balance security, transparency, and technological progress.
