Anthropic CEO Accuses OpenAI of Ignoring AI Safety Risks Amid Revenue Surge
Anthropic CEO Dario Amodei warns that OpenAI is pursuing aggressive AI scaling without fully grasping the existential risks, even as Anthropic's revenue grows tenfold. With Nobel Prize-level AI potentially just years away, Anthropic's Responsible Scaling Policy stands in stark contrast to OpenAI's recent political entanglements.

San Francisco, February 14, 2026 — In a rare public critique of a rival, Anthropic CEO Dario Amodei has accused OpenAI of failing to comprehend the catastrophic risks inherent in its rapid AI development trajectory. Speaking in an internal briefing leaked to The Decoder, Amodei stated, "We’re on the cusp of Nobel Prize-level AI — maybe just one or two years away. But being off by a year in our safety estimates could mean bankruptcy, or worse. I’m not convinced OpenAI has done the math."
Anthropic, the AI safety-focused startup founded by former OpenAI researchers, has seen its annual revenue surge tenfold in the past year, driven by enterprise adoption of its Claude models and growing demand for responsible AI systems. According to company disclosures, the firm has secured over $7 billion in funding since 2021 and is now profitable at scale, a rare feat in the hyper-competitive generative AI sector. Yet despite its financial momentum, Anthropic has deliberately constrained its compute expansion, adopting a cautious "Responsible Scaling Policy" that mandates safety reviews before deploying models that exceed defined capability thresholds.
Amodei’s remarks come amid growing unease over OpenAI’s strategic direction. Recent reporting by WIRED reveals that OpenAI President Greg Brockman donated millions to former President Donald Trump’s 2024 campaign, justifying the contributions as an effort to "protect humanity from regulatory overreach." The move has sparked internal dissent and public criticism, with ethics experts questioning whether political lobbying aligns with OpenAI’s stated mission of ensuring AI benefits all of humanity.
Anthropic, by contrast, has doubled down on transparency and governance. Its website highlights Claude's Constitution — a detailed ethical framework governing model behavior — alongside its updated Responsible Scaling Policy, which includes third-party audits and public reporting thresholds for model capabilities. "We believe safety isn't a feature you add after the fact," said an Anthropic spokesperson in a statement. "It's the foundation."
While OpenAI continues to push for aggressive compute scaling — reportedly securing exclusive access to thousands of NVIDIA H100 GPUs — Anthropic has opted for efficiency over raw power. The company's engineering team has developed novel training techniques that reduce energy consumption per trained parameter by up to 40%, allowing it to achieve high performance without proportional increases in hardware investment.
The rivalry has escalated beyond rhetoric. According to a report on MSN, Anthropic has launched a $20 million public awareness campaign titled "AI Without Borders," aimed at educating policymakers and the public about the dangers of unchecked AI development. The initiative includes digital ads, university partnerships, and a documentary series featuring AI safety researchers from Stanford, MIT, and the Future of Life Institute.
Industry analysts view this as more than a marketing effort. "This is a strategic pivot," said Dr. Lena Ruiz, AI policy fellow at the Center for Security and Emerging Technology. "Anthropic is positioning itself as the ethical alternative — not just technologically, but morally. They’re betting that trust will become the most valuable currency in AI."
Meanwhile, OpenAI has not publicly responded to Amodei's comments. Its leadership has instead focused on product launches, including the recent rollout of GPT-5 and its enterprise API suite. But insiders suggest internal debates are intensifying. One anonymous engineer told The Decoder: "We're building something that could change civilization. But are we building it safely — or just fast?"
As the AI race enters its most critical phase, the divergence between Anthropic’s precautionary model and OpenAI’s growth-at-all-costs approach may define the future of artificial intelligence — and whether humanity retains control over its most powerful creation.
