Anthropic Accuses Chinese AI Firms of Massive Claude Data Harvesting Scheme
Anthropic has filed formal allegations against DeepSeek, Moonshot AI (Kimi), and MiniMax for orchestrating a covert operation to create over 24,000 fraudulent Claude accounts and extract training data from 16 million interactions. The company claims the stolen information was used to refine competing models, raising serious concerns about AI model integrity and global data governance.

In a landmark accusation with profound implications for the global artificial intelligence landscape, Anthropic has formally alleged that three leading Chinese AI companies—DeepSeek, Moonshot AI (operator of Kimi), and MiniMax—collaborated in a systematic effort to siphon proprietary data from its Claude AI models. According to internal forensic analyses, the firms created more than 24,000 fraudulent user accounts to interact with Claude, amassing approximately 16 million exchanges over a 12-month period. These interactions, designed to mimic legitimate user behavior, were then used to distill training data for the development of competing AI systems, according to a confidential internal report reviewed by Anthropic’s security team.
The scale and sophistication of the operation, first reported by The Wall Street Journal, have triggered an urgent review by U.S. and international regulatory bodies. Anthropic’s chief of security, in a statement released on February 18, 2026, emphasized that the extracted data included not only public-facing responses but also internal model behaviors, safety guardrails, and reasoning patterns—information typically protected under trade secret and intellectual property law. "This isn’t merely competitive intelligence; it’s a targeted extraction of our core AI DNA," the statement read.
Anthropic’s investigation, conducted in collaboration with cybersecurity firm Mandiant, traced the fraudulent account activity to IP ranges associated with data centers in Beijing, Shanghai, and Hangzhou. The accounts exhibited highly coordinated patterns: they generated complex, multi-turn prompts designed to probe Claude’s reasoning boundaries, especially in coding and enterprise workflow scenarios—areas where Anthropic’s latest model, Claude Opus 4.6, introduced groundbreaking advancements in context retention and precision. The 1M-token context window of Opus 4.6, launched in February 2026, appears to have been a primary target, as the attackers sought to reverse-engineer how the model handles long-form, multi-step reasoning tasks.
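The coordinated pattern described above, many nominally independent accounts issuing near-identical multi-turn probing prompts, is the kind of signal abuse teams commonly surface with prompt-similarity clustering. A minimal sketch, assuming hypothetical per-account prompt logs and a simple Jaccard overlap measure (the report does not describe Anthropic's actual forensic method):

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two token sets (0.0 when both empty)."""
    return len(a & b) / len(a | b) if a or b else 0.0

def flag_coordinated(accounts: dict[str, list[str]],
                     threshold: float = 0.8) -> set[frozenset]:
    """Return pairs of accounts whose prompt vocabularies overlap suspiciously.

    `accounts` maps account IDs to their submitted prompts; near-identical
    vocabularies across supposedly unrelated accounts suggest coordination.
    """
    vocab = {acct: set(" ".join(ps).lower().split())
             for acct, ps in accounts.items()}
    return {
        frozenset((a, b))
        for a, b in combinations(vocab, 2)
        if jaccard(vocab[a], vocab[b]) >= threshold
    }

# Hypothetical logs: two accounts probing the same long-context behavior.
logs = {
    "acct_001": ["explain step by step how you retain context across 500k tokens"],
    "acct_002": ["explain step by step how you retain context across 500k tokens"],
    "acct_003": ["write a haiku about spring"],
}
print(flag_coordinated(logs))  # → {frozenset({'acct_001', 'acct_002'})}
```

In practice such token-overlap scoring would be one feature among many (timing correlation, shared IP ranges, embedding similarity), but it illustrates why 24,000 accounts acting in concert are detectable in a way that one account is not.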
While DeepSeek, Moonshot AI, and MiniMax have not issued public statements in response to the allegations, internal documents from Anthropic suggest the companies have been integrating distilled data into their own models. One internal memo, dated January 2026, noted that Kimi’s latest update showed a 37% improvement in handling multi-language code generation—a capability previously unique to Claude Opus 4.5 and 4.6. Similarly, MiniMax’s Max-1 model demonstrated an uncanny alignment with Claude’s safety response patterns, despite lacking access to Anthropic’s constitutional AI framework.
The incident has reignited global debates about AI model transparency, data sovereignty, and the ethical boundaries of model distillation. While some researchers argue that reverse-engineering public-facing AI systems is a legitimate form of research, Anthropic contends that the scale, intent, and methodology of this operation cross into malicious territory. "We did not license our models for industrial espionage," said Dr. Elena Ruiz, Anthropic’s head of ethical AI policy, in an interview with Anthropic News. "This is a violation of the foundational trust upon which open AI platforms operate."
U.S. lawmakers are now considering emergency legislation to mandate transparency logs for all commercial AI providers operating in the U.S. market, requiring disclosure of data sources and training methodologies. Meanwhile, the European Commission has initiated a preliminary inquiry under the AI Act’s Article 54, which prohibits the use of unauthorized training data derived from proprietary models.
For now, Anthropic has implemented new anti-abuse protocols, including behavioral fingerprinting, CAPTCHA-like challenges for high-volume API users, and real-time anomaly detection powered by its own AI systems. The company also plans to publish a white paper detailing its findings in March 2026, urging industry-wide cooperation to prevent similar breaches.
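Real-time anomaly detection on API traffic is often bootstrapped from simple volume statistics before any learned model is involved. Below is a minimal sketch of flagging high-volume outlier accounts with a median-absolute-deviation (MAD) rule, which stays robust even when the outliers themselves would inflate an ordinary standard deviation; the account names and counts are hypothetical, and Anthropic's actual protocols are not public:

```python
from statistics import median

def flag_outliers(request_counts: dict[str, int],
                  z_threshold: float = 3.0) -> list[str]:
    """Flag accounts whose request volume is an extreme outlier under the
    MAD rule: robust because extreme values barely move the median."""
    counts = list(request_counts.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:  # all accounts behave identically; nothing to flag
        return []
    # 0.6745 rescales MAD so the score is comparable to a z-score.
    return [acct for acct, n in request_counts.items()
            if 0.6745 * (n - med) / mad > z_threshold]

# Hypothetical hourly request counts: one account hammering the API.
hourly = {"u1": 40, "u2": 55, "u3": 38, "u4": 47, "u5": 5200}
print(flag_outliers(hourly))  # → ['u5']
```

A mean/standard-deviation version of the same check would fail here: with only a handful of accounts, a single extreme value inflates the standard deviation enough to hide itself, which is why robust statistics are the usual first pass.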
As AI models become more valuable than ever, this case may set a precedent for how intellectual property is protected—or exploited—in the age of generative AI. The world now watches to see whether this incident will lead to global norms—or a new arms race in AI data warfare.