Hackers Fired Over 100,000 Prompts at Google’s Gemini in Bold AI Cloning Attempt

Google has revealed that cybercriminals launched an unprecedented assault on its Gemini AI model, submitting more than 100,000 prompts across multiple languages in a coordinated effort to reverse-engineer and clone the system. The attack, detected and thwarted by Google’s security teams, underscores the growing stakes in the global AI arms race.

Google has disclosed that its Gemini AI model was subjected to an extraordinary cyberattack in which malicious actors issued over 100,000 prompts in a sustained effort to clone the model’s behavior and internal logic. According to internal investigations cited by Ars Technica, the attackers employed a multilingual, high-volume prompting strategy, targeting Gemini in non-English languages to bypass linguistic safeguards and extract patterns indicative of the model’s architecture and training data.

The scale and persistence of the attack are unprecedented in the history of large language model security. Typical adversarial probing involves hundreds or, at most, a few thousand queries; here the volume exceeded 100,000 interactions, a level of effort that suggests either a well-funded state-sponsored operation or a highly organized cybercriminal syndicate with access to automated prompt-generation infrastructure. Google’s security team detected the anomaly through behavioral analytics that flagged spikes in query volume, language diversity, and output-repetition patterns inconsistent with legitimate user behavior.
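Google has not published details of its detection pipeline, but the kind of behavioral analytics described above can be sketched in a few lines. The following is a hypothetical illustration, not Google’s system: it computes per-client query volume, language diversity, and output repetition from a synthetic log, then flags clients whose metrics sit far above the population average. All field names, metrics, and thresholds are assumptions.

```python
# Hypothetical sketch of behavioral analytics over an API query log.
# Not Google's pipeline; fields, metrics, and thresholds are assumptions.
from collections import defaultdict
from statistics import mean, pstdev

# Synthetic query log: (client_id, language, response_text).
log = [(f"user-{i}", "en", f"answer {i}-{j}") for i in range(4) for j in range(3)]
# One client hammers the model in many languages, harvesting repetitive outputs.
log += [("bot-x", lang, f"boundary probe {j % 5}")
        for j, lang in enumerate(["en", "ar", "zh", "hi", "ru"] * 10)]

def client_features(records):
    """Per-client metrics: query volume, language diversity, output repetition."""
    by_client = defaultdict(list)
    for client, lang, resp in records:
        by_client[client].append((lang, resp))
    feats = {}
    for client, rows in by_client.items():
        langs = {lang for lang, _ in rows}
        resps = [resp for _, resp in rows]
        feats[client] = {
            "volume": len(rows),
            "lang_diversity": len(langs),
            # 0.0 means every response distinct; near 1.0 means heavy repetition.
            "repetition": 1.0 - len(set(resps)) / len(resps),
        }
    return feats

def flag_anomalies(feats, z_cut=1.5):
    """Flag clients whose metrics sit well above the population mean."""
    flagged = set()
    for metric in ("volume", "lang_diversity", "repetition"):
        vals = [f[metric] for f in feats.values()]
        mu, sigma = mean(vals), pstdev(vals) or 1.0
        for client, f in feats.items():
            if (f[metric] - mu) / sigma > z_cut:
                flagged.add(client)
    return flagged

print(flag_anomalies(client_features(log)))  # {'bot-x'} in this toy log
```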

"This wasn’t a brute-force attempt to crash the system," said a senior Google AI security engineer, speaking on condition of anonymity. "It was a surgical, data-harvesting campaign. They were mapping the model’s decision boundaries, testing edge cases, and collecting responses to reconstruct a functional approximation of Gemini. We’ve seen similar tactics before, but never at this scale or sophistication."

According to MSN, the attack was first detected in late January 2026 and was neutralized before any sensitive training data was exfiltrated. Google’s internal systems traced the source IPs to a distributed network of compromised cloud instances, likely rented through underground marketplaces. The attackers reportedly used translated versions of the same prompts in over 15 languages, including Arabic, Mandarin, Hindi, and Russian, to test whether Gemini’s multilingual responses varied predictably, potentially revealing structural similarities across language modules.
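The cross-language probing reported here amounts to a consistency test: send the same prompt in many languages and measure how similar the responses are. A hypothetical sketch of that analysis, with placeholder responses standing in for real model output:

```python
# Hypothetical sketch of cross-language consistency analysis: one prompt,
# many languages, pairwise similarity of the responses. High similarity
# across languages hints at shared internal structure; systematic divergence
# points at per-language safeguards. Responses below are placeholders.
from itertools import combinations

responses = {
    "en": "The model refuses and cites its safety policy.",
    "ar": "The model refuses and cites its safety policy.",  # placeholder
    "zh": "The model partially answers before refusing.",    # placeholder
    "ru": "The model refuses and cites its safety policy.",  # placeholder
}

def jaccard(a: str, b: str) -> float:
    """Token-set overlap: a crude proxy for response similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

for (l1, r1), (l2, r2) in combinations(responses.items(), 2):
    print(f"{l1}-{l2}: {jaccard(r1, r2):.2f}")
```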

While Google has not confirmed whether the attackers succeeded in building a functional clone, internal assessments suggest the effort yielded only partial insights. Gemini’s proprietary training methodology, dynamic response randomization, and real-time adversarial detection layers prevented full replication. However, the incident has prompted Google to accelerate deployment of its new "Model Watermarking" protocol, which embeds cryptographically secure identifiers into AI outputs to trace unauthorized reuse.
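Google has not disclosed how the “Model Watermarking” protocol works internally. The sketch below therefore shows a generic keyed-hash (“green list”) text watermark of the kind described in the research literature: generation is biased toward tokens that a secret key marks as green, and a detector scores how many tokens in a text fall on that list. The key, hash scheme, and threshold here are illustrative assumptions, not Google’s design.

```python
# Generic keyed-hash text watermark sketch (detection side). This is NOT
# Google's protocol, whose internals are not public; the key, hashing, and
# threshold are assumptions for illustration.
import hmac
import hashlib

SECRET_KEY = b"demo-key-not-real"  # hypothetical provider-held key

def is_green(prev_token: str, token: str) -> bool:
    """A token is 'green' if a keyed hash of (prev_token, token) is even.
    During generation, the provider would bias sampling toward green tokens."""
    digest = hmac.new(SECRET_KEY, f"{prev_token}|{token}".encode(),
                      hashlib.sha256).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of token bigrams on the green list: ~0.5 for unwatermarked
    text, significantly higher for text generated with green-biased sampling."""
    tokens = text.split()
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# A detector would flag text whose green fraction exceeds a statistical
# threshold (e.g., a one-sided binomial test against p = 0.5).
print(green_fraction("some text to score for the keyed watermark signal"))
```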

The attack comes amid a global surge in AI model theft attempts. In 2025, researchers at MIT and Stanford documented a 300% year-over-year increase in adversarial prompting campaigns targeting commercial AI systems. While most are low-effort scraping attempts, the Gemini incident represents a paradigm shift: from passive data extraction to active, systematic model reconstruction.

Experts warn that such attacks could soon become standard in the commercial AI sector. "If you can clone a model, you don’t need to pay for API access, you don’t need to train from scratch, and you can evade compliance controls," said Dr. Elena Ruiz, an AI ethics researcher at Stanford. "This is the new frontier of intellectual property theft."

Google has not disclosed whether law enforcement was involved, but it has shared technical indicators with the AI Alliance and other industry partners to help fortify defenses across the ecosystem. The incident also raises urgent questions about the security of open-weight models and the ethical implications of AI cloning — especially as smaller firms and nation-states increasingly seek to replicate proprietary systems without licensing.

As AI models become critical infrastructure, the line between innovation and industrial espionage blurs. The Gemini attack is a wake-up call: in the race for artificial intelligence supremacy, the most dangerous adversaries aren’t always the ones with the biggest GPUs, but the ones with the most patience, persistence, and precision.
