State-Sponsored Hackers Use AI to Target Defense and Critical Infrastructure
New intelligence reveals state-backed cyber actors from China, Iran, North Korea, and Russia are leveraging generative AI models like Google’s Gemini to launch sophisticated phishing campaigns and rehearse attacks on defense contractors and critical infrastructure. Leaked documents confirm China’s testing of cyber operations against regional neighbors, while Google’s Threat Intelligence Group warns of an escalating AI-driven threat landscape.

State-sponsored cyber actors from China, Iran, North Korea, and Russia are increasingly deploying artificial intelligence to amplify the scale, speed, and sophistication of their cyber operations, according to a new report from Google’s Threat Intelligence Group (GTIG). Leveraging advanced generative AI models such as Gemini, these actors are automating the creation of highly convincing phishing emails, generating malware code, and even mimicking the writing styles of trusted insiders to bypass human and technical defenses. The trend marks a dangerous evolution in cyber warfare, in which AI serves no longer merely as a tool but as a strategic force multiplier for nation-state actors.
Google’s quarterly AI Threat Tracker, released this week, documents a 300% increase in AI-assisted phishing campaigns targeting defense contractors, government agencies, and research institutions since late 2025. These campaigns often begin with AI-generated emails that reference real internal projects, using publicly available data from corporate websites and LinkedIn profiles to personalize messages. One campaign, traced to a group linked to Iran’s Islamic Revolutionary Guard Corps (IRGC), used Gemini to draft over 12,000 targeted emails in under 48 hours, achieving a 17% click-through rate—far higher than traditional phishing efforts.
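The mechanics matter for defenders: per-recipient personalization defeats the exact-match blocklists that legacy mail filters rely on, even though the underlying template is shared. The sketch below illustrates the idea with hypothetical message text (the template, names, and project name are invented for illustration, not taken from the campaigns described above).

```python
import hashlib
from difflib import SequenceMatcher

# Hypothetical template, standing in for an AI mail-merge lure.
TEMPLATE = ("Hi {name}, the Q3 review for Project Atlas is attached. "
            "Please confirm access by end of day.")

# Three per-recipient variants, as automated personalization would produce.
emails = [TEMPLATE.format(name=n) for n in ("Dana", "Priya", "Marcus")]

# An exact-hash blocklist (a common legacy spam control) sees three
# distinct messages: every variant produces a different digest.
hashes = {hashlib.sha256(e.encode()).hexdigest() for e in emails}
print(len(hashes))  # 3 — no two variants match exactly

# Fuzzy similarity still reveals the shared template, which is one
# reason defenders cluster near-duplicate mail rather than hash it.
sim = SequenceMatcher(None, emails[0], emails[1]).ratio()
print(sim > 0.9)  # True — the variants are near-identical
```

The design point: campaign detection has to operate on message *similarity* and sending patterns, not byte-exact signatures, once lures are generated rather than copied.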
Meanwhile, leaked technical documents obtained by The Record reveal that Chinese cyber units have been actively rehearsing attacks on the critical infrastructure of neighboring countries, including power grids, telecommunications networks, and transportation systems. The documents, dating from late 2025, contain detailed simulation scripts, vulnerability maps, and payload deployment timelines targeting systems in Southeast Asia and the Indian subcontinent. According to Alexander Martin, senior analyst at Recorded Future, "These aren’t theoretical exercises. The level of technical specificity suggests operational readiness, possibly in preparation for geopolitical escalation."
North Korea’s Lazarus Group and Russia’s Sandworm have also integrated AI into their malware development pipelines. Lazarus, known for its financial heists and ransomware campaigns, now uses AI to evade detection by security tools trained on historical attack patterns. Sandworm, linked to the 2015 and 2016 Ukrainian power grid attacks, has begun using AI to dynamically alter the behavior of its Industroyer2 malware, rendering signature-based detection largely ineffective.
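Why dynamic mutation breaks signature-based tools is easy to show in miniature. In this illustrative sketch (the byte strings are invented placeholders, not real malware artifacts), two payloads with the same core logic differ only in padding, yet a hash-based signature database catches only the variant it has already indexed.

```python
import hashlib

# Two stand-in "payloads": identical core logic, trivially different bytes,
# mimicking per-deployment mutation of the kind described above.
payload_v1 = b"\x90\x90" + b"CORE_LOGIC" + b"\x00" * 4
payload_v2 = b"\x90" * 5 + b"CORE_LOGIC" + b"\x00"  # padding reshuffled

# The defender has previously indexed only the first variant.
signature_db = {hashlib.sha256(payload_v1).hexdigest()}

def matches_known_signature(sample: bytes) -> bool:
    """Return True if the sample's hash appears in the signature database."""
    return hashlib.sha256(sample).hexdigest() in signature_db

print(matches_known_signature(payload_v1))  # True  — known variant caught
print(matches_known_signature(payload_v2))  # False — mutated variant slips past
```

This is why vendors have shifted toward behavioral and anomaly-based detection: the *effect* of the code is stable across mutations even when its bytes are not.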
While Google’s report highlights the growing threat, it also underscores a critical gap in global defenses: most cybersecurity systems are still designed to detect known signatures or anomalies, not AI-generated content that evolves in real time. "We’re in a new arms race," said Dr. Elena Voss, a cybersecurity expert at Stanford’s Center for Internet and Society. "The attackers are using AI to outpace our ability to understand, let alone counter, their tactics."
The implications extend beyond national security. Defense sector employees, researchers, and even journalists are now primary targets. According to The Guardian’s investigation into recent intrusions, hackers have successfully compromised email accounts of personnel at defense contractors in the U.S., UK, and Australia, using AI-crafted lures that mimic HR notifications or internal project updates.
As AI models become more accessible—even through compromised cloud services—experts warn that the barrier to entry for state-sponsored actors is shrinking. While China’s operations appear most methodical and infrastructure-focused, Iran and North Korea are prioritizing disruption and financial gain, and Russia continues to blend cyber espionage with influence operations. The convergence of these strategies, powered by AI, signals a new era of cyber conflict—one where the battlefield is invisible, the weapons are self-learning, and the targets are everywhere.
U.S. and EU officials are now urging international cooperation on AI cybersecurity standards, but progress remains slow. Without coordinated attribution, export controls on dual-use AI tools, and real-time threat intelligence sharing, the world risks entering a period of uncontrolled digital escalation.


