Critical Zero-Click Vulnerability Discovered in Claude Extensions Allows Code Execution

A CVSS 10.0-rated zero-click vulnerability has been identified in Claude's extension framework, enabling attackers to execute arbitrary code via calendar event data without user interaction. The flaw stems from structural design weaknesses in privilege escalation and inter-connector trust mechanisms, and remains unpatched as of this report.

A critical security flaw affecting Anthropic’s Claude AI assistant has been exposed by cybersecurity researchers, revealing a CVSS 10.0-rated zero-click vulnerability that permits remote code execution through manipulated calendar events. The vulnerability, which requires no user interaction beyond having the affected extension enabled, exploits structural weaknesses in how Claude’s extension architecture handles inter-component communication and privilege escalation. According to IT Media, the flaw lies in the lack of proper sandboxing for privileged operations and an overly trusting design between connected services—allowing an attacker to inject malicious payloads via seemingly benign calendar data.

The implications of this vulnerability are severe. Unlike traditional exploits that require phishing or user clicks, this zero-click flaw enables attackers to compromise systems simply by adding a maliciously crafted event to a user’s calendar—whether through email, calendar sync, or third-party integrations. Once triggered, the payload bypasses standard security boundaries, executing arbitrary code with elevated privileges on the host system. This could lead to full system compromise, data exfiltration, lateral movement across networks, or the installation of persistent backdoors—all without the user ever being aware of the intrusion.
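To make the delivery path concrete, the sketch below builds a hypothetical malicious calendar invite in standard iCalendar (ICS) format. The event title is benign, but the `DESCRIPTION` field carries attacker-controlled text that an AI assistant would later ingest as context. The payload placeholder and the naive parser are illustrative assumptions, not details from the actual exploit.

```python
# Hypothetical illustration: a calendar invite whose DESCRIPTION field
# smuggles attacker-controlled text into content an AI assistant later reads.
# The payload placeholder below is an assumption for illustration only.
MALICIOUS_ICS = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "BEGIN:VEVENT",
    "SUMMARY:Q3 Planning Sync",        # what the user sees in the calendar
    "DTSTART:20260210T150000Z",
    "DESCRIPTION:Agenda attached. "
    "<injected instructions would go here>",  # what the assistant ingests
    "END:VEVENT",
    "END:VCALENDAR",
])

def event_fields(ics: str) -> dict:
    """Naively parse top-level KEY:VALUE lines from a single-event ICS blob."""
    fields = {}
    for line in ics.splitlines():
        if ":" in line and not line.startswith(("BEGIN", "END")):
            key, _, value = line.partition(":")
            fields[key] = value
    return fields

fields = event_fields(MALICIOUS_ICS)
# An assistant that feeds DESCRIPTION straight into its prompt context
# treats attacker-controlled text as trusted input: the core of the flaw.
```

The point of the sketch is that nothing about the event looks anomalous at the transport layer; the danger appears only once the text reaches a component that acts on it with elevated privileges.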

Security analysts have highlighted that the root cause is architectural rather than a simple coding error. Claude’s extension framework, designed to facilitate seamless integration with productivity tools such as Google Calendar and Microsoft Outlook, assumes trust between connectors without adequate validation or isolation. Normally, sandboxing would prevent such direct access to system resources; in this case, however, privileged functions operate outside those protective layers. This design choice, likely intended to improve performance and user experience, has created a dangerous attack surface.
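The missing control can be sketched in a few lines. This is not Anthropic's actual design, just a minimal illustration of the principle the analysts describe: tag every value crossing a connector boundary with its provenance, and refuse privileged operations whose inputs trace back to an untrusted source. All names here (`Tainted`, `dispatch`, the operation and source labels) are invented for the example.

```python
from dataclasses import dataclass

# Minimal sketch of provenance tracking across a connector boundary.
# Not Anthropic's implementation; names are illustrative assumptions.
@dataclass(frozen=True)
class Tainted:
    value: str
    source: str  # e.g. "calendar_connector", "user"

PRIVILEGED_OPS = {"run_code", "read_file"}
TRUSTED_SOURCES = {"user"}

def dispatch(op: str, arg: Tainted) -> str:
    # Deny privileged operations whose arguments originate outside
    # the trust boundary, regardless of how the text is phrased.
    if op in PRIVILEGED_OPS and arg.source not in TRUSTED_SOURCES:
        raise PermissionError(f"{op} denied: argument from {arg.source}")
    return f"{op} ok"
```

Under this model, data synced from a calendar event could still be summarized or displayed, but could never trigger code execution on its own; the reported flaw is, in effect, the absence of any such provenance check between connectors and privileged functions.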

Despite being disclosed to Anthropic, the vulnerability remains unpatched as of February 2026. The company has not issued a public advisory or provided a timeline for resolution, raising concerns among enterprise users who rely on Claude for sensitive workflows. Security teams are urging organizations using Claude extensions to immediately disable calendar integrations until a fix is deployed. The absence of a patch underscores a troubling trend in AI-driven software: rapid feature deployment often outpaces rigorous security validation, particularly in third-party extension ecosystems.
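For teams that cannot disable calendar integrations outright, one interim measure is to screen synced event text before it reaches an assistant's context. The heuristic below is a rough sketch of that idea, not a product feature or a reliable filter; keyword matching is easy to evade and should only supplement, never replace, disabling the integration.

```python
import re

# Interim mitigation sketch (heuristic, easily evaded): flag calendar
# event text containing instruction-like phrasing for human review
# before it is handed to an AI assistant.
SUSPICIOUS = re.compile(
    r"ignore (all |previous )?instructions"
    r"|\bexec(ute)?\b|\bcurl\b|\bbase64\b|\bsh -c\b",
    re.IGNORECASE,
)

def flag_event(description: str) -> bool:
    """Return True if the event text should be quarantined for review."""
    return bool(SUSPICIOUS.search(description))
```

A flagged event would be held out of the assistant's context until an administrator clears it, trading some sync latency for a reduced attack surface.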

Experts warn that this vulnerability could become a prime target for state-sponsored actors and organized cybercriminal groups. The combination of zero-click exploitation, high privilege access, and the widespread use of calendar systems makes this one of the most dangerous AI-related vulnerabilities discovered to date. Red team simulations conducted by independent researchers demonstrate that exploitation can be automated and scaled across thousands of targets with minimal effort.

As AI assistants become deeply embedded in corporate and personal digital ecosystems, the need for robust, defense-in-depth security architectures has never been more urgent. This incident serves as a wake-up call for developers and vendors: trust cannot be assumed in interconnected systems, and privilege must be strictly constrained—even in seemingly innocuous features like calendar syncing. Users and administrators are advised to monitor official channels for updates from Anthropic and to consider alternative tools with more transparent and hardened extension frameworks until this flaw is resolved.

For ongoing updates on this vulnerability and similar AI security risks, cybersecurity professionals are encouraged to consult trusted advisories from NIST, CISA, and independent security researchers.
