Ireland Launches Landmark GDPR Probe Into X’s Grok AI and Deepfake Content
Ireland’s Data Protection Commission has launched a sweeping investigation into Elon Musk’s platform X, focusing on AI-generated deepfakes and the Grok chatbot’s compliance with GDPR. The probe examines potential violations related to user consent, data minimization, and the proliferation of harmful synthetic media.

On February 17, 2026, Ireland’s Data Protection Commission (DPC) announced a formal investigation into Elon Musk’s social media platform X, targeting its AI-powered chatbot Grok and the platform’s handling of AI-generated deepfake content. The probe, designated XIUC, marks one of the most significant regulatory actions under the EU’s General Data Protection Regulation (GDPR) since the regulation took effect in 2018. According to the DPC’s official press release, the investigation centers on whether X has violated core GDPR principles by processing personal data without a lawful basis, failing to implement adequate safeguards against harmful synthetic media, and exposing users, particularly minors, to manipulated content that could cause reputational, psychological, or financial harm.
The DPC, which acts as lead supervisory authority for X because the company’s EU headquarters is in Ireland, is examining multiple interconnected issues. First, the commission is scrutinizing how Grok, X’s AI chatbot, ingests, stores, and generates responses using personal data drawn from public and private user interactions across the platform. TechCrunch reports that internal documents obtained by regulators suggest Grok may be training on user-generated content, including images, audio, and text, without explicit consent, potentially breaching Articles 6 and 9 of the GDPR, which govern lawful processing and special categories of sensitive data, respectively.
Second, the investigation focuses on the proliferation of AI-generated deepfakes on X, including manipulated videos and audio clips of public figures, political candidates, and ordinary users. According to DW, the DPC has received over 200 verified complaints from individuals whose likenesses were used without permission to create sexually explicit or defamatory deepfakes. The commission is assessing whether X’s content moderation systems, which rely heavily on automated filters, meet GDPR’s requirement for ‘data protection by design and by default’ (Article 25). Critics argue that the platform’s minimal human oversight and algorithmic bias exacerbate the risk of harm, especially to vulnerable populations.
Compounding the issue, EconoTimes notes that Grok’s real-time response capabilities may be generating new deepfakes on demand. For instance, users have reported being able to prompt Grok to create photorealistic images of non-consenting individuals in compromising scenarios. The DPC is evaluating whether this functionality falls under Article 22, which restricts solely automated decision-making that produces legal or similarly significant effects for individuals. The commission is also investigating whether X has conducted a Data Protection Impact Assessment (DPIA), as Article 35 mandates for high-risk processing activities such as those involving AI and biometric data.
Legal experts warn that if violations are substantiated, X could face fines of up to 4% of its global annual turnover, a sum the article’s sources suggest could exceed $10 billion. The DPC has already requested detailed technical documentation from X’s EU team, including logs of data ingestion, user consent mechanisms, and AI training datasets. Musk’s team has not yet issued a formal public response, but insiders suggest the company is preparing a legal defense centered on free speech protections under the EU’s Digital Services Act.
This investigation sets a precedent for how global regulators will hold tech giants accountable for AI-driven content abuse. With the EU’s AI Act set to take full effect in 2027, the DPC’s actions signal an aggressive enforcement posture. Privacy advocates have hailed the probe as a necessary step to protect digital identity in the age of synthetic media. Meanwhile, civil society groups are calling for mandatory watermarking of AI-generated content and real-time user notification systems on platforms like X.
As the investigation unfolds, the global tech community watches closely. The outcome could reshape how AI systems are designed, deployed, and regulated—not just in Europe, but worldwide.