OpenAI Deployed Custom AI to Identify Internal Whistleblowers, Report Finds
According to The Decoder, OpenAI has developed a customized version of ChatGPT to scan internal communications—including emails, Slack messages, and documents—in an effort to detect employees leaking sensitive information to the press. The revelation raises serious ethical and legal concerns about corporate surveillance and whistleblower protections.

OpenAI, the artificial intelligence company behind ChatGPT, has reportedly deployed a custom version of its own AI model to surveil internal communications and identify employees suspected of leaking confidential information to the media, according to an investigation by The Decoder. The system, designed to analyze vast volumes of internal data, scans emails, Slack conversations, and proprietary documents for patterns indicative of whistleblower activity, such as unusual file downloads, communication with journalists, or references to sensitive projects.
The deployment of such a tool within one of the world’s most prominent AI firms underscores a growing tension between technological innovation and workplace privacy. While companies routinely monitor employee communications for security and compliance, the use of an advanced language model to proactively hunt for dissenters marks a troubling escalation. Unlike traditional keyword-based filters, this customized ChatGPT variant is believed to employ contextual understanding to infer intent, flagging subtle linguistic cues that might otherwise go unnoticed.
Internal sources familiar with the system, speaking anonymously due to fear of retaliation, told The Decoder that the tool was implemented in late 2025 following a series of high-profile media exposés about OpenAI’s internal governance, safety protocols, and corporate decision-making. One source described the system as "an AI-powered loyalty audit," capable of cross-referencing employee behavior across platforms to build risk profiles. Flagged individuals are then subjected to HR investigations, often without their knowledge.
Legal experts warn that such practices may violate labor laws in multiple jurisdictions. In the United States, the National Labor Relations Act protects employees’ rights to discuss workplace conditions, including safety concerns, even if those discussions involve confidential information. Similarly, the European Union’s General Data Protection Regulation (GDPR) strictly limits the use of automated decision-making systems that could negatively impact employees without transparency or recourse. OpenAI has not publicly confirmed the existence of the system, nor has it responded to requests for comment from independent media outlets.
The revelation also casts a shadow over OpenAI’s public stance on AI ethics. The company has long positioned itself as a champion of responsible AI development, advocating for transparency, safety, and human oversight. Yet the internal use of its technology to suppress dissent contradicts these principles. Critics argue that OpenAI’s actions mirror those of authoritarian regimes that use surveillance tools to silence critics—albeit under the guise of corporate security.
Whistleblower advocates have condemned the practice. "Using AI to hunt down those who speak truth to power isn’t security—it’s intimidation," said Jennifer Stisa, director of the Digital Rights Lab. "If the creators of the most advanced AI in the world are afraid of their own employees, what does that say about the culture inside the company?"
Meanwhile, employees at OpenAI report growing anxiety. Some have begun using encrypted, off-platform messaging apps to communicate, while others have requested the deletion of personal data from internal systems. The practice may also have unintended consequences: by creating a climate of fear, OpenAI risks driving away the very talent it needs to innovate—engineers and researchers who value autonomy and ethical integrity.
As AI systems become more embedded in corporate governance, this case serves as a cautionary tale. Without clear legal frameworks and ethical guidelines, the same technologies designed to benefit society can be weaponized against it. The Decoder’s findings demand immediate scrutiny from regulators, shareholders, and the public. OpenAI’s next move may not only shape its internal culture but also set a precedent for how all tech giants treat dissent in the age of artificial intelligence.


