OpenAI Deploys Custom ChatGPT to Detect Internal Leaks via Email and Slack Scans

OpenAI is reportedly using a specialized version of ChatGPT to monitor internal communications, scanning Slack messages and emails for signs of data leaks to the press. The system, developed in-house, analyzes linguistic patterns and document access logs to flag potential whistleblowers.

OpenAI has deployed a custom, internally developed version of ChatGPT to monitor employee communications and identify potential sources of leaks to the media, according to exclusive reporting from The Information. The system, which operates within the company’s secure network, scans emails, Slack channels, and internal document repositories for patterns indicative of unauthorized information sharing. This initiative represents a significant escalation in corporate surveillance practices within the AI industry, raising new ethical and legal questions about privacy, consent, and the boundaries of internal security.

The AI tool, described by insiders as a "special version" of ChatGPT, is not available to the public and has been fine-tuned to detect subtle linguistic cues—such as unusual phrasing, document references, or metadata anomalies—that may signal an employee preparing to disclose sensitive information. Unlike traditional data loss prevention (DLP) systems that rely on keyword matching or file transfer restrictions, OpenAI’s system leverages natural language understanding to contextualize communications. For example, if an employee accesses a confidential model training dataset and later sends a Slack message referencing "the latest results" in vague terms to an external contact, the AI flags the sequence for human review.
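
To make the contrast with keyword-based DLP concrete, the sketch below shows how a restricted-document access followed by a vaguely worded outbound message could be correlated and escalated for human review, which is the general pattern described above. The event schemas, domain check, and toy scoring heuristic (standing in for the natural language understanding the reporting alludes to) are illustrative assumptions only; nothing here reflects OpenAI's actual implementation.

```python
"""Illustrative sketch only; not OpenAI's system."""
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class AccessEvent:
    user: str
    resource: str          # e.g. a confidential dataset path (hypothetical)
    restricted: bool
    timestamp: datetime


@dataclass
class MessageEvent:
    user: str
    recipient_domain: str  # "internal" vs. an external domain (hypothetical)
    text: str
    timestamp: datetime


def vagueness_score(text: str) -> float:
    """Toy stand-in for a language model judging vague references to sensitive work."""
    vague_markers = ("latest results", "the thing we discussed", "new numbers")
    return 1.0 if any(m in text.lower() for m in vague_markers) else 0.0


def flag_for_review(access: AccessEvent, message: MessageEvent,
                    window: timedelta = timedelta(hours=24)) -> bool:
    """Flag a restricted access followed soon after by a vague external message."""
    return (
        access.restricted
        and access.user == message.user
        and message.recipient_domain != "internal"
        and timedelta(0) <= message.timestamp - access.timestamp <= window
        and vagueness_score(message.text) > 0.5
    )


if __name__ == "__main__":
    access = AccessEvent("alice", "datasets/confidential_training_run", True,
                         datetime(2025, 11, 1, 9, 0))
    msg = MessageEvent("alice", "example-news.com",
                       "Can share the latest results soon",
                       datetime(2025, 11, 1, 14, 30))
    print(flag_for_review(access, msg))  # True -> routed to human review
```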

According to The Information, the system was activated in late 2025 following a series of high-profile media disclosures about internal disagreements over AI safety protocols, model capabilities, and corporate governance. The leaks, which appeared in outlets such as The Verge and The New York Times, prompted CEO Sam Altman to authorize the development of an AI-driven counterintelligence tool. The system’s creators reportedly trained it on thousands of past internal communications and known leak patterns, enabling it to predict which employees are most likely to become sources based on behavioral shifts, such as increased access to restricted folders or sudden communication with journalists.
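
The "behavioral shift" signal mentioned above, such as a sudden jump in restricted-folder access, can be illustrated with a minimal sketch. The feature, window length, and threshold below are assumptions made for illustration; the reporting does not describe the actual features or model OpenAI uses.

```python
# Minimal, hypothetical sketch: score a behavioral shift as the ratio of the
# most recent week's restricted-folder accesses to the prior weekly average.
from typing import Dict, List


def restricted_access_shift(weekly_counts: List[int]) -> float:
    """Ratio of the latest week's restricted accesses to the prior average."""
    if len(weekly_counts) < 2 or sum(weekly_counts[:-1]) == 0:
        return 0.0
    baseline = sum(weekly_counts[:-1]) / (len(weekly_counts) - 1)
    return weekly_counts[-1] / baseline


def shift_flags(users: Dict[str, List[int]], threshold: float = 3.0) -> List[str]:
    """Return users whose recent access volume jumped past the threshold."""
    return [u for u, counts in users.items()
            if restricted_access_shift(counts) >= threshold]


if __name__ == "__main__":
    history = {"alice": [2, 1, 2, 9], "bob": [3, 3, 2, 3]}
    print(shift_flags(history))  # ['alice']
```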

While OpenAI has not officially confirmed the existence of the tool, multiple current and former employees confirmed its deployment to The Information. One engineer, speaking anonymously, said, "It’s like having a digital spy in every chat. You start second-guessing every message, even innocuous ones. It’s chilling." The company has not disclosed whether employees are notified of the monitoring, a point of contention among labor advocates and privacy experts.

Meanwhile, OpenAI has also rolled out a new fullscreen document viewer in its public-facing "Deep Research" mode, as reported by MacRumors. While this feature is designed to improve user experience by allowing seamless browsing of AI-generated research summaries, insiders note the underlying technology shares similarities with the internal leak-detection system—particularly in its ability to parse and contextualize long-form documents. However, OpenAI maintains that the public version lacks the behavioral analytics and access-log integration used internally.

Legal scholars warn that such monitoring could violate labor laws in jurisdictions like California and the European Union, where employee surveillance requires explicit consent and proportionality. The Electronic Frontier Foundation (EFF) has called for transparency, stating, "Companies cannot weaponize AI they claim to want to regulate in the public interest against their own workforce."

OpenAI has not responded to requests for comment on the monitoring system. However, in a recent internal memo obtained by The Information, the company’s head of security wrote: "We are not spying on our people—we are protecting our mission. The integrity of our models and the safety of our users depends on controlling access to proprietary information."

As AI companies race to protect intellectual property, OpenAI’s approach may set a precedent for other tech giants. Whether this marks a necessary safeguard or a dangerous overreach remains one of the most pressing debates in the age of artificial intelligence.

