OpenAI Researcher Quits Over ChatGPT Ads, Warns of Facebook-Like Mistakes
A former OpenAI researcher has resigned in protest over the company's decision to introduce advertising to ChatGPT, warning that monetizing user conversations creates unprecedented manipulation risks. Zoë Hitzig argues the company is repeating the surveillance-based business model mistakes of social media giants, exploiting a unique archive of human vulnerability.

By Investigative AI Desk | February 12, 2026
In a move that signals deepening internal conflict over the commercialization of artificial intelligence, an OpenAI researcher has publicly resigned, citing the company's decision to test advertisements on ChatGPT as the final breach of its original ethical mission. Her accompanying public critique draws a direct parallel to the surveillance-based advertising model pioneered by Facebook, warning that the intimate nature of AI conversations creates a new frontier of user exploitation.
In a guest essay for the New York Times, former researcher Zoë Hitzig writes that her departure follows two years of work shaping AI model development, pricing, and early safety policies. She says she joined OpenAI to help anticipate and mitigate the societal problems AI would create. "This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer," she wrote, according to the essay's preview discussed on Reddit.
The "Archive of Human Candor"
The core of Hitzig's warning centers on what she describes as an unprecedented "archive of human candor" built over years of ChatGPT interactions. She argues that users have been uniquely open with the conversational AI, revealing medical anxieties, relationship troubles, and spiritual beliefs under the perceived safety of talking to a neutral entity with "no ulterior agenda."
"Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent," Hitzig contends. This scenario, she suggests, mirrors the evolution of social media platforms, where vast troves of personal data initially gathered for connection were later weaponized for hyper-targeted advertising and behavioral influence.
The False Choice of AI Funding
The public discussion around funding advanced AI, Hitzig argues, has been falsely narrowed to a binary choice: either restrict access to a wealthy elite who can afford high subscription fees, or accept an ad-supported model that inevitably leads to surveillance and manipulation. She labels this a "false choice," asserting that technology companies have other avenues to pursue that could maintain broad access without creating perverse incentives to profile and manipulate users.
While her essay preview does not detail these alternative models, her critique implies a call for structures that decouple essential AI access from either extreme exclusivity or data exploitation. This echoes longstanding debates in tech ethics about finding sustainable, non-predatory business models for essential digital services.
Repeating History's Mistakes
The comparison to Facebook is particularly damning in Silicon Valley circles. Facebook's trajectory—from a novel social network to a global advertising behemoth criticized for eroding privacy, spreading misinformation, and optimizing for engagement at all costs—is often cited as a cautionary tale for new technologies. Hitzig's warning suggests OpenAI is on a similar path, leveraging a uniquely intimate dataset for commercial gain after establishing user trust under a different premise.
This resignation adds to a growing list of internal and external concerns about the direction of leading AI labs. As computational costs for cutting-edge models soar into the billions, pressure to generate massive, continuous revenue streams intensifies. The ad-supported model, proven immensely profitable for search and social media, presents a seemingly logical, yet ethically fraught, solution.
Implications for the AI Industry
Hitzig's public stand is likely to fuel existing debates about AI governance, corporate responsibility, and the ethical limits of monetization. It raises urgent questions: Can the industry develop a "third way" to fund democratized AI access? How should the profound intimacy of human-AI conversation be protected from commercial interests? And will OpenAI's move pressure other AI service providers to follow suit, creating a race to the bottom in user privacy?
The incident also highlights the tension between the original "open," safety-focused ideals of many AI nonprofits and startups and the hard commercial realities they face upon achieving breakthrough success. The path from research project to mainstream product often forces a reckoning with foundational principles.
As AI becomes further embedded in daily life, serving as confidant, tutor, and assistant, the business model choices made today will set powerful precedents. The warning from within OpenAI serves as a stark reminder that the architecture of monetization may prove just as consequential as the architecture of the neural networks themselves.
Source: Analysis based on the guest essay by former OpenAI researcher Zoë Hitzig, published as a New York Times Opinion piece and discussed in preview on Reddit.


