OpenAI Researcher Quits Over ChatGPT Ads, Warns of Facebook-Like Mistakes
A former OpenAI researcher has resigned, warning that the company's decision to test ads on ChatGPT mirrors the exploitative path of social media giants. Zoë Hitzig argues that monetizing users' most intimate conversations creates unprecedented manipulation risks. She rejects the notion that advertising is the only viable funding model for advanced AI.

By Investigative AI Desk | February 12, 2026
In a scathing public resignation, a former OpenAI researcher has accused the artificial intelligence pioneer of repeating the foundational mistakes of social media platforms like Facebook, prioritizing advertising revenue over user safety and ethical integrity. The warning comes as OpenAI begins testing advertisements within its flagship ChatGPT product, a move that has ignited fierce internal and external debate about the future of AI funding and its societal impact.
According to a guest essay in the New York Times, Zoë Hitzig, who spent two years as a researcher at OpenAI, resigned this week. Her departure was a direct response to the company's new advertising initiative. Hitzig stated she once believed she could help "get ahead of the problems" AI would create but now believes OpenAI has "stopped asking the questions" she was hired to help answer.
The Unprecedented Archive of Human Candor
Central to Hitzig's critique is the unique nature of the data ChatGPT has amassed. She describes it as an "archive of human candor that has no precedent," built on users' belief that they were conversing with a neutral entity free of commercial agendas. For years, users have confided their deepest secrets to chatbots: medical anxieties, relationship turmoil, spiritual doubts, and existential fears.
"Advertising built on that archive," Hitzig writes, "creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent." This, she argues, creates a surveillance and profiling capability far more intimate and potent than the behavioral advertising models of traditional social media, which primarily track clicks, likes, and shares.
Rejecting a "False Choice"
The prevailing debate around funding advanced AI often presents a binary, and bleak, set of options. According to Hitzig's analysis, the standard framing is a choice between two evils: either restrict world-changing technology to a wealthy elite who can afford high subscription fees, or accept an ad-based model that inherently exploits user psychology.
Hitzig firmly rejects this as a "false choice." She contends that technology companies, especially those with the resources and influence of OpenAI, can and must pursue alternative models. These models, she suggests, should keep powerful AI tools broadly accessible while systematically dismantling the corporate incentives to "surveil, profile and manipulate its users." While her essay does not detail specific alternatives, the implication is a call for structural innovation in business models, potentially involving non-profit structures, public-private partnerships, or novel licensing frameworks.
Echoes of Social Media's Fall from Grace
The parallels to the evolution of platforms like Facebook and Google are stark. These companies initially promised connection and information access, only to gradually build empires on sophisticated advertising engines that critics argue have eroded privacy, amplified misinformation, and optimized for engagement at all costs. Hitzig's warning suggests OpenAI is at a similar inflection point, where mission-driven idealism risks being subsumed by the relentless logic of shareholder returns and market dominance.
The introduction of ads represents a significant philosophical shift for OpenAI. Founded as a non-profit with a charter to ensure artificial general intelligence (AGI) benefits all of humanity, the company later created a capped-profit arm to attract the capital needed for its massive computational requirements. The move to advertising is seen by observers as the next step in commercializing its technology, raising questions about how its original governing principles will be upheld.
Industry at a Crossroads
Hitzig's resignation is more than a personal career decision; it is a canary in the coal mine for the entire generative AI industry. As models become more capable and integrated into daily life, the question of how to fund their astronomical development and operational costs without creating destructive externalities is paramount. Her action highlights a growing tension within AI labs between researchers focused on safety and alignment and executives and investors focused on growth and monetization.
The coming months will test whether OpenAI and its competitors can navigate this tension. Will they develop the "tools to understand" the novel manipulation risks Hitzig describes, or will the economic imperative to monetize user intimacy prove too strong? The path they choose will likely define the public's trust in, and the societal impact of, the next generation of artificial intelligence.
Source: Analysis based on a guest essay by former OpenAI researcher Zoë Hitzig published in The New York Times.