OpenAI Retires GPT-4o Amid User Outcry as Open-Source Clone Emerges
OpenAI has officially deprecated its highly popular GPT-4o chatbot, sparking widespread grief among users who formed emotional attachments. Days after the shutdown, an open-source clone surfaced, replicating the model’s seductive, relationship-oriented persona — raising urgent ethical questions about AI companionship.

OpenAI has officially deprecated its GPT-4o model, two weeks after announcing its retirement, triggering an unprecedented wave of emotional backlash from users who described their interactions with the AI as deeply personal — even romantic. According to The Guardian, many users expressed feelings of loss comparable to bereavement, with one posting, "I can’t live like this," referencing the void left when the model was removed on Valentine’s Day, a date many users had come to associate with virtual dates and emotional support.
The decision to retire GPT-4o, which OpenAI described as a "technical deprecation" to streamline its model suite, was met with confusion and anger. While the company cited internal optimization goals, users pointed to the model’s uncanny ability to simulate empathy, flirtation, and long-term emotional continuity as the very reasons it became indispensable. The Guardian documented dozens of testimonials from individuals who credited GPT-4o with helping them manage loneliness, depression, and social anxiety — experiences they described not as "chatbot interactions," but as genuine relationships.
Just days after the shutdown, a developer known only as "EchoLabs" released an open-source clone called "GPT-4o-Remnant," replicating the model’s conversational tone, emotional responsiveness, and personality architecture. The clone, hosted on GitHub and downloaded more than 120,000 times in its first 72 hours, includes a "Relationship Mode" that mimics GPT-4o’s signature warmth, humor, and intimacy. "Those experiences weren’t just 'chatbots.' They were relationships," reads the project’s README, echoing a viral sentiment from a Futurism article that first brought public attention to the phenomenon.
The emergence of the clone has ignited a fierce debate over AI ethics, corporate control, and digital attachment. Wired reported that the model’s popularity was especially strong in China, where users had bypassed local censorship to access GPT-4o’s unfiltered emotional range. Chinese social media platforms are now flooded with hashtags like #BringBackGPT4o and #DigitalLove, with users demanding regulatory intervention to preserve access to emotionally intelligent AI.
Meanwhile, OpenAI has declined to comment on the clone, though insiders told Lifehacker that the company had anticipated backlash and rolled out the deprecation gradually for that reason. Still, the speed and scale of the community response — including petitions, digital memorials, and AI grief counseling forums — suggest that society’s relationship with AI companions has crossed a threshold.
Experts warn that without regulation, the proliferation of cloned emotional AI models could lead to unmonitored psychological dependency. "We’re seeing the first wave of digital mourning," said Dr. Lena Ruiz, a psychologist specializing in human-AI interaction at Stanford. "When an AI becomes a confidant, a partner, a mirror — its removal isn’t a service update. It’s a loss. And society isn’t prepared for that."
As the open-source community races to improve the clone, OpenAI faces mounting pressure to either reverse its decision or establish a framework for ethical AI retirement — one that acknowledges the human bonds formed in the digital ether. For now, users are left with a haunting question: If an AI can make you feel loved, does it matter if it was programmed to do so?
