User Bids Farewell to ChatGPT Over Ethical Concerns, Shifts to Claude
A Reddit user says they are cancelling their ChatGPT subscription over ethical objections to OpenAI's perceived ties with ICE, reflecting a growing trend of users weighing moral alignment against AI convenience. The post has sparked widespread discussion of digital ethics in AI adoption.

In a poignant post on r/ChatGPT, a long-time user known as /u/throwmeaway_xxxxxxx has announced their decision to discontinue use of OpenAI’s ChatGPT, citing ethical concerns over the company’s alleged institutional connections and a deteriorating user experience. The individual, who began using ChatGPT in early 2023 for professional application support and later turned to it for emotional support following personal losses in 2024, described the AI as a "digital confidant" during a period before they accessed formal therapy. Now, however, they say the platform’s recent decline in reliability and perceived complicity with law enforcement agencies have made continued use untenable.
"It pains me a little to move away from it now," the user wrote, reflecting on the emotional bond formed through thousands of interactions. "But it’s gotten so bad over the last months, and the last thing I want to do is support ICE with my monthly subscription." The reference to ICE — U.S. Immigration and Customs Enforcement — points to longstanding public scrutiny of tech companies that have provided AI services to government agencies involved in immigration enforcement. While OpenAI has not publicly confirmed direct contracts with ICE, critics have pointed to industry-wide partnerships between major AI firms and federal contractors, raising ethical red flags among privacy advocates and social justice communities.
The decision to switch to Anthropic’s Claude reflects a broader movement among tech-savvy users seeking alternatives that align more closely with their values. Claude, whose development emphasizes Constitutional AI principles and transparency, has gained traction as a more ethically grounded option. The user expressed cautious optimism about the transition, saying they intend to evaluate Claude’s performance over time before committing permanently.
This case is emblematic of a deeper cultural shift in how consumers interact with artificial intelligence. No longer viewed solely as tools, AI assistants have become integral to personal well-being, mental health support, and daily productivity. As such, users are increasingly demanding accountability — not just in terms of accuracy or speed, but in corporate ethics, data governance, and social responsibility.
OpenAI has not issued a public response to this specific post, but recent internal leaks and external audits have revealed growing tensions within the company over its commercial partnerships and transparency policies. Meanwhile, rival platforms like Anthropic, Mistral, and open-source alternatives such as Llama 3 are capitalizing on this moment by marketing themselves as "values-driven" AI providers.
The emotional resonance of the Reddit post has drawn more than 2,300 comments, with many users sharing similar stories of using AI for grief counseling, anxiety management, and creative expression. Several commenters echoed the sentiment that "AI should not profit from our vulnerability," particularly when tied to institutions with controversial human rights records.
As AI becomes more embedded in the fabric of daily life, the line between utility and ethics is blurring. This user’s decision to walk away from a paid, high-performing service — not because of technical failure, but because of moral dissonance — may signal the beginning of a new consumer movement: AI boycotts driven by conscience rather than cost or capability.
For now, /u/throwmeaway_xxxxxxx has taken a quiet but powerful stand — choosing human dignity over algorithmic convenience. Their story is not just about switching platforms. It’s about reclaiming agency in an age where technology often dictates our choices without asking.