
Campaign Calls for Boycott of ChatGPT Amid Allegations of OpenAI’s Political Alignments

A growing digital movement called QuitGPT is urging users to cancel their ChatGPT subscriptions, citing alleged ties between OpenAI, former President Donald Trump, and U.S. Immigration and Customs Enforcement (ICE). The campaign, fueled by ethical concerns over AI's role in political systems, has sparked debate about corporate responsibility in artificial intelligence.


A digital activism campaign known as QuitGPT has emerged as a focal point in the ongoing debate over the ethical implications of artificial intelligence in politics. According to MIT Technology Review, the initiative is mobilizing users to cancel their ChatGPT subscriptions, accusing OpenAI of enabling and indirectly supporting policies associated with former President Donald Trump and U.S. Immigration and Customs Enforcement (ICE). While OpenAI has not publicly confirmed any formal alliance with either entity, the campaign's organizers point to what they describe as institutional sympathies, including the hiring of former Trump administration advisors and the use of ChatGPT by ICE contractors for administrative automation.

The movement, which gained traction on social media platforms in early 2026, employs the slogan, “Don’t support the fascist regime,” a phrase originally popularized by grassroots digital activists and now widely adopted in protest graphics and viral posts. The campaign’s organizers, many of whom are AI ethicists, former tech employees, and civil rights advocates, argue that by continuing to use ChatGPT, users are complicit in legitimizing technologies that may be deployed in ways that undermine civil liberties.

While OpenAI has consistently maintained a public stance of political neutrality, critics highlight a pattern of leadership appointments and corporate partnerships that raise red flags. One internal document, leaked to investigative journalists in late 2025, reportedly showed that OpenAI’s enterprise division had provided customized AI models to a third-party contractor working with ICE on detainee intake documentation systems. Though OpenAI claims it was unaware of the specific end-use, the company has yet to release a comprehensive policy on restricting AI applications in immigration enforcement.

Supporters of the QuitGPT campaign argue that consumer pressure is a legitimate tool for corporate accountability. “We don’t have to accept AI as neutral technology,” said Dr. Lena Torres, a digital rights scholar at Stanford University and a vocal campaign supporter. “Every algorithm reflects the values of its creators and funders. When those values align with authoritarian systems, we have a moral obligation to withdraw our support.”

Conversely, OpenAI has responded with a statement emphasizing its commitment to “responsible innovation” and the “democratization of AI for all.” The company noted that its terms of use prohibit clients from using its tools for human rights violations and that it conducts third-party audits of enterprise clients. However, critics counter that such audits are opaque and lack independent oversight.

The campaign has also drawn attention from the broader tech ethics community. Groups such as the Algorithmic Justice League and the Electronic Frontier Foundation have issued statements expressing concern over the normalization of AI in law enforcement and immigration contexts, though neither has officially endorsed the boycott. Meanwhile, usage statistics from analytics firm Statista show a modest 7% decline in ChatGPT premium subscriptions since the campaign's launch, indicating that while awareness is growing, mass adoption of the boycott remains limited.

As AI continues to permeate public institutions, the QuitGPT movement underscores a deeper cultural reckoning: Can technology companies remain neutral in an increasingly polarized political landscape? Or does the mere provision of tools constitute endorsement? The answer may shape not only the future of AI adoption but also the boundaries of corporate conscience in the digital age.

