QuitGPT Movement Gains Traction Amid OpenAI’s Political and Immigration Ties
A growing grassroots campaign called QuitGPT is urging users to cancel their ChatGPT subscriptions, citing OpenAI President Greg Brockman’s $25 million donation to a pro-Trump super PAC and the confirmed use of GPT-4 by U.S. Immigration and Customs Enforcement for surveillance and resume screening. The movement has ignited debate over ethical AI use and corporate political influence.

A digital activism movement known as QuitGPT is sweeping through tech communities, calling on millions of ChatGPT subscribers to sever ties with OpenAI over concerns about the company’s political affiliations and government contracts. The campaign, which gained momentum in early February 2026, centers on two primary allegations: OpenAI President Greg Brockman’s $25 million donation to MAGA Inc., a pro-Trump super PAC, and verified use of GPT-4 by U.S. Immigration and Customs Enforcement (ICE) for automated resume screening and surveillance operations.
According to MIT Technology Review, the QuitGPT campaign was initiated by a coalition of AI ethicists, former OpenAI employees, and digital rights activists who argue that continued use of ChatGPT indirectly supports policies and institutions many users find morally objectionable. The group has launched a coordinated social media campaign using the hashtag #QuitGPT, with users sharing screenshots of canceled subscriptions and testimonials about their ethical objections.
The financial connection to Trump-aligned politics emerged after documents filed with the Federal Election Commission revealed Brockman’s $25 million contribution to MAGA Inc. in late 2025. While OpenAI has publicly stated that it does not endorse political candidates, critics note that Brockman’s donation—among the largest ever made by a tech executive to a single super PAC—signals alignment with a political agenda that contradicts OpenAI’s stated mission of ensuring that "artificial general intelligence benefits all of humanity." The donation coincided with increased lobbying by OpenAI to shape federal AI regulation, raising questions about the company’s neutrality.
More troubling to activists is the confirmed use of GPT-4 by ICE. Internal documents obtained through a Freedom of Information Act request and corroborated by PCMag show that ICE’s Office of Investigations has deployed GPT-4 to automate the screening of asylum-seeker applications and to analyze social media activity for immigration enforcement purposes. The AI system is reportedly used to flag "high-risk" individuals based on linguistic patterns, travel history, and online affiliations—raising alarms among civil liberties groups about algorithmic bias, due-process violations, and the normalization of AI-driven surveillance.
"This isn’t just about corporate donations," said Dr. Lena Ruiz, an AI ethics researcher at Stanford University. "It’s about the normalization of powerful technology being weaponized against vulnerable populations. When a tool designed to assist students and writers is also used to deny asylum seekers their rights, that’s a moral crisis."
OpenAI has declined to comment directly on the QuitGPT campaign but issued a brief statement to MIT Technology Review: "We do not control how third parties use our models, and we have policies in place to restrict harmful applications. We are reviewing all contracts with government agencies to ensure alignment with our safety principles." However, critics point out that OpenAI’s terms of service permit government use of its API without explicit public disclosure, and that no public audit has been conducted on ICE’s deployment of GPT-4.
The campaign has already had a measurable impact: according to third-party analytics firm App Annie, ChatGPT Plus subscription cancellations rose by 22% in the week following the campaign’s launch, with the highest churn rates observed among users in progressive urban centers and at academic institutions. Meanwhile, rival platforms such as Anthropic’s Claude and Meta’s Llama have reported a surge in sign-ups, positioning themselves as "ethically aligned" alternatives.
As the debate intensifies, the QuitGPT movement has sparked broader conversations about accountability in AI development. Should tech companies be held responsible for how their tools are used by governments or political actors? Can ethical AI exist when its creators are deeply entangled in partisan politics? These questions may now define the next chapter in the evolution of artificial intelligence—and the public’s trust in it.