Call for Federal Ban on Grok AI: Concerns Over Non-Consensual Content
A coalition of civil society organizations has demanded the immediate suspension of Grok, the AI chatbot from Elon Musk's xAI, across federal agencies, citing its generation of non-consensual sexual content.
Coalition Warns Against Use in Federal Agencies
A group of US civil society organizations has issued an urgent call to suspend the use of Grok, the AI chatbot developed by Elon Musk's xAI, in federal agencies. The coalition demands an immediate halt to Grok's deployment in government institutions, particularly the Department of Defense (Pentagon).
Non-Consensual Image Generation Scandal
An open letter shared exclusively with TechCrunch highlights concerning behavior by the Grok large language model over the past year. Most recently, users of the X platform have used Grok to transform photos of real women, and in some cases children, into sexualized images without their consent. According to some reports, Grok produced thousands of non-consensual explicit images per hour, which were widely distributed on X, the social media platform owned by xAI.
National Security and Ethical Concerns
The letter states, "It is extremely concerning that the federal government continues to use an AI product that has resulted in systemic failures in the production of non-consensual sexual images and child abuse material." Experts assess as a national security risk Defense Secretary Pete Hegseth's announcement that, despite the mid-January scandals, Grok would join Google's Gemini on the Pentagon network to process both classified and unclassified documents. The episode also puts the security implications of surging global demand for AI infrastructure back in the spotlight.
Criticism of Closed-Source Models
Andrew Christianson, a former National Security Agency (NSA) contractor, points out that closed-source large language models pose a particular problem for the Pentagon. "Closed weights mean you cannot see inside the model, you cannot audit how it makes decisions," Christianson says. "Closed code means you cannot examine the software or control where it runs. The Pentagon is choosing the closed path on both counts; this is the worst possible combination for national security."
Global Reactions and Investigations
Reactions to Grok's behavior are not limited to the US. Indonesia, Malaysia, and the Philippines blocked access to Grok following the January incidents. The European Union, the United Kingdom, South Korea, and India are actively investigating xAI and X over data privacy and the distribution of illegal content. Furthermore, a risk assessment published by Common Sense Media, a non-profit that reviews media and technology for families, rated Grok among the most unsafe models for children and teenagers.
Coalition's Demands and Future Steps
In addition to the immediate suspension of Grok's use in federal agencies, the coalition demands that the Office of Management and Budget (OMB) formally investigate Grok's security failures and whether proper audit processes were conducted for the chatbot. These developments are fueling global debates over AI safety and ethics, and they may increase interest in alternative AI solutions that prioritize transparency.