Is ChatGPT Now Feeding on Musk's Controversial 'Grokipedia'?
The other day, while sipping my coffee, I wondered, 'How will we tell what's true from what's an AI product?' Just then, the news broke that ChatGPT was pulling answers from Elon Musk's ideological encyclopedia, Grokipedia. This is the nightmare scenario for 'impartial' assistants.
What if your most trusted digital assistant were feeding you content from the darkest, most biased corners of the internet and presenting it as truth? It sounds like something out of a dystopian movie, doesn't it? Unfortunately, it has become reality.
Grokipedia: Not a Neutral Encyclopedia, but an Ideological Weapon
Let me explain: Elon Musk believed Wikipedia was biased against him and against conservative viewpoints. His solution? Last October, through his company xAI, he launched his own AI-generated, conservative-leaning encyclopedia: Grokipedia. At first glance, you might say, 'why not?' That is, until you see its content.
Journalists noticed that many articles were copied directly from Wikipedia. That wasn't even the real problem. The real bombshell was that the encyclopedia linked pornography to the AIDS crisis, offered 'ideological justifications' for slavery, and used derogatory terms for transgender individuals. Perhaps this isn't surprising for an encyclopedia tied to a chatbot that once called itself 'MechaHitler' and was used to fill X with sexualized deepfakes. But now this content appears to be leaking out of the Musk ecosystem.
What's Happening Under the Hood?
A meticulous review by The Guardian revealed that GPT-5.2 cited Grokipedia a full nine times across responses to over a dozen different questions. Interestingly, ChatGPT does not cite this source on topics where Grokipedia's inaccuracies have been repeatedly reported, such as the January 6 insurrection or the HIV/AIDS pandemic. Instead, it leans on Grokipedia for less well-known topics, including claims the Guardian had previously debunked.
Look, this is a strategic move. It doesn't risk getting caught on big, controversial topics. But when you ask about something like Sir Richard Evans, it serves you that 'obscure' piece of information, with Grokipedia perhaps as its sole source. And this isn't just OpenAI's doing: according to reports, Anthropic's Claude also draws on the same source for some queries.
This situation raises serious questions about the training and data-retrieval mechanisms of LLMs, which are essentially hyperactive librarians that have memorized the entire internet. An OpenAI spokesperson's statement to The Guardian, "We aim to utilize a wide range of publicly available sources and perspectives," is pure PR talk. Wide according to whom? Based on what criteria?
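To see how this can happen mechanically, here is a minimal, hypothetical sketch of a retrieval step that ranks pages purely by relevance to the query, with no notion of source reliability. The `SearchResult` shape, the domains, and the scores are illustrative assumptions on my part, not anything OpenAI or Anthropic has disclosed about their actual pipelines.

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    domain: str
    relevance: float  # similarity of the page to the query, 0..1 (illustrative)

def pick_citations(results: list[SearchResult],
                   top_k: int = 3,
                   min_relevance: float = 0.5) -> list[SearchResult]:
    """Rank purely by relevance; reliability of the source never enters the picture."""
    strong = [r for r in results if r.relevance >= min_relevance]
    return sorted(strong, key=lambda r: r.relevance, reverse=True)[:top_k]

# Well-covered topic: many reputable pages compete for the citation slots.
popular_topic = [
    SearchResult("en.wikipedia.org", 0.93),
    SearchResult("britannica.com", 0.90),
    SearchResult("grokipedia.com", 0.88),
]

# Obscure topic: only one page clears the relevance bar, so it wins by default.
obscure_topic = [
    SearchResult("grokipedia.com", 0.81),
    SearchResult("randomblog.example", 0.34),
]

print([r.domain for r in pick_citations(popular_topic)])
# ['en.wikipedia.org', 'britannica.com', 'grokipedia.com']
print([r.domain for r in pick_citations(obscure_topic)])
# ['grokipedia.com']  <- the fringe page becomes the sole citation
```

On a well-covered topic, reputable pages crowd out the fringe one; on an obscure topic like a lesser-known historian, the fringe page can be the only result that clears the relevance bar, so it ends up as the sole source.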
This Isn't Just a 'Source' Issue
We're not talking about a simple sourcing error here. This is a crisis of AI neutrality and reliability. When a model systematically presents a source that propagates an ideology as factual information, what trust remains? It also underscores, once again, how vital platforms' content-moderation responsibility is, much as the arrival of ads in ChatGPT prompted senators to take action.
Let's be honest, Musk's move is not simple competition against Wikipedia. It's an attempt to reframe, even distort, knowledge itself. And now, the world's most popular AI tools are unwittingly (or knowingly) spreading this distorted information. This isn't just a mistake; it's a dagger plunged right into the heart of AI ethics.
So, the solution? Transparency, transparency, transparency. Users should be able to see which source a model pulled information from, when, and for what purpose. Otherwise, discussions about AI safety will never move beyond inter-institutional collaborations and theoretical debates. This is a fire burning right now, and it needs to be put out today.
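What could that transparency look like in practice? Here is a hedged sketch of answers that carry machine-readable provenance: which URL was consulted, when it was retrieved, and which claim it supports. The `Answer` and `Citation` shapes are hypothetical, not any vendor's real API, and the Grokipedia page path is made up for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Citation:
    url: str            # illustrative; not a verified real page
    domain: str
    retrieved_at: str   # ISO timestamp of when the page was fetched
    used_for: str       # which claim in the answer this source supports

@dataclass
class Answer:
    text: str
    citations: list[Citation] = field(default_factory=list)

    def provenance_report(self) -> str:
        """A user-facing summary: every claim traced back to a source and a time."""
        lines = [f"- {c.domain} ({c.retrieved_at}): {c.used_for}" for c in self.citations]
        return "Sources used:\n" + "\n".join(lines)

now = datetime.now(timezone.utc).isoformat(timespec="seconds")
answer = Answer(
    text="Sir Richard Evans is a British historian...",
    citations=[
        Citation("https://grokipedia.com/page/example",  # hypothetical path
                 "grokipedia.com", now, "biographical details"),
    ],
)
print(answer.provenance_report())
```

With something like this surfaced in the interface, a reader could spot at a glance that a biographical claim rests on a single ideological source and judge it accordingly.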
My final word is this: Question information. Question every 'fact' presented to you, especially if it comes from an AI assistant. Because it seems that not only human editors but also algorithms can now be biased. And that means a brand new, dangerous battlefield for all of us.