AI Assistants Begin Citing Elon Musk's Grokipedia
In addition to ChatGPT, tools like Google's Gemini and AI Overviews are using Elon Musk's AI-generated encyclopedia Grokipedia as a source in their responses. Experts warn that the trend increases the risk of misinformation.

Grokipedia References Rising on Major AI Platforms
Grokipedia, launched last October by Elon Musk's company xAI and generated by the Grok AI model, is appearing with growing frequency in the source lists of major AI assistants. According to a report by The Verge, platforms including OpenAI's ChatGPT, Google's Gemini, AI Overviews, and AI Mode, as well as Microsoft's Copilot, cite Grokipedia, especially when answering niche and specific questions.
What Do the Data Show?
Glen Allsopp, Head of Marketing Strategy and Research at SEO company Ahrefs, stated that in their tests, out of 13.6 million ChatGPT prompts, over 263,000 responses referenced Grokipedia, citing approximately 95,000 different Grokipedia pages. For comparison, Allsopp added, English Wikipedia appeared in 2.9 million responses. Researcher Sartaj Rajpal of the marketing platform Profound, drawing on a dataset that tracks billions of citations, said Grokipedia receives about 0.01% to 0.02% of all daily ChatGPT citations, and that this small share has been steadily increasing since mid-November.
Experts Warn About Misinformation
Analysts point out that unlike Wikipedia, which is edited by human editors through a transparent process, Grokipedia is generated and edited entirely by an AI model, and the Grok model has produced controversial outputs in the past. Experts emphasize that relying on such a source carries the risk of spreading misinformation and amplifying biased discourse. Leigh McKenzie, Director of Online Visibility at Semrush, commented, "Grokipedia looks like a veneer of credibility. It might work within its own bubble, but the idea of companies like Google or OpenAI treating Grokipedia as a serious, default reference layer at scale is concerning."
Taha Yasseri, Chair of Technology and Society at Trinity College Dublin, noted that the use of such sources risks reinforcing various biases, errors, or framing issues, warning, "Fluency can easily be mistaken for reliability."
Statements from Companies
OpenAI spokesperson Shaokyi Amdo stated that when ChatGPT searches the web, it aims to draw on a wide range of public sources and perspectives relevant to the user's question. Amdo added that users can see the sources and evaluate them for themselves, and that OpenAI applies safety filters to reduce the risk of surfacing links associated with high-severity harm. Perplexity spokesperson Beejoli Shah said accuracy is the company's core advantage and remains its focus. Google, Anthropic, and xAI did not respond to requests for comment.
These developments reignite debates over how AI systems process information and decide which sources count as reliable. The safe and ethical use of AI technologies is becoming increasingly critical on a global scale.