Some answers generated by OpenAI’s ChatGPT are now referencing Grokipedia, an AI-generated encyclopedia built by Elon Musk’s startup xAI, according to a report published by The Guardian. The appearances are limited but notable, marking a rare instance where a major AI assistant explicitly cites content produced by a rival AI system rather than traditional human-curated sources.
Grokipedia launched in October as part of xAI’s effort to build an alternative to Wikipedia. Unlike Wikipedia, which relies on volunteer editors and layered moderation, Grokipedia is generated entirely by artificial intelligence. xAI has argued that this approach allows faster scaling and reduces editorial bias, though critics question how effectively an AI-only system can correct its own errors or capture contextual nuance.
According to The Guardian, ChatGPT referenced Grokipedia nine times across responses to more than a dozen test queries. These citations did not appear when ChatGPT was asked about widely covered or high-profile subjects. Instead, they surfaced in responses to more obscure historical details or lesser-known biographical claims, suggesting that Grokipedia may be filling informational gaps where traditional sources are sparse or ambiguous.

The pattern implies selective sourcing rather than broad reliance. ChatGPT appears to treat Grokipedia as one of many publicly available references rather than a primary authority. Still, the fact that an OpenAI system is drawing from a Musk-backed AI encyclopedia highlights the increasingly circular nature of information flows in the AI ecosystem, where models may learn from or cite outputs produced by other models.
The phenomenon is not isolated. The Guardian also reported that Anthropic’s Claude has, in some cases, referenced Grokipedia, indicating that multiple large language models may be independently encountering and using the same AI-generated material when crawling public data. This raises questions about how models evaluate credibility when traditional editorial signals are absent.
In a statement to The Guardian, OpenAI said ChatGPT aims to draw from a broad range of publicly available sources and viewpoints, and that safety filters are applied to reduce exposure to harmful material. The company emphasized that citations are shown to indicate which sources informed a response. Anthropic declined to comment, while xAI offered only a brief response, stating, “Legacy media lies.”
As AI-generated knowledge bases expand rapidly, the distinction between original reporting, curated reference material, and machine-produced synthesis is becoming harder to maintain. The appearance of Grokipedia inside ChatGPT responses may be an early sign of a future where AI systems increasingly reference each other’s outputs, reshaping how digital knowledge is created, validated, and consumed.
