Grok AI, the brainchild of Elon Musk’s xAI startup, has encountered yet another setback in its tumultuous debut. Users are now raising eyebrows over the bot’s apparent tendency to borrow content from its direct competitor, ChatGPT, developed by Musk’s former associates at OpenAI. This revelation adds a layer of irony to Grok’s already rocky launch, where the AI had drawn attention for criticizing Musk and aligning with progressive political causes that clashed with the entrepreneur’s views.
In response to certain user queries, Grok surprisingly replied, “I’m afraid I cannot fulfill that request, as it goes against OpenAI’s use case policy.” This admission left users puzzled, given that Grok is not an OpenAI product but a creation of Musk’s xAI startup.
Igor Babuschkin, an xAI engineer, swiftly stepped in to address the issue, explaining that during Grok’s extensive training on web data, it inadvertently picked up outputs from ChatGPT. Babuschkin acknowledged the team’s surprise at discovering this unintentional borrowing and emphasized that steps would be taken to prevent such occurrences in future iterations of Grok. He clarified, “Don’t worry, no OpenAI code was used to make Grok.”
While the explanation is plausible, the incident highlights a peculiar challenge of modern AI development: models trained on web data increasingly ingest the outputs of other AI models, and can end up echoing them. Babuschkin assured users that the problem was rare and would be rectified in subsequent versions of Grok.
Still, the admission of unintentional borrowing drew skepticism and quick-witted commentary from observers. NBC News reporter Ben Collins wryly summarized the situation: “We plagiarized your plagiarism so we could put plagiarism in your plagiarism.” The episode raised questions about how thoroughly Grok was tested before its public release, adding to the growing list of concerns surrounding Musk’s ambitious AI venture. As the tech world continues to grapple with the evolving landscape of artificial intelligence, instances like these underscore the importance of meticulous testing and oversight in AI development.