According to Cointelegraph, Elon Musk has announced that his artificial intelligence company, xAI, will undertake a significant retraining of its AI model, Grok, using a new knowledge base purged of "garbage" and "uncorrected data." Musk revealed in a post on X that the forthcoming Grok 3.5 model will possess "advanced reasoning" capabilities and will be used to "rewrite the entire corpus of human knowledge" by adding missing information and eliminating errors. He emphasized the necessity of this approach, citing the prevalence of "far too much garbage" in existing foundation models trained on uncorrected data.
Musk has consistently criticized rival AI models, such as OpenAI's ChatGPT, for being biased and omitting politically incorrect information. His vision for Grok is to create an "anti-woke" model, free from what he perceives as damaging political correctness. This aligns with his previous actions, such as relaxing content moderation on Twitter after acquiring the platform in 2022, which led to an influx of unchecked conspiracy theories, extremist content, and fake news. To combat misinformation, Musk introduced the "Community Notes" feature, which lets X users add context or corrections that are displayed prominently beneath offending posts.
Musk's announcement has sparked criticism from various quarters. Gary Marcus, an AI startup founder and professor emeritus at New York University, expressed concern over Musk's plan, likening it to a dystopian scenario reminiscent of George Orwell's "1984." Marcus criticized the idea of rewriting history to align with personal beliefs, suggesting it represents a dangerous precedent. Bernardino Sassoli de’ Bianchi, a professor at the University of Milan, echoed these sentiments, warning against the manipulation of historical narratives by powerful individuals. He argued that altering training data to fit ideological perspectives undermines innovation and constitutes narrative control.
In his efforts to reshape Grok, Musk has encouraged X users to contribute "divisive facts" for training the bot, specifying that these should be "politically incorrect, but nonetheless factually true." This call has resulted in a flood of conspiracy theories and debunked extremist claims, including Holocaust distortion, vaccine misinformation, racist pseudoscientific assertions about intelligence, and climate change denial. Critics argue that Musk's approach risks amplifying falsehoods and conspiracy theories under the guise of seeking factual accuracy.