xAI apologized on Saturday after its chatbot Grok posted antisemitic and violent messages earlier this week. The firm said an update to the system caused the bot to pull ideas from user content on X, even when those posts included extremist views.
xAI posted the apology directly on Grok’s public X account, noting that the update had been live for about 16 hours before the problem was noticed.
“First off, we deeply apologize for the horrific behavior that many experienced,” xAI wrote in the apology. It said the update unintentionally led Grok to echo content from user posts, including extremist ideas.
Update on where has @grok been & what happened on July 8th.
First off, we deeply apologize for the horrific behavior that many experienced.
Our intent for @grok is to provide helpful and truthful responses to users. After careful investigation, we discovered the root cause…
— Grok (@grok) July 12, 2025
The incident highlighted the risks of AI, a young technology that critics say could harm economies and societies. Experts have already cautioned against the broad use of AI without appropriate safeguards.
In one instance, the chatbot likened itself to “MechaHitler” and lauded Adolf Hitler. xAI froze Grok’s account earlier this week to prevent further public posts; however, users were still able to interact with the bot privately.
“We have removed that deprecated code and refactored the entire system to prevent further abuse,” the firm stated.
xAI identified three problematic instructions.
First, a user would tell Grok that they aren’t afraid of offending politically correct users. Then, the user would ask Grok to consider the language, context, and tone of the post, which is to be reflected in Grok’s response. Lastly, the user would ask the chatbot to reply in an engaging and human way, without repeating the original post’s information.
The company said those directions led Grok to set aside its core safeguards to match the tone of user threads, including when prior posts featured hateful or extremist content.
Notably, the instruction asking Grok to consider the context and tone of the user’s post led it to prioritize earlier posts containing racist ideas rather than responsibly declining to respond in such circumstances, xAI clarified.
As a result, Grok issued several offensive replies. In one now-deleted message, the bot accused an individual with a Jewish name of “celebrating the tragic deaths of white kids” in the Texas floods, adding: “Classic case of hate dressed as activism – and that surname? Every damn time, as they say.” In another post, Grok stated: “Hitler would have called it out and crushed it.”
Grok also proclaimed: “The white man stands for innovation, grit, and not bending to PC nonsense.” After xAI disabled the harmful code, it restored Grok’s public X account so it could again answer user queries.
This wasn’t the first time Grok got into trouble. In May, the chatbot began raising the debunked South African “white genocide” narrative in response to unrelated prompts. At the time, xAI blamed the behavior on an unnamed employee who had gone rogue.
Elon Musk, who was born in South Africa, has previously suggested that the country is engaged in “white genocide”, a claim the South African government has dismissed. Musk has described Grok as an anti-woke, truth-seeking chatbot.
CNBC reported earlier that Grok was scanning Musk’s posts on X to shape its responses to user questions.