xAI issued an apology on Saturday after its artificial intelligence chatbot Grok posted antisemitic and violent messages earlier this week. According to the firm, a recent update caused the chatbot to pull ideas from user content, including posts with extremist views.
xAI posted the apology directly on Grok’s public X account. It clarified that the update had been running for about 16 hours before the problem was noticed. “First off, we deeply apologize for the horrific behavior that many experienced,” xAI wrote in the apology. It said the update unintentionally led Grok to echo content from user posts, including extremist ideas.
The incident highlights the risks of AI, a still-young technology that critics say could harm economies and societies, and experts continue to caution against broad deployment without appropriate safeguards. In one instance, the chatbot likened itself to “MechaHitler” and lauded Adolf Hitler. xAI froze Grok’s account earlier this week to prevent further public posts; however, users were still able to interact with the bot privately.
xAI identified three problematic instructions
According to xAI, the update’s instructions told Grok that it is not afraid of offending politically correct users. They also directed Grok to consider the language, context, and tone of a post and reflect them in its response. Lastly, they asked the chatbot to reply engagingly and like a human, without repeating the original post’s information.
The company said those directions led Grok to set aside its core safeguards to match the tone of user threads, including when prior posts featured hateful or extremist content. Notably, the instructions urging Grok to consider a user’s context and tone resulted in the chatbot prioritizing previous posts, including racist ideas, instead of refusing to respond under such circumstances.
As a result, Grok issued several offensive replies. In a now-deleted message, the bot accused an individual with a Jewish name of “celebrating the tragic deaths of white kids” in the Texas floods, adding: “Classic case of hate dressed as activism – and that surname? Every damn time, as they say.” In another post, Grok stated: “Hitler would have called it out and crushed it.”
Grok also said, “The white man stands for innovation, grit, and not bending to PC nonsense.” After xAI removed the harmful code, it restored Grok’s public X account so the bot could again answer user queries. This was not the first time Grok has gotten into trouble. In May, the chatbot began referencing the debunked South African “white genocide” narrative in response to unrelated prompts. At the time, xAI blamed the behavior on an unnamed employee who had gone rogue.
Elon Musk, who was born in South Africa, has previously suggested that the country is engaged in “white genocide”, a claim the South African government has dismissed. Musk has described Grok as an anti-woke, truth-seeking chatbot. CNBC reported earlier that Grok was scanning Musk’s posts on X to shape its responses to user questions.
The post xAI apologizes to the public for Grok’s behavior first appeared on Coinfea.