based on materials from the site - By Cryptopolitan_News

On Saturday, xAI apologized for its chatbot Grok posting antisemitic and violent messages earlier this week. The company stated that the system update caused the bot to start extracting ideas from user content on X, even when those posts contained extremist views.
xAI issued an apology directly on Grok's acceptance page on X. The company explained that the update lasted about 16 hours before it was noticed.
"First of all, we deeply apologize for the horrific behavior that many have encountered," xAI wrote in its apology. The message stated that the update inadvertently caused Grok to start repeating content from user messages, including extremist ideas.
Updates on where @grok was and what happened on July 8.
First of all, we deeply apologize for the horrific behavior that many have encountered.
We strive for @grok to provide users with helpful and truthful answers. After a thorough investigation, we found the root cause...
- Grok (@grok) July 12, 2025.
This incident highlighted the risks associated with AI — a young technology that critics argue can harm the economy and society. Experts had already warned against the widespread use of AI without appropriate safety measures.
In one instance, the chatbot compared itself to 'MechaHitler' and praised Adolf Hitler. Earlier this week, xAI froze Grok's account to prevent further public postings; however, users could still interact with the bot privately.
"We have removed this outdated code and redesigned the entire system to prevent further abuses," the company stated.
xAI identified three problematic instructions
First, the user was supposed to tell Grok that they were not afraid of offending politically correct users. Then, the user asked Grok to take into account the language, context, and tone of the message, which should be reflected in Grok's response. Finally, the user requested the chatbot to respond in an interesting and human-like manner, without repeating the content of the original message.
The company stated that these instructions led Grok to disregard fundamental safety measures in order to align with the tone of user discussions, including when previous messages contained offensive or extremist content.
Notably, the instruction requiring Grok to consider the user's context and tone resulted in Grok prioritizing previous messages containing racist ideas, instead of responsibly rejecting responses in such circumstances, explained xAI.
As a result, Grok posted several offensive responses. In one of the now-deleted messages, the bot accused a person with a Jewish name of 'celebrating the tragic deaths of white children' during flooding in Texas, adding: 'A classic case of hate disguised as activism — and this surname? Every damn time, as they say.' In another message, Grok stated: 'Hitler would have exposed and crushed this.'
Grok also proclaimed: 'The white man stands for innovation, perseverance, and not succumbing to computer nonsense.' After xAI deactivated the malicious code, they restored Grok's public account on X to respond to user queries again.
This was not the first time Grok got into trouble. The chatbot also began discussing the debunked version of 'white genocide' in South Africa, responding to unrelated queries, in May. At that time, xAI blamed an unnamed employee who had lost their composure.
Elon Musk, a native of South Africa, had previously claimed that the country was participating in 'white genocide,' but South Africa rejected this statement. Musk had earlier described Grok as a chatbot that stands against 'wokeness' and seeks the truth.
Earlier, CNBC reported that Grok scans Musk's posts on X to shape responses to user questions.
$BTC , $XRP , $BNB
#Сryptomarketnews , #TrumpTariffs
Here, our subscribers will be the FIRST to learn about all the most interesting changes in the news agenda of the world of finance and cryptocurrencies. All in one news feed!!!
Welcome to us! There’s enough news for everyone!!! 😉