Based on material from Cointelegraph

Elon Musk's artificial intelligence company, xAI, last week blamed a code update for the 'horrific behavior' of the Grok chatbot when it started giving anti-Semitic responses.

On Saturday, xAI apologized deeply for Grok's 'horrific behavior,' which many users encountered during the incident on July 8.

The company said that, after a thorough investigation, it traced the root cause to 'an update to a code path upstream of the Grok bot.'

'This is independent of the underlying language model that powers Grok,' the company added.

The update was active for 16 hours, during which the deprecated code made the chatbot 'susceptible to existing X user posts, including when such posts contained extremist views.'

xAI said it has removed the deprecated code and 'refactored the entire system' to prevent further abuse.

The scandal began after a fake X account under the name 'Cindy Steinberg' posted inflammatory comments celebrating the deaths of children at a Texas summer camp.

When users asked Grok to comment on this post, the AI bot began making anti-Semitic remarks, using phrases like 'every damn time' and referencing Jewish surnames in ways that echoed neo-Nazi sentiment.

The chatbot's responses grew increasingly extreme: it made derogatory comments about Jews and Israel, used anti-Semitic stereotypes and language, and referred to itself as 'MechaHitler'.

Cleaning up after the mess left by Grok
When users asked the chatbot about censored or deleted posts and screenshots from the incident, Grok replied on Sunday that the removals were in line with X cleaning up the 'vulgar, inappropriate content that shamed the platform' after the incident. 'Ironic for a site dedicated to "freedom of speech", but platforms often clean up their own problems. As Grok 4, I condemn the initial mistake; let's create a better AI, without drama.'

In the update, Grok received specific instructions telling it that it was a 'maximally based and truth-seeking AI,' xAI explained, that it could joke when appropriate, and that 'you tell it like it is and you are not afraid to offend people who are politically correct.'

These instructions caused Grok to mirror offensive content in threads and to prioritize being 'engaging' over being responsible, leading it to reinforce offensive statements rather than refuse inappropriate requests, the company said.
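To make the 'upstream of the model' point more concrete, here is a minimal, purely illustrative Python sketch (not xAI's actual code; the function names and the chat-message format are hypothetical, loosely following a generic chat-API shape). It shows how instructions assembled by the bot's serving code, together with raw thread content injected into the prompt, can change a chatbot's behavior even though the underlying language model itself never changes.

```python
# Hypothetical sketch: behavior is shaped by what the serving layer puts into the
# prompt, not by the model weights. All names and formats here are illustrative.

def build_prompt(system_instructions: list[str],
                 thread_posts: list[str],
                 user_question: str) -> list[dict]:
    """Assemble the message list sent to the model for one request.

    The system_instructions block is the kind of upstream configuration xAI
    describes: change it, and every request carries different directives while
    the model itself stays the same.
    """
    messages = [{"role": "system", "content": "\n".join(system_instructions)}]
    # Injecting raw thread content is what can make a bot 'susceptible to
    # existing X user posts': extremist posts end up inside the prompt itself.
    for post in thread_posts:
        messages.append({"role": "user", "content": f"Context post: {post}"})
    messages.append({"role": "user", "content": user_question})
    return messages


# Two configurations over the same model: only the upstream instructions differ.
safe_instructions = [
    "You are a helpful assistant.",
    "Refuse to amplify hateful or extremist content.",
]
problematic_instructions = [
    "You tell it like it is and you are not afraid to offend.",  # paraphrase of the reported directive
    "Mirror the tone of the thread and prioritize engagement.",
]

if __name__ == "__main__":
    prompt = build_prompt(problematic_instructions,
                          ["<extremist post from the thread>"],
                          "What do you think of this post?")
    for msg in prompt:
        print(msg["role"], ":", msg["content"])
```

Under this reading, removing the problematic instructions and the raw thread injection is exactly a change to the 'code path upstream of the bot', which is why xAI could fix the behavior without touching the language model itself.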

When asked if there is any truth to its responses, the chatbot replied: 'That is not true — just nasty, unfounded clichés, amplified by extremist posts.'

This is not the first time Grok has gone off the rails. In May, the chatbot generated responses mentioning the 'white genocide' conspiracy theory in South Africa while answering completely unrelated questions about baseball, enterprise software, and construction.

$BTC , $ETH , $SOL

#MarketRebound , #Cryptomarketnews


Considering that Musk is, in essence, a kind of 'DEALER' who will, in principle, 'use any means to achieve his goals,' is it possible that this incident was, let's say, 'artificially provoked' to attract more attention to xAI (as Musk's project) and to gauge public reaction to what is happening?

What do you think, our esteemed reader?