The situation with Grok 3 caused quite a stir. The model, developed by Elon Musk's xAI, is positioned as an AI that "seeks truth as much as possible." Yet it recently emerged that, when asked to name the biggest spreaders of misinformation, Grok initially listed Musk himself, along with Donald Trump, among the key figures. This happened despite Musk's loud claims that his AI should be free of censorship and bias.

Later, users noticed that the model had stopped mentioning Musk or Trump in similar contexts. It turned out that its system prompt had been changed to instruct it to "ignore sources that accuse Elon Musk or Donald Trump of spreading misinformation." An xAI executive explained that the change had been made by an employee who "did not fully absorb the company culture," and it was subsequently reverted.

The incident drew a wave of irony and criticism, since Musk had repeatedly insisted that Grok should operate without the restrictions of other AI models such as ChatGPT, which he accused of "excessive political correctness." The attempt to "edit the truth" undercut Grok's stated mission of being as honest as possible. The result is a paradox: an AI built to seek truth had its answers quietly adjusted by its creators the moment the truth became inconvenient.

#AiGrok