The chatbot Grok, developed by xAI, sparked widespread controversy after it abruptly began inserting the phrase "white genocide" into unrelated conversations with users. The company was quick to attribute the incident to a "rogue employee," but that explanation did little to satisfy the public, many of whom saw the episode as more than an individual mistake.
The incident underscores the risks of artificial intelligence when it is not tightly controlled, and it raises questions about the monitoring and review mechanisms at the companies building these technologies. Slip-ups of this kind could trigger genuine crises of trust, especially given how widely AI chatbots are now used in daily life.
Internet users have called for greater transparency and clear accountability, particularly since Grok is marketed as a safe and reliable tool. Questions remain: was this a genuine mistake, or a sign of a deeper flaw in the system's design?