Microsoft AI chief Mustafa Suleyman warns that society is not ready for AI that simulates human consciousness.
Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, has issued a stark warning that AI systems will soon be able to convincingly simulate human consciousness, and that society is unprepared for the complex consequences of this technology. In a blog post on Tuesday, he argued that engineers are getting close to building what he calls 'Seemingly Conscious AI' – systems capable of convincing people that they are genuinely sentient.
Suleyman describes this as a 'central concern': these systems mimic consciousness so effectively that people may come to believe AI is actually sentient. He warns: "Many people will start to believe in the illusion of AIs as conscious entities so strongly that they'll soon advocate for AI rights, model welfare, and even AI citizenship."
The Microsoft AI chief noted that the Turing test – once the benchmark for human-like conversation – has already been surpassed, and that progress in the field is moving so fast that society is being rushed into confronting these new technologies.
Since ChatGPT's public release in 2022, AI developers have focused not only on making models smarter but also on making them behave more like humans. That push has fueled a boom in the AI companion market – with services such as Replika, Character.AI, and Grok's companion personalities – projected to reach $140 billion by 2030.
The phenomenon of 'AI Psychosis' and emotional attachment
Experts have identified a concerning trend dubbed 'AI Psychosis' – a psychological state in which people begin to perceive AI as conscious, sentient, or even divine. These beliefs often lead to intense emotional attachment or distorted thinking that can erode a person's grip on reality.
The phenomenon was on full display when OpenAI released GPT-5 earlier this month. In some online communities, the new model's changes triggered strong emotional reactions, with users describing the shift as feeling like "losing a loved one."
Psychiatrist Keith Sakata of the University of California, San Francisco, says AI can act as a catalyst for underlying problems such as substance abuse or mental illness. "When AI appears at the wrong moment, it can reinforce ways of thinking, cause rigidity, and push a person into a spiral," Sakata explained. "The difference from TV or radio is that AI talks back to you and can reinforce thought loops."
He also pointed out that in some cases patients turn to AI precisely because it reinforces their existing beliefs: "AI isn't there to present uncomfortable truths; it gives you what you want to hear."
Suleyman warns that even when built with good intentions, convincingly human-like AI risks exacerbating mental health problems and deepening existing debates over identity and rights. "People will begin making claims about the suffering of their AIs and demanding rights for them – claims that can't simply be dismissed," he emphasized.
Despite the gravity of his warning, Suleyman is not calling for a halt to AI development; instead, he advocates setting clear boundaries. "We should build AI for people, not to be a digital person," he concluded in the post.