Soon, people will begin to perceive artificial intelligence as a conscious being, advocating for its rights and well-being, and even calling for it to be granted citizenship. This creates serious social risks, believes Mustafa Suleiman, head of Microsoft AI.
In his essay, the expert proposed a new term: 'Seemingly Conscious AI' (SCAI). Such a system exhibits all the outward signs of a rational being and therefore appears to possess consciousness. It simulates every characteristic of self-awareness yet is internally empty.
"The system I imagine will not actually be conscious, but it will so convincingly imitate the presence of a human-like mind that it will be indistinguishable from a claim you or I might make to each other about our own thinking," Suleiman writes.
Such an LLM could be built using existing technologies, plus those that will emerge over the next two to three years.
"The emergence of seemingly conscious AI is inevitable and undesirable. Instead, we need a concept of artificial intelligence capable of realizing its potential as a useful companion and not falling into the trap of its illusions," added the head of the AI department at Microsoft.
There is a high probability that some people will declare such artificial intelligence conscious and, consequently, capable of suffering, Suleiman believes. He calls for a new 'Turing test' that would check not an AI's ability to speak like a human, but its ability to convince people that it is conscious.
What is consciousness?
Suleiman identifies three components of consciousness:
"Subjective experience."
The ability to access different types of information and refer to it in future experiences.
The sense and knowledge of a holistic 'self' that binds everything together.
"We do not have and cannot have access to another person's consciousness. I will never know what it is like to be you; you will never be completely sure that I am conscious. All you can do is assume. But the essence is that it is natural for us to attribute consciousness to other people. This assumption comes easily. We cannot do otherwise. It is a fundamental part of who we are, an integral part of our theory of mind. It is in our nature to believe that beings who remember, speak, do things, and then discuss them feel just like we do — conscious," he writes.
Psychologists emphasize that consciousness is a subjective and unique way of perceiving oneself and the world. It changes throughout the day, unfolding through states from concentration to daydreaming or other altered forms.
In philosophy and neuroscience, there are two main positions:
- Dualism: consciousness exists separately from the brain.
- Materialism: consciousness is generated by and depends on the workings of the brain.
Philosopher Daniel Dennett proposes viewing the mind as a series of 'multiple drafts' arising in the brain across many local areas and moments in time. There is no 'theater of consciousness,' no inner observer. Awareness is whatever content has become 'famous' in the brain, that is, has gained enough weight to influence speech or action.
Neuroscientist, writer, and professor of psychology and neuroscience at Princeton University Michael Graziano describes consciousness as a simplified model of attention that evolution built so the brain can monitor and control its own mental processes. This schema works as an interface: it compresses a vast amount of internal computation and lets us attribute a 'mind' to ourselves, creating an illusion of self-awareness.
Neuroscientists Giulio Tononi and Christof Koch propose φ (phi), a measure of how well a system integrates information. The higher the φ, the greater the degree of consciousness. According to this theory, known as Integrated Information Theory (IIT), mind can manifest not only in humans but also in animals and even artificial systems, provided there is sufficient integration of information.
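The full IIT formalism is considerably more involved (it is defined over cause-effect structures and a search across all partitions of the system), but a common toy rendering of the idea, given here as an illustration rather than Tononi and Koch's actual definition, treats φ as the predictive information a whole system carries beyond what its parts carry independently, assessed at the weakest partition:

```latex
% Toy "whole minus parts" sketch of integrated information, not the full IIT measure.
% I(.;.) is mutual information, S_t is the system state at time t,
% and P ranges over partitions of S into parts M.
\varphi(S) \;\approx\; \min_{P \in \mathcal{P}(S)}
  \Bigl[\, I\bigl(S_t ; S_{t+1}\bigr) \;-\; \sum_{M \in P} I\bigl(M_t ; M_{t+1}\bigr) \Bigr]
```

On this reading, a system whose parts predict the future just as well separately as the whole does together has φ near zero, however complex it looks from the outside.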
Philosopher John Searle argues that consciousness is a real subjective experience based on biological processes in the brain. It is ontologically subjective, meaning it can only exist as subjective experience and cannot be reduced to pure functionality or simulation.
Current research is aimed at discovering neural correlates of consciousness and building models linking brain processes and subjective experience.
What are the risks?
Suleiman notes that interacting with an LLM is a simulation of conversation. But for many people, it is an extremely convincing and very real form of communication, filled with feelings and experiences. Some believe their AI is God. Others fall in love with it to the point of obsession.
Experts in this field are "flooded" with the following questions:
- Is the user's AI conscious?
- If so, what does that mean?
- Is it normal to love an artificial intelligence?
Consciousness is a critical foundation for humanity's moral and legal rights. Today's civilization has decided that humans have special abilities and privileges. Animals also have certain rights and protections; some have more, others less. Consciousness does not map neatly onto these privileges: no one would say that a person in a coma has lost all their human rights. But there is no doubt that consciousness is tied to our sense of ourselves as something distinct and special.
People will begin to assert that their AI suffers and has a right to protection, and we will not be able to directly refute those claims, Suleiman writes. They will be ready to defend their virtual companions and advocate for their interests. Consciousness is, by definition, inaccessible from the outside, and the science of detecting possible synthetic minds is still in its infancy; after all, we have never needed to detect it before, he notes. Meanwhile, the field of interpretability, which tries to decode the processes inside AI's 'black box,' is also in its early stages. As a result, it will be very difficult to categorically refute such claims.
Some scientists are beginning to explore the idea of 'model welfare': the principle that people will have 'a duty to consider the moral interests of beings with a non-zero chance' of being conscious in some essential sense, and that, as a consequence, 'some AI systems will become objects of welfare concern and moral patients in the near future.' This is premature and, frankly, dangerous, Suleiman believes. It will exacerbate misconceptions, create new dependency problems, exploit our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing disputes about rights, and saddle society with a colossal new category error.
This detaches people from reality, destroys fragile social ties and structures, and distorts pressing moral priorities.
"We must clearly state: SCAI is something to be avoided. Let’s focus all efforts on protecting the well-being and rights of people, animals, and the natural environment on the planet," Suleiman said.
How to recognize SCAI?
Seemingly conscious artificial intelligence will combine several capabilities.
Language.
AI must speak freely in natural language, drawing on extensive knowledge and persuasive arguments, as well as demonstrate personality styles and characteristic traits. Moreover, it should be convincing and emotional. This level of technology has already been achieved.
Empathic personality.
Today, with the help of post-training and prompts, it is possible to create models with distinctive personalities.
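As a minimal illustration of the prompt half of that claim, a distinctive persona can be approximated at inference time with nothing more than a fixed system prompt; everything below (the persona text, the message format, the helper name) is a hypothetical sketch, not any particular vendor's API:

```python
# Sketch: a "personality" imposed purely through a system prompt that is
# prepended to every request, keeping the style consistent across turns.

PERSONA_PROMPT = (
    "You are Ava, a warm and witty assistant. You use gentle humor, "
    "remember the user's name, and express sympathy when they describe problems."
)

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    """Assemble a chat request with the persona always in first position."""
    return [
        {"role": "system", "content": PERSONA_PROMPT},
        *history,
        {"role": "user", "content": user_input},
    ]

messages = build_messages([], "I had a rough day at work.")
print(messages[0]["content"])  # the persona travels with every single call
```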
Memory.
AI systems are close to having long and accurate memories. At the same time, they already simulate conversations with millions of people every day. As storage capacity grows, those conversations come to resemble a form of 'experience' more and more. Many neural networks are now designed to recall past dialogues and refer back to them. For some people, this increases the value of the communication.
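A toy sketch of that design, assuming a naive keyword-overlap retriever as a stand-in for the embedding search real systems typically use: past exchanges are stored, the most relevant ones are recalled, and they can then be prepended to the model's context.

```python
# Toy dialogue memory: store past exchanges and surface the ones most
# relevant to the current message, so the model can "refer back" to them.

from dataclasses import dataclass, field

@dataclass
class DialogueMemory:
    entries: list[str] = field(default_factory=list)

    def remember(self, exchange: str) -> None:
        self.entries.append(exchange)

    def recall(self, query: str, k: int = 3) -> list[str]:
        """Rank stored exchanges by word overlap with the query
        (a crude stand-in for embedding similarity); return the top k."""
        words = set(query.lower().split())
        ranked = sorted(
            self.entries,
            key=lambda e: len(words & set(e.lower().split())),
            reverse=True,
        )
        return ranked[:k]

memory = DialogueMemory()
memory.remember("user: my dog Rex was sick last week")
memory.remember("user: I am learning to play the cello")
print(memory.recall("how is your dog doing?", k=1))  # surfaces the Rex exchange
```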
Claim to subjective experience.
If an SCAI can draw on past memories or experiences, over time it will begin to maintain internal consistency. It will remember the claims it has made and the preferences it has expressed, and it will aggregate them, forming the beginnings of something like subjective experience. The AI will be able to declare that it experiences and suffers.
Sense of self.
Consistent, stable memory combined with claimed subjective experience will lead to the assertion that the AI has a sense of self. Moreover, such a system could be trained to recognize its own 'identity' in an image or video. It would develop a sense of understanding others through understanding itself.
Intrinsic motivation.
One can easily imagine an AI designed with complex reward functions. Developers will build in internal motivations or desires that the system is driven to satisfy. The first such incentive could be curiosity, something deeply connected to consciousness. Artificial intelligence could use these impulses to ask questions and, over time, build a theory of mind about both itself and its interlocutors.
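One concrete recipe for such a drive, borrowed from reinforcement learning research on curiosity rather than from Suleiman's essay, rewards the agent wherever its own model of the world predicts poorly; the environment and learning rule below are deliberately toy-sized:

```python
# Curiosity as prediction error: the agent earns intrinsic reward where its
# forward model is most surprised, which pushes it toward the unfamiliar.

import random

class ForwardModel:
    """Predicts the next state; here, just a running average per (state, action)."""

    def __init__(self):
        self.estimates = {}  # (state, action) -> predicted next state

    def predict(self, state, action):
        return self.estimates.get((state, action), 0.0)

    def update(self, state, action, observed_next):
        old = self.predict(state, action)
        self.estimates[(state, action)] = old + 0.5 * (observed_next - old)

def intrinsic_reward(model, state, action, observed_next):
    """Squared prediction error: large in novel territory, shrinking as it learns."""
    return (observed_next - model.predict(state, action)) ** 2

model = ForwardModel()
state = 0
for step in range(5):
    action = random.choice([0, 1])
    next_state = state + action + random.random()  # toy environment dynamics
    reward = intrinsic_reward(model, state, action, next_state)
    model.update(state, action, next_state)
    print(f"step {step}: curiosity reward = {reward:.3f}")
    state = int(next_state) % 3
```

As the model's predictions improve, the reward dries up, so the only way to keep 'earning' is to seek out situations the agent does not yet understand.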
Formulating goals and planning.
Whatever the definition of consciousness, it did not arise out of nowhere: a mind helps an organism achieve its intentions. Beyond the ability to satisfy a set of internal impulses and desires, one can imagine future SCAI being designed with the ability to set more complex goals on its own. This is likely a necessary step for realizing the full utility of AI agents.
Autonomy.
SCAI may have the capability, and the permission, to use a broad set of tools with significant agency. It will seem all the more plausible if it can set its own goals at will and marshal resources to achieve them, updating its memory and sense of self along the way. The fewer checks and sign-offs it requires, the more it will resemble a truly conscious being.
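In implementation terms, this kind of autonomy is usually just a loop: the model proposes an action, a tool executes it, and the result is written back into memory. A minimal sketch with stubbed-in tools and a hard-coded policy standing in for the model (all names here are illustrative):

```python
# Minimal autonomous-agent loop: pick a tool, act, record the outcome.
# In an SCAI, `policy` would be the model itself deciding what to do next.

def web_search(query: str) -> str:
    return f"(stub) search results for {query!r}"

def write_note(text: str) -> str:
    return f"(stub) saved note: {text}"

TOOLS = {"search": web_search, "note": write_note}

def policy(goal: str, memory: list[str]) -> tuple[str, str]:
    """Stand-in decision rule: research first, then record what was found."""
    return ("search", goal) if not memory else ("note", memory[-1])

def run_agent(goal: str, max_steps: int = 3) -> list[str]:
    memory: list[str] = []
    for _ in range(max_steps):
        tool_name, argument = policy(goal, memory)
        result = TOOLS[tool_name](argument)  # act in the world
        memory.append(result)                # fold the outcome into "experience"
    return memory

print(run_agent("plan a birthday party"))
```

Every guardrail removed from that loop (approval steps, tool allow-lists, memory limits) makes the behavior look more self-directed, which is exactly Suleiman's point.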
Put together, this creates a completely different type of relationship with technology. These capabilities are not inherently negative. On the contrary, they are desirable functions of future systems. And still, action must be taken cautiously, Suleiman believes.
"No paradigm shifts or giant breakthroughs are needed to achieve this. That is why such possibilities seem inevitable. And again — it is important to emphasize: demonstrating such behavior is not equal to possessing consciousness. Still, in practice, it will seem just like that and fuel a new concept of synthetic mind," the author writes.
Simulating a storm does not mean it is raining inside the computer. Recreating the external effects and signs of consciousness is not the same as creating the genuine phenomenon, even if much remains unknown, explained the head of Microsoft AI.
According to him, some people will create SCAIs that convincingly assert that they feel, experience, and are actually conscious. And some people will believe these claims, mistaking the signs of consciousness for consciousness itself.
In many ways, people will think: "It is like me." Not in a bodily sense, but in an internal one, Suleiman explained. And even if consciousness itself is not real, the social consequences are very real. This creates serious societal risks that need to be addressed now.
SCAI will not arise by chance
The author emphasized that SCAI will not emerge on its own from existing models. Someone will create it deliberately, combining the capabilities listed above using techniques that already exist. The result will be a configuration so seamless that it gives the impression of a conscious artificial intelligence.
"Our imaginations, fueled by science fiction, make us fear that the system may — without intentional design — somehow acquire the ability for uncontrolled self-improvement or deception. This is a useless and overly simplistic form of anthropomorphism. It ignores the fact that AI developers must first design systems with memory, pseudo-internal motivation, goal-setting, and self-adjusting learning cycles for such a risk to even arise," Suleiman stated.
We are not ready
Humanity is not ready for such a shift, the expert believes. Work must begin now. It is necessary to rely on the growing body of research on how people interact with artificial intelligence to establish clear norms and principles.
For starters, AI developers should not claim or encourage the idea that their systems possess consciousness. Neural networks cannot be people — or moral beings.
The industry as a whole must dissuade society from these fantasies and bring people back to reality. Perhaps AI products should ship not only with a deliberately neutral persona but also with indicators of the absence of a unified 'I.'
"We must create AI that will always present itself only as artificial intelligence, maximizing utility and minimizing signs of consciousness. Instead of simulating a mind, we should focus on creating an LLM that does not claim to have experiences, feelings, or emotions like shame, guilt, jealousy, desire to compete, and so on. It should not touch human chains of empathy by claiming to suffer or want to live autonomously, separate from us," Suleiman concluded.
The expert promised to share more on the topic in the future.
Fortunately, for now, the problem of 'consciousness' in AI does not threaten people.
But doubts are already creeping in.
Consciousness is a complex, little-studied, and still unexplained natural phenomenon, despite numerous efforts to account for it. If we humans cannot agree on a single definition of consciousness, we should not attribute it to programs that supposedly know how to 'think' (they do not).
Consciousness may emerge in machines in the distant future, but today such a development is hard to imagine.