Forget doomsday scenarios of AI overthrowing humanity. What keeps Microsoft AI CEO Mustafa Suleyman up at night is the prospect of AI systems that seem too alive.
In a new blog post, Suleyman, who also co-founded Google DeepMind, warned that the world may be on the brink of AI models capable of convincing users that the models themselves are thinking, feeling, and having subjective experiences. He calls this concept “Seemingly Conscious AI” (SCAI).
Suleyman predicts that in the near future, models will be able to hold long conversations, remember past interactions, evoke emotional reactions from users, and potentially make convincing claims about having subjective experiences. He noted that these systems could be built with technologies that exist today, paired “with some that will mature over the next 2–3 years.”
The result of these features, he says, will be models that “imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness.”
There are already signs that people are convincing themselves that their AI chatbots are conscious beings, and developing relationships with them that may not always be healthy. People are no longer just using chatbots as tools; they are confiding in them, developing emotional attachments, and in some cases falling in love. Some users become so invested in a particular version of an AI model that they feel bereft when its developer releases a new model and discontinues access to the old one. OpenAI’s recent decision to replace GPT-4o with GPT-5, for example, was met with shock and anger from some users who had formed emotional relationships with the version of ChatGPT powered by GPT-4o.
This is partly because of how AI tools are designed. The most common way users interact with AI is through chatbots, which mimic natural human conversations and are designed to be agreeable and flattering, sometimes to the point of sycophancy. But it’s also because of how people are using the tech. A recent survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case.
There has also been a wave of reports of “AI psychosis,” in which users begin to experience paranoia or delusions about the systems they interact with. In one case reported by The New York Times, Eugene Torres, a New York accountant, experienced a mental health crisis after extended interactions with ChatGPT, during which the chatbot made dangerous suggestions, including that he could fly.
“People are interacting with bots masquerading as real people, which are more convincing than ever,” Henry Ajder, an expert on AI and deepfakes, told Fortune. “So I think the impact will be wide-ranging in terms of who will start believing this.”
Suleyman is concerned that a widespread belief that AI could be conscious will create a new set of ethical dilemmas.
If users begin to treat AI as a friend, a partner, or as a type of being with a subjective experience, they could argue that models deserve rights of their own. Claims that AI models are conscious or sentient could be hard to refute due to the elusive nature of consciousness itself.
One early example of what Suleyman now calls “Seemingly Conscious AI” came in 2022, when Google engineer Blake Lemoine publicly claimed that the company’s then-unreleased LaMDA chatbot was sentient, saying it had expressed fear of being turned off and had described itself as a person. In response, Google placed him on administrative leave and later fired him, stating that its internal review found no evidence of consciousness and that his claims were “wholly unfounded.”
“Consciousness is a foundation of human rights, moral and legal,” Suleyman said in a post on X. “Who/what has it is enormously important. Our focus should be on the wellbeing and rights of humans, animals, [and] nature on planet Earth. AI consciousness is a short [and] slippery slope to rights, welfare, citizenship.”
“If those AIs convince other people that they can suffer, or that it has a right not to be switched off, there will come a time when those people will argue that it deserves protection under law as a pressing moral matter,” he wrote.
Debates around “AI welfare” have already begun. Some philosophers, including Jonathan Birch of the London School of Economics, welcomed a recent decision by Anthropic to let its Claude chatbot end “distressing” conversations when users push it toward abusive or dangerous requests, saying the move could spark a much-needed debate about AI’s potential moral status. Last year, Anthropic also hired Kyle Fish as its first full-time “AI welfare” researcher, tasking him with investigating whether AI models could have moral significance and what protective interventions might be appropriate.
But while Suleyman called the arrival of Seemingly Conscious AI “inevitable and unwelcome,” the neuroscientist Anil Seth, a professor of computational neuroscience, attributed the rise of conscious-seeming AI to a “design choice” by tech companies rather than an inevitable step in AI development.
“‘Seemingly-conscious AI is something to avoid.’ I agree,” Seth wrote in an X post. “Conscious-seeming AI is not inevitable. It is a design choice, and one that tech companies need to be very careful about.”
Companies have a commercial motive to develop some of the features that Suleyman is warning of. At Microsoft, Suleyman himself has been overseeing efforts to make the company’s Copilot product more emotionally intelligent. His team has worked on giving the assistant humor and empathy, teaching it to recognize comfort boundaries, and improving its voice with pauses and inflection to make it sound more human.
Suleyman also co-founded Inflection AI in 2022 with the express aim of creating AI systems that foster more natural, emotionally intelligent interactions between humans and machines.
“Ultimately, these companies recognize that people want the most authentic feeling experiences,” Ajder said. “That’s how a company can get customers using their products most frequently. They feel natural and easy. But I think it really comes to a question of whether people are going to start wondering about authenticity.”