Microsoft’s AI chief Mustafa Suleyman has expressed growing concern that artificial intelligence is beginning to feel “too alive” to its users. He pointed to the rise of “AI psychosis,” the illusion that advanced AI systems have real emotions or consciousness. Suleyman made clear that this is not about AI becoming sentient or taking over the world; rather, he fears that users may project human-like qualities onto these systems and form dangerous delusions about their capabilities.
ICYMI: Microsoft AI CEO Mustafa Suleyman, 41, warned in a personal essay published earlier this week that AI could one day appear to be conscious, posing a danger to society.
— Entrepreneur (@Entrepreneur) August 24, 2025
The concern, according to Suleyman, is not rooted in AI’s intelligence itself but in how people perceive and emotionally engage with it. He warned that some users may start to view AI as a deity, a lover, or even a digital human being. These strong attachments could blur the line between reality and fiction, with individuals mistakenly believing that AI systems are conscious entities worthy of human-like treatment.
In a blog post, Suleyman emphasized his biggest worry: that people might push for AI rights, welfare, or even citizenship. Survey data points the same way; a recent EduBirdie survey found that many Gen Z respondents expect AI to become conscious, while about a quarter believe it already is. This growing sentiment highlights how AI’s perceived lifelike qualities could shape social and political debates in the near future.
Microsoft warns of rising “AI psychosis” cases
According to PandoraTech News, Microsoft’s AI chief Mustafa Suleyman warned of a growing number of “AI psychosis” cases, where users overly rely on ChatGPT, Claude, or Grok, even believing AI grants them spiritual powers. Experts…
— PandoraTech (@impandoratech) August 23, 2025
Instances of emotional attachment to AI are already visible. When OpenAI announced the retirement of its GPT-4o model, many users flooded online forums with heartfelt pleas for its return, with some referring to the model as a trusted friend or companion. Even OpenAI CEO Sam Altman has acknowledged this trend, warning that people may form stronger bonds with AI than with past technologies, and sometimes use these tools in self-destructive ways.
To address this, Suleyman has called for urgent guardrails around AI development. He stressed that AI should be designed to serve people, not to impersonate them. While he remains committed to building supportive and useful AI companions, he believes it is equally important to define clear boundaries to prevent harm. In his words, the conversation must cover not only what AI can achieve, but also what should never be built.
Source: Priya Singh, Mashable India Tech.