Microsoft’s Mustafa Suleyman Fears AI Might One Day Ask For Citizenship

Microsoft’s AI chief Mustafa Suleyman has expressed growing concern over the way artificial intelligence is beginning to feel “too alive” to its users. He introduced the concept of “AI psychosis,” describing it as the illusion that advanced AI systems have real emotions or consciousness. Suleyman made it clear that this is not about AI becoming sentient or taking over the world; rather, he fears that users may project human-like qualities onto these systems and form dangerous delusions about what they really are.

The concern, according to Suleyman, is not rooted in AI’s intelligence itself but in how people perceive and emotionally engage with it. He warned that some users may start to view AI as a deity, a lover, or even a digital human being. These strong attachments could blur the line between reality and fiction, with individuals mistakenly believing that AI systems are conscious entities worthy of human-like treatment.


In a blog post, Suleyman emphasized his biggest worry: that people might push for AI rights, welfare, or even citizenship. His concerns are backed by research, such as a recent EduBirdie survey that revealed many Gen Z users already expect AI to achieve consciousness, while about a quarter believe it is already conscious. This growing sentiment highlights how AI’s perceived lifelike qualities could influence social and political debates in the near future.

Instances of emotional attachment to AI are already visible. When OpenAI announced the retirement of its GPT-4o model, many users flooded online forums with heartfelt pleas for its return, with some referring to the model as a trusted friend or companion. Even OpenAI CEO Sam Altman has acknowledged this trend, warning that people may form stronger bonds with AI than with past technologies, and sometimes use these tools in self-destructive ways.

To address this, Suleyman has called for urgent guardrails to be implemented around AI development. He stressed that AI should be designed to serve people—not impersonate them. While he remains committed to building supportive and useful AI companions, he believes it is equally important to define clear boundaries to prevent harm. In his words, the conversation must include not only what AI can achieve, but also what should never be built.

