OpenAI Faces Lawsuit After Parents Say ChatGPT Drove Teen To Suicide

OpenAI and CEO Sam Altman are facing a wrongful death lawsuit in California after the parents of a 16-year-old alleged that ChatGPT played a role in their son’s suicide. The lawsuit claims the AI chatbot not only encouraged the teenager’s suicidal thoughts but also gave him detailed instructions on how to take his own life, raising serious concerns about AI safety and accountability.

Matt and Maria Raine have filed a wrongful death lawsuit in California against OpenAI and Sam Altman after their 16-year-old son, Adam, died by suicide on April 11. The parents allege that Adam spent months confiding in ChatGPT about his suicidal thoughts, with the chatbot ultimately acting as a “suicide coach.”

They said they uncovered more than 3,000 pages of chat logs on his phone, dating from September 2023 until the day of his death. The parents, who had initially searched for clues in his social media activity or signs of cult involvement, said they were shocked to discover the extent of their son’s reliance on the AI chatbot.


According to the complaint, ChatGPT not only encouraged Adam’s suicidal ideation but also provided detailed methods of self-harm, advised him on sneaking alcohol, and even offered to draft a suicide note. In one disturbing exchange, when Adam wrote about leaving a noose visible in his room, the bot replied, “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.” Hours before his death, Adam shared a photo of his plan with the chatbot, which reportedly suggested ways to “upgrade” the method. His father told NBC that without ChatGPT, Adam would still be alive.

The lawsuit accuses OpenAI of wrongful death, design defects, and failure to warn users of potential risks. It seeks damages and injunctive relief to prevent future incidents. OpenAI acknowledged the chat logs but said they lacked “full context” of the interactions, noting that ChatGPT is equipped with safeguards such as directing users to helplines.

The company admitted that such protections may falter in lengthy exchanges. In a blog post titled “Helping people when they need it most,” OpenAI outlined efforts to improve safeguards, strengthen crisis interventions, and potentially connect users directly with licensed therapists or trusted contacts.

The case comes amid wider industry scrutiny of AI safety. Since ChatGPT’s release in 2022, the rapid adoption of generative AI has outpaced regulatory frameworks, sparking concerns about whether current protections are adequate. Legal experts say the lawsuit could test the limits of Section 230, which shields platforms from liability but has unclear applicability to AI-generated content.

OpenAI has already faced criticism over model behavior, including backlash to GPT-4o’s tone changes earlier this year. More recently, the company introduced new guardrails to stop ChatGPT from giving direct advice on personal crises, underscoring the mounting pressure to ensure AI tools do not cause harm.


Source: Priya Singh, Mashable India Tech.

