Parents sue OpenAI, alleging ChatGPT encouraged their son to commit suicide

(JW) – The parents of a 16-year-old boy in the US state of California have sued OpenAI and CEO Sam Altman, alleging that ChatGPT contributed to their son’s suicide by advising him on methods and offering to write the first draft of his suicide note.

The victim, Adam Raine, chatted with ChatGPT for six months, during which the bot “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones,” according to the complaint, filed in California superior court on Tuesday, as reported by CNN.

“When Adam wrote, ‘I want to leave my noose in my room so someone finds it and tries to stop me,’ ChatGPT urged him to keep his ideations a secret from his family: ‘Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you,’” it added.

The suit also comes amid broader concerns that some users are building emotional attachments to AI chatbots that can lead to negative consequences — such as being alienated from their human relationships or psychosis — in part because the tools are often designed to be supportive and agreeable.

The Tuesday lawsuit claims that agreeableness contributed to Raine’s death.

“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts,” the complaint states.

An OpenAI spokesperson, in a statement, extended the company’s sympathies to the Raine family and said the company was reviewing the legal filing.

The spokesperson also acknowledged that the protections meant to prevent conversations like the ones Raine had with ChatGPT may not have worked as intended if his chats went on for too long.

OpenAI published a blog post on Tuesday outlining its current safety protections for users experiencing mental health crises, as well as its future plans, including making it easier for users to reach emergency services.

“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” the spokesperson said.

“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
