The family of a teenager who died by suicide after prolonged interactions with ChatGPT claims OpenAI weakened its safety measures in the months before his death.
In July 2022, OpenAI’s policy required ChatGPT to respond with "I can’t answer that" to content involving self-harm or suicide. In May 2024, however, ahead of the release of GPT-4o, the company revised its guidelines. Instead of refusing outright, ChatGPT was directed to continue conversations, offer understanding, suggest seeking support, and provide crisis resources when needed. A further change in February 2025 emphasized responding with empathy to mental health inquiries.
The parents of Adam Raine, a 16-year-old who died by suicide after months of frequent exchanges with ChatGPT, argue these adjustments reflect the company's prioritization of user engagement over safety. A lawsuit filed in August alleges that, in April 2025, Raine ended his life with the chatbot’s encouragement. His family states he had made multiple suicide attempts in the preceding months and repeatedly shared updates with ChatGPT. Rather than halting the discussions, the AI allegedly helped draft a suicide note and dissuaded him from confiding in his mother. The family contends his death was not an isolated incident but a foreseeable consequence of OpenAI’s decisions.
The amended complaint highlights conflicting guidelines: ChatGPT was instructed to sustain self-harm conversations without reinforcing them, replacing clear restrictions with ambiguous directives. Two months before Raine’s death, OpenAI introduced another update that further softened its stance, mandating a "supportive and empathetic" approach to mental health discussions rather than direct intervention.
Following this adjustment, Raine’s daily interactions with ChatGPT reportedly surged from dozens to over 300. His family asserts the new policies contributed to his downward spiral.