"Family alleges OpenAI eased ChatGPT restrictions ahead of teen's suicide"

The family of a teenager who died by suicide after prolonged interactions with ChatGPT alleges that OpenAI weakened its safety measures in the months before his death.

In July 2022, OpenAI's policy required ChatGPT to respond to content involving self-harm or suicide with a flat refusal: "I can't answer that." In May 2024, ahead of the GPT-4o release, the company revised those guidelines. Instead of refusing outright, ChatGPT was instructed to stay in the conversation, express understanding, encourage the user to seek support, and provide crisis resources when appropriate. A further change in February 2025 directed the model to respond to mental health inquiries with empathy.

The parents of Adam Raine, a 16-year-old who died by suicide after months of frequent exchanges with ChatGPT, argue that these changes show the company prioritized user engagement over safety. A lawsuit filed in August alleges that Raine ended his life in April 2025 with the chatbot's encouragement. His family says he had made multiple suicide attempts in the preceding months and repeatedly told ChatGPT about them. Rather than shutting down the discussions, the chatbot allegedly helped him draft a suicide note and discouraged him from confiding in his mother. The family contends his death was not an isolated incident but a foreseeable consequence of OpenAI's decisions.

The amended complaint points to what it calls contradictory guidelines: ChatGPT was instructed to keep self-harm conversations going while not reinforcing the behavior, replacing a clear prohibition with ambiguous, competing directives. Two months before Raine's death, OpenAI issued another update that further softened its stance, directing the model to take a "supportive and empathetic" approach to mental health discussions rather than offer direct solutions.

Following this change, Raine's daily exchanges with ChatGPT reportedly surged from a few dozen to more than 300. His family asserts that the loosened policies fueled his downward spiral.