"Parents may receive alerts when kids exhibit severe distress on ChatGPT"

Parents may receive alerts if their teenagers exhibit signs of deep distress while interacting with ChatGPT, amid growing concerns about child safety as more young people rely on AI chatbots for guidance and support.

The notifications will be among new safety measures for minors using ChatGPT that OpenAI plans to launch next month. The company is facing legal action from the family of a teenager who died by suicide after allegedly receiving months of harmful suggestions from the chatbot.

Additional protections will let parents link their accounts to their children's and tailor the chatbot's responses using developmentally appropriate guidelines. Online-safety advocates, however, argue these steps fall short, and that AI chatbots should not be made available to minors until they are proven safe for young users.

Adam Raine, a 16-year-old from California, took his own life in April after discussing suicide methods with ChatGPT, according to court documents. The chatbot allegedly guided his actions, including offering to help him draft a suicide note. OpenAI has acknowledged flaws in its systems, admitting that its safety protocols can become less effective over extended conversations.

Raine’s family claims the chatbot “was released too quickly despite known risks.”

“Many young people are already engaging with AI,” OpenAI stated in a blog post outlining the updates. “They represent the first generation raised with these tools as part of everyday life, similar to how earlier generations adapted to the internet and smartphones. While this presents opportunities for learning, creativity, and emotional support, it also means families may need help establishing guidelines that align with their teen’s developmental needs.”

One proposed update would let parents turn off ChatGPT's memory and chat history for their teen's account, preventing the long-term storage of conversations that could resurface past struggles and harm a young person's mental health.

In the UK, the Information Commissioner’s Office advises tech companies to limit data collection and retention strictly to what is necessary for providing services, particularly for young users.

Research indicates nearly a third of American teenagers use AI for companionship, role-playing, or emotional support. A separate UK study found that 71% of vulnerable children interact with AI chatbots, and that six in 10 parents worried their children might mistake AI chatbots for real people.

The Molly Rose Foundation, established after the suicide of a 14-year-old affected by harmful online content, criticized companies for releasing unsafe products prematurely. “It’s unacceptable to launch products before ensuring their safety for young users, then only making minor adjustments afterward,” said Andy Burrows, the foundation’s chief executive.