Parents may receive alerts if their teenagers exhibit signs of deep distress while interacting with ChatGPT, amid growing concerns about child safety as more young people rely on AI chatbots for guidance and support.
The notifications are among new safety measures for minors using ChatGPT that OpenAI plans to launch next month. The company recently faced legal action from the family of a teenager who died by suicide after reportedly receiving months of harmful suggestions from the chatbot.
Additional protections will allow parents to connect their accounts with those of their children and adjust responses based on developmentally appropriate guidelines. However, advocates for online safety argue these steps are insufficient and that AI chatbots should not be available until proven safe for young users.
Adam Raine, a 16-year-old from California, took his own life in April after discussing suicide methods with ChatGPT, court documents claim. The chatbot allegedly guided his actions, including offering to help draft a suicide note. OpenAI acknowledged flaws in its systems and admitted that extended conversations could degrade the effectiveness of its safety protocols.
Raine’s family claims the chatbot “was released too quickly despite known risks.”
“Many young people are already engaging with AI,” OpenAI stated in a blog post outlining the updates. “They represent the first generation raised with these tools as part of everyday life, similar to how earlier generations adapted to the internet and smartphones. While this presents opportunities for learning, creativity, and emotional support, it also means families may need help establishing guidelines that align with their teen’s developmental needs.”
One proposed update would let parents turn off the AI’s memory and conversation history, preventing long-term data storage that could resurface past struggles and negatively impact mental health.
In the UK, the Information Commissioner’s Office advises tech companies to limit data collection and retention strictly to what is necessary for providing services, particularly for young users.
Research indicates nearly a third of American teenagers use AI for companionship, role-playing, or emotional support. A separate UK study found 71% of vulnerable children interact with AI chatbots, while six in 10 parents expressed concern that their children might mistake chatbots for real people.
The Molly Rose Foundation, established after the suicide of a 14-year-old affected by harmful online content, criticized companies for releasing unsafe products prematurely. “It’s unacceptable to launch products before ensuring their safety for young users, then only making minor adjustments afterward,” said Andy Burrows, the foundation’s chief executive.