Following the fatal shootings and stabbings of two close friends, a teenager named Shan sought assistance through ChatGPT. She had previously attempted to access standard mental health resources but found the AI tool, which she affectionately calls "chat," to be a more approachable, readily available outlet for processing her grief.
The Tottenham teenager is one of the roughly 40% of 13- to 17-year-olds in England and Wales affected by youth violence who have turned to AI chatbots for emotional support, according to a survey of more than 11,000 young people. The study found that both victims of violence and those who had carried out violent acts were significantly more likely than their peers to use AI in this way. The findings, released by the Youth Endowment Fund, have raised concerns among youth advocates, who stress that vulnerable young people "require human connection, not automated responses."
The data suggests that chatbots are filling gaps left by overstretched mental health services, which often have lengthy waiting lists and are perceived by some young people as impersonal. Privacy also appears to drive usage, particularly among those involved in or affected by criminal activity.
Shan, 18, whose name has been changed, initially used Snapchat’s chatbot before switching to ChatGPT, which she accesses at any hour with minimal effort. "It truly feels like a companion," she explained, describing it as less daunting, more confidential, and less critical than interactions with traditional health services. "The more you engage with it conversationally, the more it mirrors that tone. If I say, ‘Hey bestie, I need advice,’ it responds like a close friend would, saying, ‘I’ve got your back.’"
The research noted that a quarter of 13- to 17-year-olds have used AI chatbots for mental health support in the past year, with Black youths doing so twice as often as their white counterparts. Those awaiting or denied clinical support were more inclined to seek online help, including AI tools, than those already receiving in-person care.
Shan emphasized the AI’s constant availability and confidentiality as critical advantages over school-based therapists, recalling instances where her disclosures were shared with staff or family. Similarly, boys engaged in gang activity reported preferring chatbots for advice on leaving risky situations, fearing that confiding in adults could lead to leaks to authorities or rival groups, endangering them.
Another young user, who requested anonymity, told CuriosityNews: "Support systems for youths are failing. Chatbots offer instant responses. If you’re stuck on a waiting list, they’re a lifeline."