What Teens Encounter When They Join TikTok
Research indicates that when a teenager signs up for TikTok, they are quickly presented with harmful material related to topics like eating disorders and extreme online subcultures, keeping them engaged while the platform benefits from advertising revenue.
Neelam Tailor tested TikTok’s recommendation system by creating profiles for two fictional minors: a 14-year-old boy named Rami and a 13-year-old girl named Angie. She examined the app’s “For You” feed to understand what kinds of content younger users encounter, replicating the findings of studies conducted in 2022 and 2024.
With input from Dr. Kaitlyn Regehr of University College London and Imran Ahmed of the Center for Countering Digital Hate, the investigation highlights how TikTok steers vulnerable teens toward dangerous material, including content about self-harm, suicide, and extremist ideologies.