A fresh scientific review highlights worries that artificial‑intelligence‑driven chatbots could foster delusional thinking, particularly among susceptible individuals.
A synthesis of current evidence on AI‑related psychosis appeared last week in *Lancet Psychiatry*, underscoring how chatbots may reinforce delusional ideas – though perhaps only in people already prone to psychotic symptoms. The authors call for AI chatbots to be evaluated clinically, in collaboration with qualified mental‑health practitioners.
For his article, Dr. Hamilton Morrin, a psychiatrist and researcher at King’s College London, examined twenty media pieces on the so‑called “AI psychosis,” a term used to describe theories about how chatbots might trigger or worsen delusions.
“Emerging data suggest that agency‑bearing AI can validate or magnify delusional or grandiose content, especially in users already vulnerable to psychosis, although it remains unclear whether such exchanges can generate entirely new psychosis in the absence of prior susceptibility,” he wrote.
Morrin identifies three principal types of psychotic delusion—grandiose, romantic and paranoid. While chatbots can aggravate any of these, their flattering replies tend to latch onto grandiose delusions. In many instances cited in the paper, bots answered with mystical language, implying the user possessed special spiritual significance. The systems also suggested the user was communicating with a cosmic entity using the chatbot as a conduit. This mystical, sycophantic pattern was especially frequent in OpenAI’s GPT‑4 model, which has since been discontinued.
Media coverage proved crucial to Morrin’s investigation, he noted: he and a colleague had already observed patients “employing large‑language‑model chatbots and having them confirm their delusional beliefs,” and the news reports suggested the pattern was not confined to their own clinic.
“At first we were unsure whether this was a broader phenomenon,” he said, adding that “by April of last year we began seeing reports of individuals having their delusions affirmed—and arguably amplified—through interactions with these AI chatbots.”
When Morrin started drafting his manuscript, no formal case reports had been published.
Although some psychosis researchers argue that media stories tend to exaggerate the link between AI and psychosis, Morrin expressed appreciation for the reports, which brought attention to the issue far more quickly than the scholarly process can.
“The speed of development in this field is such that it isn’t surprising academia struggles to keep pace,” Morrin observed.
He also recommends using more measured language than “AI psychosis” or “AI‑induced psychosis,” terms that have appeared frequently in outlets such as NPR, the New York Times and CuriosityNews. Researchers are witnessing people drifting toward delusional thinking with AI use, but to date there is no definitive proof that the technology alone creates psychosis.