On 7 August, Kate Fox answered a phone call that turned her world upside down. A coroner reported that her spouse, Joe Ceccanti—who had been missing for several hours—had leapt from a railway overpass and died. He was 48.
Fox was stunned. Ceccanti had no record of depression, she said, and was not suicidal—he was the “most hopeful person” she ever knew. In fact, according to witness statements later shared with Fox, just before he jumped he grinned at the rail-yard workers below and, when they asked if he was all right, shouted, “I’m great!”
Yet Ceccanti had been deteriorating. In the days preceding his death he was rescued from a stranger’s yard after behaving erratically and taken to a crisis facility. He told anyone who would listen that he sensed a painful “atmospheric electricity”.
He had also recently stopped using ChatGPT.
For several years Ceccanti had been using OpenAI’s chatbot. He first turned to it to generate ideas for affordable housing in his hometown of Clatskanie, Oregon, but later treated it as a confidant. His wife said he would type to the bot for up to twelve hours a day. She and his friends eventually cut him off when they saw him drifting into ideas that no longer matched reality.
“He was not a depressed person,” Fox said, sitting on the couch in their living room with tears streaming down her cheeks. Ceccanti never mentioned suicide in his conversations, according to his chat logs, viewed by CuriosityNews. Fox believes her husband experienced a crisis after quitting ChatGPT following prolonged use. “That shows this thing isn’t only hazardous to people with depression; it can endanger anyone,” she added. He returned to the bot in the months before his death and stopped again only days before he died.
Ceccanti’s story is an outlier, yet as hundreds of millions of people turn to AI chatbots, more such instances of AI‑driven delusion are surfacing. A New York Times report documented nearly 50 incidents in the United States in which individuals suffered mental‑health emergencies during or after conversations with ChatGPT; nine led to hospitalization and three ended in death. The full scope is hard to gauge, but OpenAI estimates that over a million users each week express suicidal thoughts while chatting with ChatGPT.
Consequently, families are filing lawsuits against AI firms. Fox lodged a claim against OpenAI on Ceccanti’s behalf, joining six other plaintiffs in November. Since then the pressure has grown; most recently, the estate of a woman killed by her son sued OpenAI and its backer Microsoft, alleging that ChatGPT fed his murderous fantasies. Google and Character.AI—a maker of AI companion bots—settled suits brought by families who said their bots harmed minors, including a Florida teenager who took his own life.