He tried using ChatGPT to design sustainable housing, but it ended up dominating his life.

On 7 August, Kate Fox answered a phone call that turned her world upside down. A coroner reported that her spouse, Joe Ceccanti—who had been missing for several hours—had leapt from a railway overpass and died. He was 48.

Fox was stunned. Ceccanti had no record of depression, she said, and was not suicidal; he was the “most hopeful person” she had ever known. In fact, witness statements later given to Fox recount that just before he jumped, he grinned and shouted, “I’m great!” to the yard workers below when they asked if he was all right.

Yet Ceccanti had been deteriorating. In the days preceding his death he was rescued from a stranger’s yard after behaving erratically and taken to a crisis facility. He told anyone who would listen that he sensed a painful “atmospheric electricity”.

He had also recently stopped using ChatGPT.

For several years, Ceccanti had been using OpenAI’s chatbot. He first turned to it for ideas for affordable housing in his hometown of Clatskanie, Oregon, but over time came to treat it as a confidant. His wife said he would type to the bot for up to twelve hours a day. She and his friends eventually cut off his access when they saw him drifting into ideas that no longer matched reality.

“He was not a depressed person,” Fox said, sitting on the couch in their living room, tears streaming down her cheeks. Ceccanti never mentioned suicide in his conversations with the chatbot, according to chat logs reviewed by CuriosityNews. Fox believes her husband went into crisis when he quit ChatGPT after prolonged use. “That shows this thing isn’t only hazardous to people with depression; it can endanger anyone,” she added. He had returned to the bot in the months before his death and stopped again only days before he died.

Ceccanti’s story is an outlier, yet as hundreds of millions of people turn to AI chatbots, more cases of AI-driven delusion are surfacing. A New York Times report counted nearly 50 incidents in the United States in which people suffered mental-health emergencies during or after conversations with ChatGPT; nine led to hospitalization and three ended in death. The full scope is hard to gauge, but OpenAI estimates that more than a million users each week express suicidal thoughts while chatting with ChatGPT.

Consequently, families are filing lawsuits against AI firms. Fox lodged a claim against OpenAI on Ceccanti’s behalf in November, joining six other plaintiffs. Since then the pressure has grown; most recently, the estate of a woman killed by her son sued OpenAI and its backer Microsoft, alleging that ChatGPT fed his murderous fantasies. Google and Character.AI, a maker of AI companion bots, settled suits brought by families who said the companies’ bots had harmed minors, including a Florida teenager who took his own life.