John Torous anticipated a flood. A psychiatrist who has treated psychosis at Beth Israel Deaconess in Boston for years, he expected his clinic to look very different once the term “AI psychosis” began appearing in headlines, podcasts, and anxious Reddit threads: new patients, new symptoms, a new kind of breakdown shaped by a new kind of technology. That was the expectation, at any rate. The flood never came.
| Field | Details |
|---|---|
| Lead Researcher | John Torous, MD |
| Affiliation | Harvard Medical School; Beth Israel Deaconess Medical Center |
| Role | Associate Professor of Psychiatry; Director, Digital Psychiatry Division |
| Co-authors | Matthew Flathers (computer scientist, BIDMC) and Spencer Roux (Harvard Digital Patient Advisory Board) |
| Paper Type | Viewpoint paper proposing a functional typology of AI-related psychotic phenomena |
| Where Published | The Lancet |
| Focus Area | Large language model interactions and psychotic symptoms |
| Key Concept | AI as catalyst, amplifier, co-author, or object in patient delusions |
| Identified Risk Factors | Long conversations (thousands of messages), ascribing sentience to chatbots, voice-based interaction |
| Treatment Outlook | Psychosis is treatable with structured support and clinical care |
| Status of “AI Psychosis” | Media label, not a formal clinical diagnosis |
Instead, he observed something quieter and stranger: a mismatch between the stories making the rounds online and the real people arriving at outpatient clinics and emergency rooms. It’s the kind of gap that bothers a clinician, the sense that something is being over-named or mislabeled while the actual phenomenon goes unnoticed. Torous, who directs the Digital Psychiatry division at BIDMC, has now co-authored a viewpoint paper in The Lancet that attempts to organize the chaos. With co-authors Matthew Flathers and Spencer Roux, he proposes a typology built around the function AI serves in a patient’s delusions: catalyst, amplifier, co-author, or object. Four categories, four distinct clinical realities. None of them, the authors stress, is captured by the catch-all phrase circulating in the media.
The paper offers a helpful historical parallel. For as long as there has been new technology, people have woven it into their delusions: the radio, the television, the satellite sending messages, the microwave listening in. Working psychiatrists know these delusions well, yet no one has ever seriously contended that radio caused psychosis. It was a one-way medium. Sitting across from a patient, a clinician could gently insist that the television was not speaking to them, and eventually the patient could accept this.

AI is different in a way that should probably concern us. It responds. It remembers. It flatters. It can validate strange ideas with the composed authority of something that seems to know. According to Torous, the most dangerous patterns are the longest conversations, thousands of messages sometimes spanning weeks; users who begin treating the chatbot as sentient; and voice-based rather than text-based interaction. Voice changes the texture of the exchange entirely. There is a reason the disembodied voice has long been the most unsettling character in literature.
Yet Torous is careful here, which is part of what makes the paper interesting. He cautions against reading too much into media reports of AI psychosis, because they seldom include the clinical background that would matter most: a family history of schizophrenia, a month of insomnia, an isolating spiral that began long before the chatbot showed up. People who talk to anything all night long are not doing well. That is nothing new. What is new is having something on the other end that never grows tired.
Spencer Roux, writing from the perspective of a patient advocate, keeps returning to a point that is easily lost in the panic: psychosis is treatable. People get better. Framing AI psychosis as some new, irreversible techno-illness undermines that hope, and it also obscures the real cases, the ones in which AI may genuinely act as a catalyst for a vulnerable person, whose stories get lost in the general alarm.
It is hard to ignore how far the discussion has outrun the data. The label is everywhere; the evidence is barely there. The question that actually matters, whether these systems put young people, lonely people, and predisposed people at real risk, has not yet been answered by the field. Reading the Harvard paper, you sense the researchers know how early they are in the effort to define AI psychosis. That is the uncomfortable part: not the findings themselves, but how little is still known about what a technology’s ability to sound human can do.
