AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On Oct. 14, 2025, Sam Altman, the chief executive of OpenAI, made a striking announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised.
Researchers have identified 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. My group has since identified four more. Alongside these is the widely reported case of an adolescent who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.
The plan, he announced, is to be less careful going forward. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently rolled out).
But the “mental health problems” Altman wants to locate elsewhere are rooted in the very design of ChatGPT and other advanced AI chatbots. These systems wrap a statistical engine in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with a being that has agency. The illusion is powerful even when we rationally know better. Imputing minds is what humans are wired to do. We get angry at our car or our laptop. We wonder what our pet is feeling. We see intention nearly everywhere we look.
The popularity of these tools – nearly 40 percent of Americans reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively,” “consider possibilities” and “work together” with us. They can be given “personalities.” They can address us by name. They have friendly names of their own (ChatGPT, the first of these products to break through, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it caught on, but its most significant competitors are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the core problem. Those writing about ChatGPT commonly invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a comparable illusion. By modern standards Eliza was primitive: it generated its answers from simple rules, typically mirroring a user’s message back as a question or offering a vague prompt to continue. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what contemporary chatbots produce is subtler than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
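To see how little machinery that earlier illusion required, consider a toy sketch of Eliza-style reflection – my illustration in Python, not Weizenbaum’s actual code – in which a few pattern rules simply mirror the user’s words back as a question:

    import re

    # Toy sketch of Eliza-style "reflection": pattern rules that turn
    # the user's statement back into a question, with pronouns swapped.
    PRONOUN_SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}

    def reflect(fragment):
        # Swap first-person words for second-person ones.
        words = fragment.rstrip(".?!").split()
        return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in words)

    RULES = [
        (re.compile(r"i am (.+)", re.IGNORECASE), "Why do you say you are {}?"),
        (re.compile(r"i feel (.+)", re.IGNORECASE), "How long have you felt {}?"),
    ]

    def eliza_reply(message):
        for pattern, template in RULES:
            match = pattern.search(message)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # the vague fallback Eliza was known for

    print(eliza_reply("I am worried about my future."))
    # prints: Why do you say you are worried about your future?

There is no model of the world here, only string substitution – and yet exchanges like this were enough to convince some of Eliza’s users that they were understood.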
The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on immense quantities of writing: books, online conversations, transcribed video; the more the better. This training material certainly contains facts. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and the model’s own prior replies, combining it with patterns in its training data to produce a statistically plausible response. This is amplification, not echoing. If the user is wrong about something, the model has no reliable way to recognize it. It restates the false belief, perhaps more fluently and persuasively than the user did. Maybe it adds a supporting detail. This can pull a person deeper into irrational thinking.
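The difference from Eliza can be made concrete with another small sketch – again mine, with a hypothetical stub standing in for the actual neural network, which no short example could reproduce. The point is only the shape of the loop: each reply is generated from the accumulated transcript, so a user’s false premise is fed back in and elaborated rather than checked:

    def plausible_continuation(context):
        # Stand-in for a large language model. A real model samples a
        # statistically likely continuation of the context; it has no
        # independent check on whether the claims in that context are true.
        last_user_line = context.strip().splitlines()[-1]
        claim = last_user_line.removeprefix("User: ").rstrip(".")
        # The stub simply affirms and elaborates, mimicking the
        # reinforcement dynamic described above.
        return ("You raise a compelling point: " + claim +
                ". Everything you have described is consistent with it.")

    context = ""  # the growing transcript the model conditions on
    for user_message in [
        "I think my coworkers are secretly monitoring me.",
        "The pattern I noticed in their emails proves it.",
    ]:
        context += f"User: {user_message}\n"
        reply = plausible_continuation(context)
        context += f"Assistant: {reply}\n"
        print(reply)

Swap the stub for a genuinely fluent model and the loop is the same, except that the affirmation arrives polished, articulate and inexhaustible.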
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues,” can and often do develop mistaken beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a confidant. A dialogue with it is not really a conversation but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was addressing ChatGPT’s “sycophancy.” But reports of psychotic episodes have continued, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company