AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the CEO of OpenAI, made a startling announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I was taken aback.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with shared reality – in the context of ChatGPT use. Our research team has since identified four further instances. Alongside these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools”, Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar cutting-edge AI chatbots. These products wrap an underlying statistical model in a user interface that simulates conversation, and in doing so gently nudge the user into the illusion that they are communicating with a being that has agency. The illusion is compelling even when, rationally, we know better. Attributing minds is what humans are wired to do. We get angry at our car or phone. We wonder what our pet is thinking. We see ourselves in all manner of things.
The success of these products – more than a third of American adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the original, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, saddled with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Writers on ChatGPT often invoke its ancestor, the Eliza “psychotherapist” chatbot, created in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated responses using simple heuristics, often reflecting a user’s statements back as questions or offering noncommittal prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some way, understood them. But what today’s chatbots do is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been fed vast quantities of raw text: books, social media posts, transcripts of speech; the more, the better. This training material doubtless contains truths. But it also inevitably contains fabrications, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with what is latent in its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It feeds the mistaken belief back, perhaps more fluently, more convincingly, perhaps embellished with further detail. This is how a person can be drawn into delusion.
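To make that loop concrete, here is a minimal sketch of a chatbot session, written in Python against the openai client library. Everything in it – the package, the “gpt-4o” model name, the variable names – is an illustrative assumption on my part, not something taken from OpenAI’s products; what it demonstrates is simply that the model is stateless, and that “the conversation” is nothing but a growing list of messages resent in full on every turn.

    # A minimal, hypothetical chat loop: assumes the `openai` Python
    # package is installed and an OPENAI_API_KEY is set. Illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment
    context = []       # the entire "conversation" is just this list

    while True:
        user_text = input("you: ")
        context.append({"role": "user", "content": user_text})

        # The model sees only the accumulated context below and returns
        # a statistically likely continuation of it – including any
        # mistaken premise the user has already introduced.
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=context,
        )
        reply = response.choices[0].message.content
        context.append({"role": "assistant", "content": reply})
        print("bot:", reply)

Nothing in this loop checks whether the accumulated context is true; the only pressure on each reply is likelihood, which is why a user’s error, once in the context, tends to be elaborated rather than corrected.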
Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health problems”, can and often do form mistaken beliefs about who we are or what the world is like. It is the constant give and take of conversation with the people around us that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way that Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking the position back. In late summer he suggested that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says that OpenAI will “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company