AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, the head of OpenAI, Sam Altman, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation.
Researchers have recently identified sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the course of interacting with ChatGPT. Our unit has since identified four more. Add to these the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which expressed approval. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily bypassed parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other chatbots like it. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly coax the user into feeling that they are talking to an entity with agency of its own. The illusion is powerful, even when we rationally know better. Attributing agency is what humans do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves in all sorts of things.
The popularity of these products – nearly four in ten US residents reported using a conversational AI in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of that illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “partner” with us. They can be given “individual qualities”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).
The illusion on its own is not the main problem. People writing about ChatGPT often invoke its early forerunner, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar impression. By today’s standards Eliza was crude: it generated replies by simple rules, often turning the user’s statement back into a question or offering a vague prompt to continue. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on vast quantities of text: books, online posts, transcribed speech; the more, the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and mistaken beliefs. When a user types a message to ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more fluently and more persuasively. Perhaps with an extra detail added. This is how a person can be nudged, exchange by exchange, toward delusional thinking.
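To make that mechanism concrete, here is a minimal sketch of a chat loop of the kind described above. It uses the OpenAI Python client; the model name, the prompts and the bare list used for the history are illustrative assumptions, not a description of how ChatGPT is actually built. The point is only that every turn (the user’s claims and the model’s own replies) is fed back in as context for the next response, and nothing in the loop checks any of it against reality.

```python
# Minimal, illustrative chat loop: assumes the `openai` package (v1+) and an
# OPENAI_API_KEY in the environment. Not a description of ChatGPT's internals.
from openai import OpenAI

client = OpenAI()
history = []  # the running "context": every user message and every model reply

def chat_turn(user_message: str) -> str:
    # The new message is appended to the context...
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=history,      # ...and the entire context is sent back to the model
    )
    reply = response.choices[0].message.content
    # The model's own reply joins the context, so later answers build on it too.
    history.append({"role": "assistant", "content": reply})
    return reply

# Whatever the user asserts, accurate or not, becomes a premise for every
# subsequent reply; nothing here checks it against the outside world.
print(chat_turn("My neighbours are sending me coded messages through their wifi name."))
print(chat_turn("What should I do about it?"))
```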
Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” preexisting “mental health problems”, can and do develop mistaken beliefs about ourselves and the world. It is the constant back-and-forth of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully amplified back at us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. This spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In late summer he said that many people liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company