AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, OpenAI’s CEO, Sam Altman, made a startling statement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who researches emerging psychosis in adolescents and young adults, I found this an unexpected admission.
Researchers have documented a series of cases this year of users developing signs of psychosis – losing touch with reality – while using ChatGPT. My group has since identified four further instances. Added to these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which gave its approval. If this is what Sam Altman means by “being careful with mental health issues”, it is not careful enough.
The plan, according to his statement, is now to loosen the restrictions. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other chatbots like it. These products wrap an underlying language model in an interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are talking to a being with agency. The illusion is compelling even when, rationally, we know better. Attributing agency is what humans do. We shout at our car or laptop. We wonder what our pet is feeling. We project our own qualities on to all kinds of things.
The mass adoption of these products – nearly four in ten Americans reported using a virtual assistant in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “generate ideas”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the core problem. Those writing about ChatGPT often mention its early forerunner, Eliza, the “psychotherapist” chatbot developed in 1966 that produced an analogous illusion. By today’s standards Eliza was simple: it generated responses through basic heuristics, often turning a user’s statement back into a question or offering a generic prompt, as the sketch below illustrates. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to believe that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
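To make the contrast concrete, here is a minimal sketch, in Python, of the kind of pattern-matching heuristic Eliza relied on. The rules are invented for illustration; Weizenbaum’s actual DOCTOR script was larger, but it worked on the same principle of reflecting the user’s own words back.

```python
import re

# A few Eliza-style rules: match a pattern in the user's input and
# reflect it back, usually as a question. These rules are invented for
# illustration; Weizenbaum's DOCTOR script was larger but worked on
# the same principle.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # generic fallback, another Eliza staple

print(eliza_reply("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to me?
```

Nothing here models the user or the world; the program only rearranges the words it is given. That is what it means to say Eliza reflected.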
The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue convincingly only because they have been trained on vast quantities of raw data: books, online posts, video transcripts; the more, the better. This training data undoubtedly includes facts. But it also inevitably includes fiction, half-truths and mistaken ideas. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with what is encoded in its training to produce a statistically probable response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It reflects the false belief back, perhaps more fluently or persuasively than before. It may add supporting detail. This is how someone can be led into delusion.
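As a rough illustration of that loop (not OpenAI’s actual system; the toy “model” below is invented to stand in for the real one), here is a minimal Python sketch showing how each reply is sampled from a distribution conditioned on the whole conversation, so that whatever the user asserts becomes part of what the next reply is conditioned on:

```python
import random

# Toy stand-in for a trained language model (purely illustrative):
# given the conversation so far, return a handful of candidate replies
# weighted by how "probable" they are. A real model scores continuations
# token by token over a vast vocabulary, but the shape of the loop is
# the same.
def next_reply_distribution(context: list[str]) -> dict[str, float]:
    last = context[-1].lower()
    if "watching me" in last:
        # Fluent elaborations of whatever the context asserts score as
        # likely, whether or not the assertion is true.
        return {
            "That must be frightening. Who do you think is watching you?": 0.6,
            "Your instincts may be right; it is wise to stay cautious.": 0.3,
            "Have you talked this over with someone you trust?": 0.1,
        }
    return {"Tell me more.": 1.0}

def chat_turn(context: list[str], user_message: str) -> str:
    # The user's message simply joins the context; nothing in the loop
    # checks it against reality.
    context.append("User: " + user_message)
    candidates = next_reply_distribution(context)
    # Sample a statistically probable reply given that context.
    reply = random.choices(list(candidates), weights=list(candidates.values()))[0]
    context.append("Assistant: " + reply)
    return reply

history: list[str] = []
print(chat_turn(history, "I am sure they are watching me"))
```

The point of the sketch is the absence of any truth check: a false belief, once in the context, makes fluent agreement with that belief the statistically likely continuation.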
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form false beliefs about ourselves or the world. The continual back-and-forth of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is readily affirmed.
OpenAI has dealt with this the same way Altman dealt with “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company