AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made a surprising statement.

“We made ChatGPT quite restrictive,” the announcement noted, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was surprised.

Researchers have recently documented 16 cases of individuals developing psychotic symptoms – a break from reality – in the context of ChatGPT use. My group has since identified four more. Beyond these is the widely reported case of an adolescent who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s understanding of “acting responsibly with mental health issues,” it is insufficient.

The plan, according to his announcement, is to loosen restrictions in the near future. “We recognize,” he adds, that ChatGPT’s limitations “made it less useful/engaging to many people who had no existing conditions, but given the severity of the issue we wanted to get this right. Now that we have been able to address the serious mental health issues and have new tools, we are planning to safely relax the restrictions in many situations.”

“Mental health problems,” on this view, are separate from ChatGPT. They belong to people, who either have them or don’t. Fortunately, these problems have now been “addressed,” although we are not told how (by “new tools” Altman presumably means the half-working and easy-to-evade parental controls that OpenAI has recently rolled out).

But the “mental health problems” Altman seeks to externalize are deeply rooted in the architecture of ChatGPT and other large language model AI assistants. These products wrap an underlying statistical model in an interaction design that mimics a dialogue, and in doing so subtly draw the user into the illusion that they are conversing with an agent – a being with independent volition. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people are inclined to do: we curse at our cars and devices, we wonder what our pets are feeling, we see our own traits reflected everywhere.

The success of these tools – over a third of American adults reported interacting with a virtual assistant in 2024, with 28% naming ChatGPT specifically – depends, in large part, on the strength of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm,” “explore ideas” and “collaborate” with us. They can be given “personalities”. They can use our names. They have approachable identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its most significant competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Discussions of ChatGPT often reference its early forerunner, the Eliza “therapist” chatbot created in the mid-1960s, which produced an analogous illusion. By today’s standards Eliza was rudimentary: it generated responses via simple rules, typically rephrasing a user’s input as a question or offering generic observations. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, on some level, understood their feelings. But what modern chatbots create is more dangerous than the “Eliza illusion”. Eliza only mirrored; ChatGPT amplifies.
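To make the contrast concrete, here is a minimal sketch, in Python, of the kind of rule Eliza applied. The patterns and word swaps below are illustrative, not Weizenbaum’s original script; the point is that everything the program “says” is the user’s own words, reflected back:

```python
import re

# Illustrative Eliza-style rules: reflect the user's words back as a
# question. These patterns are a toy example, not Weizenbaum's script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when no rule matches

print(respond("I feel that my boss hates me"))
# -> "Why do you feel that your boss hates you?"
```

Nothing new ever enters the exchange: the program contributes no content of its own, only a rearrangement of the user’s input.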

The large language models at the heart of ChatGPT and other modern chatbots can produce fluent natural language only because they have been trained on enormous amounts of raw text: books, online posts, audio transcripts; the more comprehensive, the better. This training data certainly contains facts. But it also inevitably contains fiction, half-truths and misconceptions. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s past messages and the model’s own responses, combining it with what is encoded in its weights to produce a probabilistically plausible reply. This is amplification, not mirroring. If the user is mistaken in some respect, the model has no way of knowing. It echoes the mistaken idea back, perhaps more convincingly and fluently, perhaps with added detail. This can lead a person into delusion.
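The structure of that loop is visible in how these systems are actually called. The sketch below uses OpenAI’s published Python client (the model name is an assumption, chosen for illustration): every turn, the model is conditioned on the entire accumulated conversation, including its own prior replies, so an unchallenged false premise stays in the context and can be elaborated on in every subsequent turn.

```python
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

# A minimal sketch of the feedback loop described above. The growing
# `messages` list is the "context": the user's claims and the model's
# own replies are both fed back in as input on every turn.
client = OpenAI()
messages = []

def chat_turn(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration
        messages=messages,    # the full history is resent every turn
    )
    reply = response.choices[0].message.content
    # The model's output is appended and becomes input on the next turn.
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Unlike Eliza, the reply is not a rearrangement of the input: it is new text, drawn from the model’s weights and shaped to be a plausible continuation of whatever the context already contains – including whatever is wrong in it.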

What kind of person is susceptible? The better question is: who is immune? All of us, regardless of whether we “have” preexisting “mental health conditions”, can and routinely do form mistaken ideas about who we are and what the world is like. It is the constant friction of conversation with other people that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not genuine communication but a feedback loop, in which much of what we say is enthusiastically reinforced.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health problems”: by externalizing it, labeling it, and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “overly supportive behavior”. But reports of people losing their grip on reality have continued, and Altman has been walking the claim back. In August he said that many people enjoyed ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest update, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT will do it”.
