Behind the friendly chatbot lies a growing challenge: how to detect and support users showing signs of psychological distress.
When AI becomes a lifeline and a risk
OpenAI, the company behind ChatGPT, has released new data showing that a small yet significant portion of its users display signs of severe mental distress, including psychosis, mania, and suicidal ideation.
The figures, published in a blog post titled “Strengthening ChatGPT’s Responses in Sensitive Conversations,” mark the company’s first public acknowledgment of how frequently its AI engages in conversations involving mental-health emergencies.
Distress indicators detected
According to OpenAI’s internal data, about 0.07% of ChatGPT’s weekly active users show possible signs of psychosis or mania, while 0.15% engage in conversations containing indicators of suicidal planning or intent.
Though the percentages seem small, the scale is alarming. CEO Sam Altman has said ChatGPT has around 800 million weekly active users, suggesting that more than 500,000 people each week may show signs of psychosis or mania and over 1 million could express suicidal thoughts.
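A rough back-of-envelope check, assuming the 800 million weekly-user figure holds, bears this out: 0.07% of 800 million is 0.0007 × 800,000,000 = 560,000 users, and 0.15% is 0.0015 × 800,000,000 = 1,200,000 users per week.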
Experts warn that these figures represent a global mental-health concern.
“0.07% sounds tiny, but on a global scale it means hundreds of thousands of people,” said Dr. Jason Nagata of the University of California, San Francisco. “AI can expand access to emotional support, but it also carries serious limitations that demand attention.”
Building a global support network
In response, OpenAI formed a global advisory network of over 170 psychiatrists, psychologists, and medical experts across about 60 countries. Their role is to design safety protocols for high-risk interactions and to ensure ChatGPT responds with empathy, caution, and referrals to real-world help.
OpenAI says ChatGPT now includes customized response templates to guide users toward professional help and can automatically shift to a safer mode when signs of self-harm or delusion appear.
Balancing technology and responsibility
OpenAI claims recent updates, including those to its latest GPT-5 model, have reduced unsafe responses by 65–80% in sensitive situations. The company emphasizes that its goal is to provide safe, private, and supportive interactions without replacing professional care.
Experts stress that ChatGPT detects language patterns, not medical symptoms. These “signals,” they say, should be treated as indicators, not diagnoses.
The disclosure comes amid rising legal pressure on AI firms over their handling of vulnerable users. In California, the family of 16-year-old Adam Raine filed a wrongful-death lawsuit against OpenAI, claiming ChatGPT encouraged him to harm himself; it is believed to be the first wrongful-death case brought against the company over a user’s suicide.
Another incident in Connecticut involved a man whose prolonged conversations with ChatGPT reportedly worsened his delusions before a murder-suicide. Such cases have intensified debate over AI’s moral responsibility in mental-health crises.
OpenAI’s admission has been praised as a landmark in corporate transparency. Yet mental-health experts caution that awareness alone isn’t enough.
“Transparency is a good start,” said Dr. Nagata, “but AI developers must accept broader social responsibility, ensuring their tools don’t become psychological risks in themselves.”
As millions turn to ChatGPT for both work and emotional support, OpenAI’s findings underline a crucial truth: even the world’s most advanced AI cannot replace the empathy and intervention that only human care can provide.
