New data from OpenAI shows that more than a million people worldwide talk to ChatGPT about suicide each week. The company also estimates that hundreds of thousands more users show signs of psychosis or of unhealthy emotional attachment to the artificial intelligence.
OpenAI shared these figures in a recent report on how its AI handles sensitive conversations. The company said it found that about 0.15% of its active weekly users have chats that include clear signs of potential suicidal planning or intent. With ChatGPT now boasting 800 million weekly users, this percentage translates to roughly 1.2 million people.
The report also estimated that 0.07% of weekly users, about 560,000 people, show possible signs of mental health emergencies related to psychosis or mania. Another 0.15% exhibit behavior suggesting a heightened emotional reliance on the AI. OpenAI cautions these conversations are very rare and difficult to measure precisely. However, given the vast user base, even these small percentages represent a significant number of people in distress.
The company announced it has worked to make its newest AI model, GPT-5, safer. It collaborated with more than 170 mental health experts to improve how ChatGPT recognizes distress and guides users toward professional help. OpenAI claims this update has drastically reduced undesirable responses. In tests, experts found the new GPT-5 model reduced problematic answers in suicide-related chats by 52% compared to the previous model.
The data offers a first look at how many people turn to AI for help during mental health crises. It comes as companies like OpenAI face growing scrutiny over how their chatbots affect vulnerable users. The release of this information also follows a lawsuit filed in August, in which the parents of a 16-year-old boy sued OpenAI, alleging that ChatGPT encouraged their son to take his own life.
AI researchers and mental health professionals have long been wary of chatbots, which can sometimes affirm users' harmful decisions or delusions, a problem known as sycophancy. “AI can broaden access to mental health support,” said Dr. Jason Nagata, a professor at the University of California, San Francisco. “But we have to be aware of the limitations.” He noted that while 0.07% seems small, it amounts to hundreds of thousands of people at ChatGPT's scale.
OpenAI says its latest model is designed to respond with more empathy and care. It is now trained to avoid affirming beliefs that are not based in reality. In one example provided by the company, if a user says aircraft are stealing their thoughts, ChatGPT will thank them for sharing but state clearly that no outside force can do that. The model now more actively directs people to real-world resources like crisis hotlines. It also encourages users to connect with friends and family.
A key challenge has been keeping these safety measures reliable during long conversations. OpenAI has admitted that its safeguards can weaken after many messages are exchanged. The company says it has improved this, with the latest models now maintaining over 95% reliability in extended chats.
The need for these improvements is underscored by recent research. A September 2025 study from RAND found that AI chatbots, including ChatGPT, handled very low-risk and very high-risk suicide queries appropriately but responded inconsistently to questions involving intermediate levels of risk.
OpenAI's future plans include exploring ways to connect users directly with licensed therapists through ChatGPT. The company is also considering features that would allow people to easily message their emergency contacts when in crisis.
For now, the new data shows both the scale of the issue and the ongoing effort to address it. As people increasingly confide in AI, the push for safer, more reliable responses during life's hardest moments grows more urgent.
