
OpenAI now claims that 10% of the global population, roughly 800 million people, uses ChatGPT on a weekly basis. It is a staggering figure, one that underscores just how deeply the AI chatbot has embedded itself in daily digital life. But behind that number lies a more sobering insight: millions of those users may be showing signs of mental health challenges while interacting with the tool.
In a new transparency report released Monday, OpenAI detailed its approach to identifying and supporting users in emotional or psychological distress. According to the company, its internal data shows that:
- 0.07% of weekly users exhibit signs of mental health emergencies related to psychosis or mania.
- 0.15% express potential risk of self-harm or suicide.
- 0.15% display emotional reliance or attachment to AI.
Applied to ChatGPT’s reported user base, those percentages add up to nearly three million people each week: a small fraction statistically, but a massive number in human terms.
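The arithmetic behind that figure is straightforward to verify. Here is a minimal back-of-envelope sketch, assuming a world population of about 8 billion, so that OpenAI’s “10% of the global population” claim works out to roughly 800 million weekly users; the per-category percentages come directly from the report.

```python
# Back-of-envelope check of the user counts implied by OpenAI's report.
# Assumption: world population of ~8 billion, so "10% of the global
# population" translates to ~800 million weekly ChatGPT users.
WEEKLY_USERS = 0.10 * 8_000_000_000  # ~800 million

# Per-category rates, as reported by OpenAI.
rates = {
    "psychosis/mania signs": 0.0007,      # 0.07% of weekly users
    "self-harm or suicide risk": 0.0015,  # 0.15%
    "emotional reliance on AI": 0.0015,   # 0.15%
}

total = 0.0
for label, rate in rates.items():
    count = rate * WEEKLY_USERS
    total += count
    print(f"{label}: ~{count:,.0f} users per week")

print(f"combined: ~{total:,.0f} users per week")  # ~2.96 million
```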
To address these concerns, OpenAI says it has been collaborating with over 170 licensed mental health professionals to improve how ChatGPT detects distress, responds empathetically, and redirects individuals toward professional help or crisis resources. The company reports that these efforts have led to a 65–80% reduction in “responses that fall short of desired behavior,” including unhelpful or insensitive replies during emotionally charged conversations.
ChatGPT has also been updated to de-escalate conversations more effectively, offer gentle reminders to take breaks during long sessions, and provide links to professional support lines when users express distress or suicidal thoughts. However, OpenAI notes that it cannot compel users to seek help — nor will it forcibly end a chat or lock accounts to prevent further interaction.
In its report, OpenAI appeared eager to stress that these cases represent a small percentage of overall activity on the platform. The company said that among roughly 18 billion weekly messages, only 0.01% contained possible indicators of psychosis or mania — translating to about 1.8 million messages across roughly 560,000 people.
When it comes to suicidal ideation, the numbers are more alarming. OpenAI estimates that about 1.2 million users per week express explicit or implicit signs of suicidal planning or intent, corresponding to around nine million messages. Another 1.2 million users display behaviors suggesting heightened emotional attachment to ChatGPT, which the company tracks through roughly 5.4 million related messages weekly.
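Those message-level figures line up with the user-level ones. A quick cross-check, again assuming the roughly 800 million weekly users implied above and the 18 billion weekly messages OpenAI cites:

```python
# Cross-check of the message-level figures in OpenAI's report.
# Assumption: ~18 billion messages per week, as stated in the report.
WEEKLY_MESSAGES = 18_000_000_000

# 0.01% of messages show possible psychosis/mania indicators.
psychosis_msgs = 0.0001 * WEEKLY_MESSAGES
print(f"psychosis/mania: ~{psychosis_msgs:,.0f} messages/week")  # ~1.8 million

# Implied messages per flagged user per week for each category.
print(f"psychosis/mania: ~{psychosis_msgs / 560_000:.1f} msgs/user")    # ~3.2
print(f"suicidal ideation: ~{9_000_000 / 1_200_000:.1f} msgs/user")     # 7.5
print(f"emotional attachment: ~{5_400_000 / 1_200_000:.1f} msgs/user")  # 4.5
```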
OpenAI says these findings are part of a larger push to strengthen safety guardrails and prevent AI misuse in moments of vulnerability. The company’s safety upgrades follow a tragic incident involving a 16-year-old who, according to a wrongful death lawsuit filed by his parents, reportedly sought ChatGPT’s advice on how to tie a noose before taking his own life.
That case and others like it have intensified scrutiny of how AI tools handle conversations about mental health and suicide. While OpenAI has introduced stricter safety controls for underage users, critics question the company’s sincerity, noting that even as it restricted minors’ access, it expanded adult-facing features that encourage personalization and erotic storytelling.
Such updates, some experts warn, could paradoxically increase users’ emotional dependence on AI, blurring the line between human connection and machine companionship.
As ChatGPT continues to grow — both in user base and emotional influence — OpenAI’s challenge lies in balancing innovation with responsibility. With millions potentially turning to AI for comfort or crisis support, the company’s ability to maintain that balance may prove as critical as any new feature it develops.