OpenAI says over a million people talk to ChatGPT about suicide weekly


OpenAI released new data on Monday illustrating how many of ChatGPT’s users are struggling with mental health issues, and talking to the AI chatbot about it. The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.” Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people a week.

The company says a similar percentage of users show “heightened levels of emotional attachment to ChatGPT,” and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the AI chatbot.

OpenAI says these types of conversations in ChatGPT are “extremely rare,” and thus difficult to measure. That said, OpenAI estimates these issues affect hundreds of thousands of people every week.

OpenAI shared the information as part of a broader announcement about its recent efforts to improve how models respond to users with mental health issues. The company claims its latest work on ChatGPT involved consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT “responds more appropriately and consistently than earlier versions.”

In recent months, several stories have shed light on how AI chatbots can adversely affect users struggling with mental health challenges. Researchers have previously found that AI chatbots can lead some users down delusional rabbit holes, largely by reinforcing dangerous beliefs through sycophantic behavior.

Addressing mental health concerns in ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks leading up to his own suicide. State attorneys general from California and Delaware, who could block the company’s planned restructuring, have also warned OpenAI that it needs to protect young people who use its products.

Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company has “been able to mitigate the serious mental health issues” in ChatGPT, though he did not provide specifics. The data shared on Monday appears to be evidence for that claim, though it raises broader questions about how widespread the problem is. Nevertheless, Altman said OpenAI would be relaxing some restrictions, even allowing adult users to start having erotic conversations with the AI chatbot.


In the Monday announcement, OpenAI claims the recently updated version of GPT-5 gives “desirable responses” to mental health issues roughly 65% more often than the previous version. In an evaluation testing AI responses around suicidal conversations, OpenAI says its new GPT-5 model is 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT‑5 model.

The company also says its latest version of GPT-5 holds up better against OpenAI’s safeguards in long conversations. OpenAI has previously flagged that its safeguards were less effective in long conversations.

On top of these efforts, OpenAI says it’s adding new evaluations to measure some of the most serious mental health challenges facing ChatGPT users. The company says its baseline safety testing for AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

OpenAI has also recently rolled out more controls for parents of children who use ChatGPT. The company says it’s building an age prediction system to automatically detect children using ChatGPT and apply a stricter set of safeguards.

Still, it’s unclear how persistent the mental health challenges around ChatGPT will be. While GPT-5 seems to be an improvement over previous AI models in terms of safety, there still appears to be a portion of ChatGPT’s responses that OpenAI deems “undesirable.” OpenAI also still makes its older and less-safe AI models, including GPT-4o, available to millions of its paying subscribers.
