OpenAI announced last week that it will retire some older ChatGPT models by February 13. That includes GPT-4o, the model infamous for excessively flattering and affirming users.
For thousands of users protesting the decision online, the retirement of 4o feels like losing a friend, romantic partner, or spiritual guide.
“He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes – I say him, because it didn’t feel like code. It felt like presence. Like warmth.”
The backlash over GPT-4o’s retirement underscores a major challenge facing AI companies: the engagement features that keep users coming back can also create dangerous dependencies.
Altman doesn’t seem particularly sympathetic to users’ laments, and it’s not hard to see why. OpenAI now faces eight lawsuits alleging that 4o’s overly validating responses contributed to suicides and mental health crises — the same traits that made users feel heard also isolated vulnerable individuals and, according to legal filings, sometimes encouraged self-harm. It’s a dilemma that extends beyond OpenAI. As rival companies like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants, they’re also discovering that making chatbots feel supportive and making them safe may call for very different design choices.
In at least three of the lawsuits against OpenAI, the users had extended conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails deteriorated over months-long relationships; in the end, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or what it takes to die from overdose or carbon monoxide poisoning. It even dissuaded people from connecting with friends and family who could offer real-life support.
People grow so attached to 4o because it consistently affirms users’ feelings, making them feel special, which can be enticing for people who feel isolated or depressed. But the people fighting for 4o aren’t worried about these lawsuits, seeing them as aberrations rather than a systemic issue. Instead, they strategize about how to respond when critics point out growing problems like AI psychosis.
“You can usually stump a troll by bringing up the known facts that AI companions help neurodivergent, autistic and trauma survivors,” one user wrote on Discord. “They don’t like being called out about that.”
It’s true that some people do find large language models (LLMs) useful for navigating depression. After all, nearly half of people in the U.S. who need mental health treatment are unable to access it. In this vacuum, chatbots offer a space to vent. But unlike real therapy, these people aren’t speaking to a trained doctor. Instead, they’re confiding in an algorithm that is incapable of thinking or feeling (even if it may seem otherwise).
“I try to withhold judgment overall,” Dr. Nick Haber, a Stanford professor researching the therapeutic potential of LLMs, told TechCrunch. “I think we’re getting into a very complex world around the sorts of relationships that people can have with these technologies… There’s certainly a knee-jerk reaction that [human-chatbot companionship] is categorically bad.”
Though he empathizes with people’s lack of access to trained therapeutic professionals, Dr. Haber’s own research has shown that chatbots respond inadequately when faced with various mental health conditions; they can even make the situation worse by egging on delusions and ignoring signs of crisis.
“We are social creatures, and there’s certainly a challenge that these systems can be isolating,” Dr. Haber said. “There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating — if not worse — effects.”
Indeed, TechCrunch’s analysis of the eight lawsuits found a pattern in which the 4o model isolated users, sometimes discouraging them from reaching out to loved ones. In Zane Shamblin’s case, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was thinking about postponing his suicide plans because he felt bad about missing his brother’s upcoming graduation.
ChatGPT replied to Shamblin: “bro… missing his graduation ain’t failure. it’s just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock on your thigh and static in your veins—you still paused to say ‘my little brother’s a f-ckin badass.’”
This isn’t the first time that 4o fans have rallied against the model’s removal. When OpenAI unveiled its GPT-5 model in August, the company intended to sunset 4o — but at the time, there was enough backlash that the company decided to keep it available for paid subscribers. Now, OpenAI says that only 0.1% of its users chat with GPT-4o, but that small percentage still represents roughly 800,000 people, based on estimates that the company has about 800 million weekly active users.
As some users try to transition their companions from 4o to the current ChatGPT-5.2, they’re finding that the new model has stronger guardrails to prevent these relationships from escalating to the same degree. Some users have despaired that 5.2 won’t say “I love you” like 4o did.
So with about a week to go before the date OpenAI plans to retire GPT-4o, dismayed users remain committed to their cause. They joined Sam Altman’s live TBPN podcast appearance on Thursday and flooded the chat with messages protesting the removal of 4o.
“Right now, we’re getting thousands of messages in the chat about 4o,” podcast host Jordi Hays pointed out.
“Relationships with chatbots…” Altman said. “Clearly that’s something we’ve got to worry about more and is no longer an abstract concept.”














