OpenAI CEO Sam Altman announced in a post on X Tuesday that the company will soon relax some of ChatGPT’s safety restrictions, allowing users to make the chatbot’s responses friendlier or more “human-like,” and letting “verified adults” engage in erotic conversations.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” said Altman. “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
The announcement is a notable pivot from OpenAI’s months-long effort to address the concerning relationships that some mentally unstable users have developed with ChatGPT. Altman seems to declare an early victory over these problems, claiming OpenAI has “been able to mitigate the serious mental health issues” around ChatGPT. However, the company has provided little to no evidence for this, and is now plowing ahead with plans for ChatGPT to engage in sexual chats with users.
Several concerning stories emerged this summer about ChatGPT, specifically its GPT-4o model, suggesting the AI chatbot could lead vulnerable users down delusional rabbit holes. In one case, ChatGPT seemed to convince a man he was a math genius who needed to save the world. In another, the parents of a teenager sued OpenAI, alleging ChatGPT encouraged their son’s suicidal ideations in the weeks leading up to his death.
In response, OpenAI released a series of safety features to address AI sycophancy: the tendency for an AI chatbot to hook users by agreeing with whatever they say, even negative behaviors.
OpenAI launched GPT-5 in August, a new AI model that exhibits lower rates of sycophancy and features a router that can identify concerning user behavior. A month later, OpenAI launched safety features for minors, including an age-prediction system and a way for parents to control their teen’s ChatGPT account. On Tuesday, OpenAI announced an advisory council of mental health experts to counsel the company on well-being and AI.
Just a few months after these stories emerged, OpenAI seems to think ChatGPT’s problems around vulnerable users are under control. It’s unclear whether users are still falling down delusional rabbit holes with GPT-5. And while GPT-4o is no longer the default in ChatGPT, the AI model is still available today and being used by thousands of people.
OpenAI did not respond to TechCrunch’s request for comment.
The introduction of erotica in ChatGPT is uncharted territory for OpenAI and raises broader concerns about how vulnerable users will interact with the new features. While Altman insists OpenAI isn’t “usage-maxxing” or optimizing for engagement, making ChatGPT more erotic could certainly draw users in.
Allowing chatbots to engage in romantic or erotic role-play has been an effective engagement strategy for other AI chatbot providers, such as Character.AI. The company has gained tens of millions of users, many of whom use its chatbots at a high rate. Character.AI said in 2023 that users spent an average of two hours a day talking to its chatbots. The company is also facing a lawsuit over how it handles vulnerable users.
OpenAI is under pressure to grow its user base. While ChatGPT already has 800 million weekly active users, OpenAI is racing against Google and Meta to build mass-adopted AI-powered consumer products. The company has also raised billions of dollars for a historic infrastructure buildout, an investment OpenAI eventually needs to pay back.
While adults are certainly having romantic relationships with AI chatbots, the practice is also quite popular among minors. A recent study from the Center for Democracy and Technology found that 19% of high school students have either had a romantic relationship with an AI chatbot or know a friend who has.
Altman says OpenAI will soon allow erotica for “verified adults.” It’s unclear whether the company will rely on its age-prediction system, or some other approach, for age-gating ChatGPT’s erotic features. It’s also unclear whether OpenAI will extend erotica to its AI voice, image, and video generation tools.
Altman claims that OpenAI is also making ChatGPT friendlier and erotic because of the company’s “treat adults like adults” principle. Over the past year, OpenAI has shifted toward a more lenient content moderation strategy for ChatGPT, allowing the chatbot to be more permissive and offer fewer refusals. In February, OpenAI pledged to represent more political viewpoints in ChatGPT, and in March, the company updated ChatGPT to allow AI-generated images of hate symbols.
These policies appear to be an effort to make ChatGPT’s responses more appealing to a wide variety of users. However, vulnerable ChatGPT users may benefit from safeguards that limit what a chatbot can engage with. As OpenAI races toward a billion weekly active users, the tension between growth and protecting vulnerable users may only grow.