OpenAI began testing a new safety routing system in ChatGPT over the weekend, and on Monday introduced parental controls to the chatbot – drawing mixed reactions from users.
The safety features come in response to numerous incidents of certain ChatGPT models validating users’ delusional thinking instead of redirecting harmful conversations. OpenAI is facing a wrongful death lawsuit tied to one such incident, after a teenage boy died by suicide following months of interactions with ChatGPT.
The routing system is designed to detect emotionally sensitive conversations and automatically switch mid-chat to GPT-5-thinking, which the company sees as the model best equipped for high-stakes safety work. In particular, the GPT-5 models were trained with a new safety feature that OpenAI calls “safe completions,” which allows them to answer sensitive questions in a safe way, rather than simply refusing to engage.
It’s a departure from the company’s previous chat models, which are designed to be agreeable and answer questions quickly. GPT-4o has come under particular scrutiny because of its overly sycophantic, agreeable nature, which has both fueled incidents of AI-induced delusions and drawn a large base of devoted users. When OpenAI rolled out GPT-5 as the default in August, many users pushed back and demanded access to GPT-4o.
While many experts and users have welcomed the safety features, others have criticized what they see as an overly cautious implementation, with some users accusing OpenAI of treating adults like children in a way that degrades the quality of the service. OpenAI has suggested that getting it right will take time and has given itself a 120-day period of iteration and improvement.
Nick Turley, VP and head of the ChatGPT app, addressed some of the “strong reactions to 4o responses” following the implementation of the router, offering explanations.
“Routing happens on a per-message basis; switching from the default model happens on a temporary basis,” Turley posted on X. “ChatGPT will tell you which model is active when asked. This is part of a broader effort to strengthen safeguards and learn from real-world use before a wider rollout.”
The implementation of parental controls in ChatGPT received similar levels of praise and scorn, with some commending OpenAI for giving parents a way to keep tabs on their children’s AI use, and others fearful that it opens the door to OpenAI treating adults like children.
The controls let parents customize their teen’s experience by setting quiet hours, turning off voice mode and memory, removing image generation, and opting out of model training. Teen accounts will also get additional content protections – like reduced graphic content and extreme beauty ideals – and a detection system that recognizes potential signs that a teen might be thinking about self-harm.
“If our systems detect potential harm, a small team of specially trained people reviews the situation,” per OpenAI’s blog. “If there are signs of acute distress, we will contact parents by email, text message, and push alert on their phone, unless they have opted out.”
OpenAI acknowledged that the system won’t be perfect and may sometimes raise alarms when there isn’t real danger, “but we think it’s better to act and alert a parent so they can step in than to stay silent.” The AI firm said it is also working on ways to reach law enforcement or emergency services if it detects an imminent threat to life and cannot reach a parent.