Character AI is ending its chatbot experience for kids


Teenagers are trying to figure out where they fit in a world changing faster than any generation before them. They're bursting with emotions, hyper-stimulated, and chronically online. And now, AI companies have given them chatbots designed to never stop talking. The results have been catastrophic.

One company that understands this fallout is Character.AI, an AI role-playing startup that's facing lawsuits and public outcry after at least two teenagers died by suicide following prolonged conversations with AI chatbots on its platform. Now, Character.AI is making changes to its platform to protect teenagers and kids, changes that could affect the startup's bottom line.

"The first thing that we've decided as Character.AI is that we will remove the ability for under-18 users to engage in any open-ended chats with AI on our platform," Karandeep Anand, CEO of Character.AI, told TechCrunch.

Open-ended conversation refers to the unconstrained back-and-forth that happens when users give a chatbot a prompt and it responds with follow-up questions that experts say are designed to keep users engaged. Anand argues this kind of interaction, where the AI acts as a conversational partner or friend rather than a creative tool, isn't just risky for kids, but misaligned with the company's vision.

The startup is attempting to pivot from "AI companion" to "role-playing platform." Instead of chatting with an AI friend, teens will use prompts to collaboratively build stories or generate visuals. In other words, the goal is to shift engagement from conversation to creation.

Character.AI will phase out teen chatbot access by November 25, starting with a two-hour daily limit that shrinks progressively until it hits zero. To ensure the ban sticks for under-18 users, the platform will deploy an in-house age verification tool that analyzes user behavior, as well as third-party tools like Persona. If those tools fail, Character.AI will use facial recognition and ID checks to verify ages, Anand said.

The decision follows other teen protections that Character.AI has implemented, including a parental insights tool, filtered characters, limited romantic conversations, and time-spent notifications. Anand has told TechCrunch that those changes cost the company much of its under-18 user base, and he expects these new changes to be just as unpopular.


"It's safe to assume that a lot of our teen users probably will be disappointed… so we do expect some churn to happen further," Anand said. "It's hard to speculate: will all of them fully churn, or will some of them move to these new experiences we've been building for the last almost seven months now?"

As part of Character.AI's push to transform the platform from a chat-centric app into a "full-fledged content-driven social platform," the startup recently launched several new entertainment-focused features.

In June, Character.AI rolled out AvatarFX, a video generation model that transforms images into animated videos; Scenes, interactive, pre-populated storylines where users can step into narratives with their favorite characters; and Streams, a feature that allows dynamic interactions between any two characters. In August, Character.AI launched Community Feed, a social feed where users can share their characters, scenes, videos, and other content they make on the platform.

In a message addressed to users under 18, Character.AI apologized for the changes.

"We know that most of you use Character.AI to supercharge your creativity in ways that stay within the bounds of our content rules," the message reads. "We do not take this step of removing open-ended Character chat lightly, but we do think it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology."

"We're not shutting down the app for under-18s," Anand said. "We are only shutting down open-ended chats for under-18s because we hope that under-18 users migrate to these other experiences, and that those experiences get better over time. So doubling down on AI gaming, AI short videos, AI storytelling in general. That's the big bet we're making to bring back under-18s if they do churn."

Anand acknowledged that some teens might flock to other AI platforms, like OpenAI's, that let them have open-ended conversations with chatbots. OpenAI has also come under fire recently after a teenager took his own life following long conversations with ChatGPT.

"I really hope us leading the way sets a standard in the industry that for under-18s, open-ended chats are probably not the path or the product to offer," Anand said. "For us, I think the tradeoffs are the right ones to make. I have a six-year-old, and I want to make sure she grows up in a very safe environment with AI in a responsible way."

Character.AI is making these decisions before regulators force its hand. On Tuesday, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) said they would introduce legislation to ban AI chatbot companions from being available to minors, following complaints from parents who said the products pushed their children into sexual conversations, self-harm, and suicide. Earlier this month, California became the first state to regulate AI companion chatbots by holding companies accountable if their chatbots fail to meet the law's safety standards.

In addition to those changes on the platform, Character.AI said it would establish and fund the AI Safety Lab, an independent nonprofit dedicated to innovating safety alignment for future AI entertainment features.

"A lot of work is happening in the industry on coding and development and other use cases," Anand said. "We don't think there's enough work yet happening on the agentic AI powering entertainment, and safety will be very critical to that."
