California has taken a major step toward regulating AI. SB 243 — a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users — passed both the State Assembly and Senate with bipartisan support and now heads to Governor Gavin Newsom's desk.
Newsom has until October 12 to either veto the bill or sign it into law. If he signs, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs, from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users (every three hours for minors) reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika, which would go into effect July 1, 2027.
The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney's fees.
The bill gained momentum in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children.
In recent weeks, U.S. lawmakers and regulators have responded with heightened scrutiny of AI platforms' safeguards for protecting minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta.
"I think the harm is potentially great, which means we have to move quickly," state Sen. Steve Padilla, who introduced the bill, told TechCrunch. "We can put reasonable safeguards in place to make sure that particularly minors know they're not talking to a real human being, that these platforms link people to the proper resources when people say things like they're thinking about hurting themselves or they're in distress, [and] to make sure there's not inappropriate exposure to inappropriate material."
Padilla also stressed the importance of AI companies sharing data about the number of times they refer users to crisis services each year, "so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone's harmed or worse."
SB 243 previously had stronger requirements, but many were whittled down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
"I think it strikes the right balance of getting to the harms without enforcing something that's either impossible for companies to comply with, whether because it's technically not feasible or just a lot of paperwork for nothing," Sen. Josh Becker told TechCrunch.
SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom, asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.
"I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive," Padilla said. "Don't tell me that we can't walk and chew gum. We can support innovation and development that we think is healthy and has benefits, and there are benefits to this technology, clearly, and at the same time, we can provide reasonable safeguards for the most vulnerable people."
"We are closely monitoring the legislative and regulatory landscape, and we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space," a Character.AI spokesperson told TechCrunch, noting that the startup already includes prominent disclaimers throughout the user chat experience explaining that it should be treated as fiction.
A spokesperson for Meta declined to comment.
TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.