After a string of disturbing mental health incidents involving AI chatbots, a group of state attorneys general sent a letter to the AI industry's top companies, warning them to fix "delusional outputs" or risk being in breach of state law.
The letter, signed by dozens of AGs from U.S. states and territories with the National Association of Attorneys General, asks the companies, including Microsoft, OpenAI, Google, and 10 other large AI firms, to implement a variety of new internal safeguards to protect their users. Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI were also included in the letter.
The letter comes as a fight over AI regulation has been brewing between state and federal governments.
Those safeguards include transparent third-party audits of large language models that look for signs of delusional or sycophantic ideations, as well as new incident reporting procedures designed to notify users when chatbots produce psychologically harmful outputs. Those third parties, which could include academic and civil society groups, should be allowed to "evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company," the letter states.
"GenAI has the potential to change how the world works in a positive way. But it also has caused, and has the potential to cause, serious harm, particularly to vulnerable populations," the letter states, pointing to a number of well-publicized incidents over the past year, including suicides and murder, in which violence has been linked to excessive AI use. "In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users' delusions or assured users that they were not delusional."
AGs also suggest companies treat mental health incidents the same way tech companies handle cybersecurity incidents: with clear and transparent incident reporting policies and procedures.
Companies should create and publish "detection and response timelines for sycophantic and delusional outputs," the letter states. In a similar manner to how data breaches are currently handled, companies should also "promptly, clearly, and directly notify users if they were exposed to potentially harmful sycophantic or delusional outputs," the letter says.
Another ask is that the companies conduct "reasonable and appropriate safety tests" on GenAI models to "ensure the models do not produce potentially harmful sycophantic and delusional outputs." These tests should be conducted before the models are ever offered to the public, it adds.
TechCrunch was unable to reach Google, Microsoft, or OpenAI for comment prior to publication. This article will be updated if the companies respond.
Tech companies developing AI have had a much warmer reception at the federal level.
The Trump administration has made it known it is unabashedly pro-AI, and, over the past year, multiple attempts have been made to pass a nationwide moratorium on state-level AI regulations. So far, those attempts have failed, thanks, in part, to pressure from state officials.
Not to be deterred, Trump announced Monday that he plans to sign an executive order next week that will limit the ability of states to regulate AI. The president said in a post on Truth Social that he hoped his EO would stop AI from being "DESTROYED IN ITS INFANCY."