After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he’d discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.
Now the ex-girlfriend is suing OpenAI, alleging the company’s technology enabled the escalation of her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying his account activity as involving mass casualty weapons.
The plaintiff, referred to as Jane Doe, is suing for punitive damages. She also filed a temporary restraining order Friday asking the court to force OpenAI to block the user’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery.
OpenAI has agreed to suspend the user’s account but has refused the rest, according to Doe’s lawyers. They say the company is withholding information about specific plans the user may have discussed with ChatGPT for harming Doe and other potential victims.
The lawsuit lands amid growing concern over the real-world risks of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was removed from ChatGPT in February.
The case is brought by Edelson PC, the firm behind the wrongful death suits involving teen Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google’s Gemini fueled his delusions and a potential mass casualty event before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass casualty events.
That legal pressure is now colliding directly with OpenAI’s legislative strategy: the company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm.
OpenAI did not respond in time for comment. TechCrunch will update the article if the company responds.
The Jane Doe lawsuit lays out in detail how that liability played out for one woman over several months.
Last year, the ChatGPT user in the lawsuit (whose name is not included in the suit to protect his identity) became convinced that he had invented a cure for sleep apnea after months of “high volume, sustained use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, including using helicopters to surveil his activities, according to the complaint.
In July 2025, the user’s ex-girlfriend, referred to as Jane Doe to protect her identity, urged him to stop using ChatGPT and to seek help from a mental health professional. He instead turned back to ChatGPT, which assured him he was “a level 10 in sanity” and helped him double down on his delusions, per the lawsuit.
Doe had broken up with the user in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.
Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated safety system flagged him for “Mass Casualty Weapons” activity and deactivated his account.
A human safety team member reviewed the account the next day and restored it, even though his account may have contained evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of conversation titles including “violence database expansion” and “fetal suffocation calculation.”
The decision to reinstate is notable following two recent school shootings in Tumbler Ridge, Canada and at Florida State University. OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. Florida’s attorney general this week opened an investigation into OpenAI’s possible link with the FSU shooter.
According to the Jane Doe lawsuit, when OpenAI restored her stalker’s account, his Pro subscription wasn’t reinstated alongside it. He emailed the trust and safety team to sort it out, copying Doe on the message.
In his emails, he wrote things like: “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed he was “in the process of writing 215 scientific papers,” which he was writing so fast he didn’t “even have time to read.” Included in those emails was a list of dozens of AI-generated ‘scientific papers’ with titles like: “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”
“The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct,” the lawsuit states. “The user’s stream of urgent, disorganized, and grandiose claims, along with an actual ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported ‘scientific’ materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access.”
Doe, who claims in the lawsuit that she was living in fear and could not sleep in her own home, submitted a Notice of Abuse to OpenAI in November.
“For the past 7 months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise,” Doe wrote in her letter to OpenAI requesting the company permanently ban the user’s account.
OpenAI responded, acknowledging the report was “extremely serious and troubling” and that it was carefully reviewing the situation. Doe never heard back.
Over the next couple of months, the user continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating weapons threats and assault with a deadly weapon. Doe’s lawyers allege this validates warnings both she and OpenAI’s own safety systems had raised months earlier, warnings the company allegedly chose to ignore.
The user was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released to the public, according to Doe’s lawyers.
Edelson called on OpenAI to cooperate. “In each case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger,” he said. “We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”