Image Credits: Jakub Porzycki/NurPhoto / Getty Images
7:08 AM PST · December 28, 2025
OpenAI is looking to hire a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health.
In a post on X, CEO Sam Altman acknowledged that AI models are “starting to present some real challenges,” including the “potential impact of models on mental health,” as well as models that are “so good at computer security they are beginning to find critical vulnerabilities.”
“If you want to help the world figure out how to empower cybersecurity defenders with cutting-edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” Altman wrote.
OpenAI’s listing for the Head of Preparedness role describes the job as one that’s responsible for executing the company’s preparedness framework, “our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.”
The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential “catastrophic risks,” whether they were more immediate, like phishing attacks, or more speculative, such as nuclear threats.
Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a job focused on AI reasoning. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and safety.
The company also recently updated its Preparedness Framework, stating that it might “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without similar protections.
As Altman alluded to in his post, generative AI chatbots have faced growing scrutiny over their impact on mental health. Recent lawsuits allege that OpenAI’s ChatGPT reinforced users’ delusions, increased their social isolation, and even led some to suicide. (The company said it continues working to improve ChatGPT’s ability to recognize signs of emotional distress and to connect users to real-world support.)
Anthony Ha is TechCrunch’s weekend editor. Previously, he worked as a tech reporter at Adweek, a senior editor at VentureBeat, a local government reporter at the Hollister Free Lance, and vice president of content at a VC firm. He lives in New York City.
You can contact or verify outreach from Anthony by emailing anthony.ha@techcrunch.com.