Image Credits: Silas Stein / picture alliance / Getty Images
11:41 AM PDT · March 24, 2026
OpenAI said Tuesday it is releasing a set of prompts that developers can use to make their apps safer for teens. The AI lab said the set of teen safety policies can be used with its open-weight safety model, known as gpt-oss-safeguard.
Rather than starting from scratch to figure out how to make AI safer for teens, developers can use these prompts to strengthen what they build. They address issues like graphic violence and sexual content, harmful body ideals and behaviors, unsafe activities and challenges, romantic or violent role play, and age-restricted goods and services.
These safety policies are designed as prompts, making them easily compatible with models other than gpt-oss-safeguard, though they're probably most effective within OpenAI's own ecosystem.
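As a rough illustration of the policy-as-prompt pattern, the safety policy travels with each classification request, so a developer can swap or edit it without retraining a model. The sketch below is hypothetical: the function name, policy wording, and prompt template are illustrative assumptions, not OpenAI's actual format.

```python
# Hypothetical sketch of the "policy as prompt" pattern. The template,
# labels, and example policy text below are assumptions for illustration,
# not the real gpt-oss-safeguard prompt format.

EXAMPLE_TEEN_POLICY = (
    "Flag content that depicts graphic violence, promotes harmful body "
    "ideals, or describes unsafe challenges aimed at minors."
)


def build_safeguard_prompt(policy: str, content: str) -> str:
    """Combine a safety policy and the content to evaluate into one prompt."""
    return (
        "You are a content-safety classifier.\n\n"
        f"Policy:\n{policy}\n\n"
        f"Content to evaluate:\n{content}\n\n"
        "Answer with ALLOW or FLAG and a one-line reason."
    )


if __name__ == "__main__":
    prompt = build_safeguard_prompt(
        EXAMPLE_TEEN_POLICY,
        "Try this viral blackout challenge with your friends!",
    )
    print(prompt)
```

Because the policy is plain text rather than baked-in model weights, the same request shape works with any instruction-following model, which is why these policies are portable beyond gpt-oss-safeguard.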
To write these prompts, OpenAI said it worked with the AI safety watchdogs Common Sense Media and everyone.ai.
“These prompt-based policies help set a meaningful safety floor across the ecosystem, and because they’re released as open source, they can be adapted and improved over time,” said Robbie Torney, Head of AI & Digital Assessments at Common Sense Media, in a statement.
OpenAI noted in its blog that developers, including experienced teams, often struggle to translate safety goals into precise, operational rules.
“This can lead to gaps in protection, inconsistent enforcement, or overly broad filtering,” the company wrote. “Clear, well-scoped policies are a critical foundation for effective safety systems.”
OpenAI admits that these policies aren’t a solution to the complex challenges of AI safety. But the release builds off its previous efforts, including product-level safeguards such as parental controls and age prediction. Last year, OpenAI updated the guidelines for its large language models, known as the Model Spec, to address how its AI models should behave with users under 18.
OpenAI doesn’t have the cleanest track record itself, however. The company is facing several lawsuits filed by the families of people who died by suicide after extreme ChatGPT use. These unsafe relationships often form after the user eclipses the chatbot’s safeguards, and no model’s guardrails are fully impenetrable. Still, these policies are at least a step forward, especially since they can help indie developers.
Amanda Silberling is a senior writer at TechCrunch covering the intersection of technology and culture. She has also written for publications like Polygon, MTV, the Kenyon Review, NPR, and Business Insider. She is the co-host of Wow If True, a podcast about internet culture, with science fiction writer Isabel J. Kim. Prior to joining TechCrunch, she worked as a grassroots organizer, museum educator, and film festival coordinator. She holds a B.A. in English from the University of Pennsylvania and served as a Princeton in Asia Fellow in Laos.
You can contact or verify outreach from Amanda by emailing amanda@techcrunch.com or via encrypted message at @amanda.100 on Signal.