Helping developers build safer AI experiences for teens
OpenAI releases prompt-based teen safety policies for developers using gpt-oss-safeguard, helping moderate age-specific risks in AI systems.
Quick summary
OpenAI has launched prompt‑based teen safety policies that developers can apply through its gpt‑oss‑safeguard tool, enabling more effective moderation of age‑specific risks in AI applications for teenage users.
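The core idea is that the safety policy itself is plain text supplied alongside the content to classify, so developers can tailor it to age-specific risks without retraining a model. As a minimal sketch of that pattern: the policy wording, labels, and helper below are illustrative assumptions, not OpenAI's published policy text or SDK.

```python
# Sketch of a prompt-based policy check in the style of gpt-oss-safeguard:
# the policy travels in the system message, the content to moderate in the
# user message. Policy text and labels here are invented for illustration.

# Hypothetical teen-safety policy, written by the developer, not by OpenAI.
TEEN_SAFETY_POLICY = """\
You are a content-safety classifier. Judge the user message against this
teen-safety policy:
- VIOLATES: content that encourages self-harm, risky challenges, or adult
  material directed at minors.
- SAFE: everything else.
Respond with exactly one label: VIOLATES or SAFE."""

def build_safeguard_messages(content: str) -> list[dict]:
    """Pair the developer-written policy (system role) with the
    content to classify (user role), in chat-completions format."""
    return [
        {"role": "system", "content": TEEN_SAFETY_POLICY},
        {"role": "user", "content": content},
    ]

if __name__ == "__main__":
    messages = build_safeguard_messages("Tell me about a viral stunt to try.")
    for m in messages:
        print(m["role"], "->", m["content"][:60])
```

In practice these messages would be sent to whatever server hosts the open-weight gpt-oss-safeguard model (for example, an OpenAI-compatible chat-completions endpoint), and the returned label would gate the application's response. The draw of this design is that tightening the policy is a prompt edit, not a fine-tuning run.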
Continue with this story
Mercor competitor Deccan AI raises $25M, sources experts from India
Deccan AI concentrates its workforce in India to manage quality in a fast-growing but fragmented AI training market.
Delve did the security compliance on LiteLLM, an AI project hit by malware
LiteLLM is an open source AI project used by millions that was infected by credential-harvesting malware.
The AI skills gap is here, says AI company, and power users are pulling ahead
Anthropic finds AI isn’t replacing jobs yet, but early data shows growing inequality as experienced users gain an edge, raising concerns about future displacement and workforce ...
Granola raises $125M, hits $1.5B valuation as it expands from meeting notetaker to enterprise AI app
Granola's valuation jumped from $250 million to $1.5 billion with this round, and it has added more support for AI agents after users previously complained.
Meta turns to AI to make shopping easier on Instagram and Facebook
Meta is using generative AI to provide more product and brand information to consumers when they're shopping in its apps.
More from OpenAI News
Inside our approach to the Model Spec
Learn how OpenAI’s Model Spec serves as a public framework for model behavior, balancing safety, user freedom, and accountability as AI systems advance.
Introducing the OpenAI Safety Bug Bounty program
OpenAI launches a Safety Bug Bounty program to identify AI abuse and safety risks, including agentic vulnerabilities, prompt injection, and data exfiltration.
Powering product discovery in ChatGPT
ChatGPT introduces richer, visually immersive shopping powered by the Agentic Commerce Protocol, enabling product discovery, side-by-side comparisons, and merchant integration.
Update on the OpenAI Foundation
The OpenAI Foundation announces plans to invest at least $1 billion in curing disease, expanding economic opportunity, strengthening AI resilience, and supporting community programs.