Inside our approach to the Model Spec
Learn how OpenAI’s Model Spec serves as a public framework for model behavior, balancing safety, user freedom, and accountability as AI systems advance.
Quick summary
OpenAI’s Model Spec is a public framework that defines intended model behavior. It aims to balance safety, user freedom, and accountability as AI systems evolve, and it outlines guidelines for responsible development while preserving user flexibility.
Continue with this story
Follow the same topic through connected articles, entity pages, and active story threads.
Model collapse is already happening
Disney cancels $1 billion OpenAI partnership amid Sora shutdown plans
Press reports suggest Disney was blindsided and that no money changed hands.
Introducing the OpenAI Safety Bug Bounty program
OpenAI launches a Safety Bug Bounty program to identify AI abuse and safety risks, including agentic vulnerabilities, prompt injection, and data exfiltration.
OpenAI’s Sora was the creepiest app on your phone — now it’s shutting down
Though the underlying Sora 2 video- and audio-generation model is scarily impressive, interest in an AI-only social feed did not last.
OpenAI announces plans to shut down its Sora video generator
Move comes amid a reported plan to refocus on business and productivity use cases.
Related articles
More stories that share tags, source, or category context.
Google launches Lyria 3 Pro music generation model
Google is launching Lyria 3 Pro, an upgraded music model that generates longer, more customizable tracks, as it expands AI music tools across Gemini, enterprise products, and ot...
More from OpenAI News
Fresh reporting and follow-up coverage from the same newsroom.
Helping developers build safer AI experiences for teens
OpenAI releases prompt-based teen safety policies for developers using gpt-oss-safeguard, helping moderate age-specific risks in AI systems.
Powering product discovery in ChatGPT
ChatGPT introduces richer, visually immersive shopping powered by the Agentic Commerce Protocol, enabling product discovery, side-by-side comparisons, and merchant integration.
Update on the OpenAI Foundation
The OpenAI Foundation announces plans to invest at least $1 billion in curing diseases, economic opportunity, AI resilience, and community programs.