OpenAI Releases GPT-5.4-Cyber for Vetted Security Teams, Scaling Trusted Access Program
April 15, 2026 - 4:22 pm
In short: OpenAI is releasing GPT-5.4-Cyber, a model fine-tuned for defensive cybersecurity with lowered refusal boundaries and binary reverse engineering capabilities, and expanding its Trusted Access for Cyber program to thousands of verified defenders. This move comes in response to Anthropic’s recent restrictions on its powerful Mythos model.
OpenAI Opens Up Its Most Capable Cybersecurity Model
OpenAI is opening its most advanced cybersecurity model to a broader audience: thousands of vetted defenders. The company is releasing GPT-5.4-Cyber, a variant of GPT-5.4 tailored for defensive security work.
Key Features:
- Lowered Refusal Boundaries: Unlike standard models that block sensitive queries about vulnerability research or malware behavior, GPT-5.4-Cyber is designed to answer these questions if the user is verified as a legitimate security professional.
- Binary Reverse Engineering Capabilities: The model can analyze compiled binaries for weaknesses, letting analysts examine software without access to its source code.
Expanding Trusted Access for Cyber Program
The model is integrated into OpenAI’s Trusted Access for Cyber (TAC) program, which was launched in February with a $10 million cybersecurity grant fund.
- Verification Tiers: TAC uses an identity-and-trust framework that gates access to more capable models based on verification levels.
- Updated Scaling: The latest update scales the program from a limited pilot to thousands of verified individual defenders and hundreds of teams responsible for defending critical software. New tiers with higher verification levels unlock powerful features, culminating in full access to GPT-5.4-Cyber for top-tier users.
- Zero-Data Retention Waiver: Top-tier users may be required to waive Zero-Data Retention, giving OpenAI some visibility into how the model is used.
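The tiered gating described above can be sketched as a capability map keyed by verification level. This is a minimal illustration only: the tier names, capability labels, and thresholds below are hypothetical and do not reflect OpenAI's actual TAC schema.

```python
from enum import IntEnum

class Tier(IntEnum):
    """Hypothetical verification levels, lowest to highest."""
    UNVERIFIED = 0
    VERIFIED_INDIVIDUAL = 1
    VERIFIED_TEAM = 2
    TOP_TIER = 3

# Minimum tier required for each capability (illustrative labels).
CAPABILITIES = {
    "general_security_qa": Tier.VERIFIED_INDIVIDUAL,
    "vulnerability_research": Tier.VERIFIED_TEAM,
    "binary_reverse_engineering": Tier.TOP_TIER,
}

def can_use(tier: Tier, capability: str) -> bool:
    """Allow a capability only if the user's tier meets its minimum."""
    required = CAPABILITIES.get(capability)
    if required is None:
        return False  # unknown capability: deny by default
    return tier >= required

# Top-tier users unlock everything, including reverse engineering.
print(can_use(Tier.TOP_TIER, "binary_reverse_engineering"))      # True
# Lower tiers are gated out of the most sensitive features.
print(can_use(Tier.VERIFIED_INDIVIDUAL, "binary_reverse_engineering"))  # False
```

The key design point this sketch captures is deny-by-default: access is granted only when a user's verified tier explicitly meets a capability's threshold, rather than refusals being enforced solely inside the model.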
A Shift in Philosophy
OpenAI’s approach represents a shift from relying primarily on model-level restrictions to an access control model that verifies users before deciding what the model will answer. This is based on three principles:
- Democratized Access: objective verification criteria ensure equitable access.
- Iterative Deployment: safety systems are updated as new risks emerge.
- Ecosystem Resilience: grants and open-source contributions strengthen the broader defensive ecosystem.
In Relation to Anthropic’s Project Glasswing
OpenAI's move cannot be viewed in isolation from Anthropic's recent announcement of Project Glasswing, which restricted access to its powerful Mythos model to just 11 organizations.