In response to Anthropic’s recent unveiling of the Claude Mythos AI model, OpenAI has introduced GPT-5.4-Cyber, a cybersecurity-focused model intended to put advanced defensive capabilities in the hands of a far larger pool of defenders.
Scaling Access to Cybersecurity Tools
OpenAI has announced an expansion of its Trusted Access for Cyber program, aiming to give thousands of verified defenders and hundreds of security teams access to GPT-5.4-Cyber. This version of GPT-5.4 is fine-tuned specifically for cybersecurity applications, with restrictions relaxed to support legitimate defensive work.
The new model introduces advanced features such as binary reverse engineering, allowing users to dissect compiled software to uncover vulnerabilities and detect malicious activities. Initially, GPT-5.4-Cyber is available on a limited basis to select security vendors, organizations, and researchers.
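OpenAI has not published the interface for GPT-5.4-Cyber’s reverse-engineering features, but the kind of triage workflow described above typically begins with local preprocessing of a compiled binary before any artifact reaches an analyst or a model. The sketch below shows one common first step, extracting printable ASCII strings from raw binary data; the helper name and thresholds are illustrative assumptions, not part of any published API.

```python
# Hedged sketch: pull printable ASCII strings out of a compiled binary,
# a routine first step when triaging software for vulnerabilities or
# malicious behavior. Names and defaults here are illustrative only.
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Return runs of printable ASCII characters at least min_len bytes long."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

if __name__ == "__main__":
    # A tiny fabricated blob mixing non-printable bytes with two telltale strings.
    blob = b"\x7fELF\x02\x01\x00\x00/bin/sh\x00\x90\x90GetProcAddress\x00"
    print(extract_strings(blob))  # ['/bin/sh', 'GetProcAddress']
```

Strings such as embedded shell paths or suspicious API names are exactly the sort of low-cost signal a defender might surface locally before asking a cybersecurity-tuned model to reason about the binary’s behavior.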
Access and Verification Process
Individual cybersecurity defenders interested in GPT-5.4-Cyber can apply through the Trusted Access for Cyber program at chatgpt.com/cyber, where access is granted only after a thorough identity-verification process. Enterprise teams, by contrast, must coordinate through their OpenAI account representatives.
OpenAI’s approach is built on three core principles: democratized access, iterative deployment, and fostering ecosystem resilience. This strategy emphasizes broad availability through objective verification processes rather than selective gatekeeping, continuous improvement informed by real-world application, and support for the wider defender community through initiatives like Codex Security.
Comparing with Anthropic’s Approach
This announcement follows the release of Anthropic’s Claude Mythos, an AI model designed to autonomously detect zero-day vulnerabilities. Citing the potential risks of such a capability, Anthropic has restricted its availability to a handful of major organizations under Project Glasswing.
Both Anthropic and OpenAI focus on defensive applications while addressing dual-use risks. OpenAI, however, advocates for widespread access to defensive tools, grounded in verification and accountability, to empower as many legitimate defenders as possible.
Although OpenAI has not disclosed specific performance metrics for GPT-5.4-Cyber, its Codex Security platform has already proven effective, identifying over 3,000 critical vulnerabilities in the open source ecosystem. This highlights the potential impact of deploying advanced AI tools in cybersecurity.
As the landscape of AI-driven cybersecurity evolves, the emphasis remains on balancing accessibility with ethical and safe usage, ensuring that technological advancements benefit the broader community of defenders.
