In March 2026, San Francisco hosted a pivotal event for the cybersecurity sector as the RSA Conference drew professionals from around the globe. The spotlight was on Agentic AI, a concept pushing AI beyond a mere tool to an autonomous actor capable of independent operations.
The Rise of Autonomous AI in Cybersecurity
The cybersecurity industry is witnessing a transformation as AI systems like Mythos begin to perform complex tasks independently, a shift that introduces both potential benefits and risks. The Cloud Security Alliance foresees a rise in AI-driven cyberattacks and calls for countering AI threats with AI-driven defenses. OpenAI has expanded its Trusted Access for Cyber initiative to support thousands of security teams, and Gartner projects investment in AI-based security to surge by 44% in 2026, reaching $47 billion by 2029 and significantly outpacing other security solutions.
Dual-Use Nature of Agentic AI
Technologies such as Mythos illustrate the dual-use nature of AI, benefiting both defenders and attackers. Adversaries are leveraging AI for tasks like autonomous reconnaissance and real-time adaptation, enabling efficient, low-cost attacks with minimal human input. These advances are no longer theoretical: rogue AI agents already exploit vulnerabilities and impersonate legitimate users while requiring little direct human control.
Every major shift in cybersecurity has brought a proliferation of point solutions, often leading to fragmented, complex environments, and the response to AI risk is following the same trajectory. While tools such as AI security posture management and anomaly detection engines provide value, they also add operational friction. What organizations actually need is comprehensive context and control over every entity, human or AI, operating within their systems.
Identity-Based Security for AI
At the AGC Cybersecurity Investor Conference, experts suggested a more pragmatic approach: treating AI as an identity. This perspective integrates AI within established identity security frameworks, rather than requiring additional security layers. Recognizing AI’s behavior as akin to an identity—authenticating, accessing data, and executing actions—simplifies the security strategy.
By leveraging identity threat detection and risk mitigation, organizations can maintain a unified security approach. This entails adaptive verification, behavioral analytics, and risk scoring, enabling the detection of anomalies and enforcement of policies, applicable to both human and machine agents.
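To make this concrete, the idea of scoring risk uniformly for human and machine identities can be sketched in a few lines. This is a minimal illustration, not any vendor's product: the `Identity` class, the signal weights, and the decision thresholds are all hypothetical, chosen only to show how behavioral signals (off-hours access, unusual resources, machine-speed request bursts) might feed adaptive verification.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    """Any authenticated principal: a human user or an AI agent."""
    name: str
    kind: str                          # "human" or "ai_agent"
    usual_hours: range = range(8, 18)  # typical active hours (local time)
    allowed_resources: set = field(default_factory=set)

def risk_score(identity: Identity, hour: int, resource: str,
               requests_last_minute: int) -> int:
    """Combine simple behavioral signals into a single risk score."""
    score = 0
    if hour not in identity.usual_hours:
        score += 30                    # activity outside normal hours
    if resource not in identity.allowed_resources:
        score += 50                    # touching an unusual resource
    if requests_last_minute > 100:
        score += 20                    # machine-speed request burst
    return score

def decide(score: int) -> str:
    """Map a risk score to a policy action."""
    if score >= 70:
        return "block"
    if score >= 30:
        return "step_up_auth"          # adaptive verification
    return "allow"
```

Because the same scoring path handles every principal, an AI agent that suddenly reads payroll data at 3 a.m. at machine speed trips the same anomaly logic as a compromised human account, with no AI-specific detection layer required.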
As autonomous AI agents, whether compromised or malicious, emerge, applying identity-driven security measures offers a practical defense. This strategy supports least privilege enforcement, continuous access validation, and automated responses, extending existing identity security frameworks to AI without adding complexity.
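The three controls named above can likewise be sketched with a short-lived, scoped credential. This is an illustrative toy, assuming a hypothetical `ScopedToken` issued to an agent: least privilege comes from the fixed scope set, continuous validation from re-checking expiry on every call, and the automated response is simply forcing re-authentication once the token lapses.

```python
import time

class ScopedToken:
    """Short-lived credential granting an AI agent only named scopes."""
    def __init__(self, subject: str, scopes: set, ttl_seconds: int = 300):
        self.subject = subject
        self.scopes = frozenset(scopes)            # least privilege: fixed grant
        self.expires_at = time.time() + ttl_seconds

    def is_valid(self, now: float = None) -> bool:
        return (now if now is not None else time.time()) < self.expires_at

def authorize(token: ScopedToken, action: str, now: float = None) -> str:
    """Continuous validation: re-check expiry and scope on every request."""
    if not token.is_valid(now):
        return "revoked"               # automated response: force re-auth
    if action not in token.scopes:
        return "denied"                # action outside the granted scope
    return "granted"
```

The point of the sketch is that nothing here is AI-specific: the same token lifecycle already used for service accounts extends to autonomous agents, which is exactly the "no added complexity" argument.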
Conclusion: Embracing AI as Identifiable Entities
The discussions in San Francisco underscored a significant future trend in cybersecurity: the rise of independently acting entities, many of which will be AI. As AI technologies like Mythos advance, cybersecurity strategies must adapt. The simplest and most effective defense may involve treating AI as an identity. By embedding AI security within identity threat detection frameworks, organizations can shield themselves against rogue agents without complicating their defense systems.
