Jul 16, 2025 | Ravie Lakshmanan | AI Security / Vulnerability
Google on Tuesday revealed that its large language model (LLM)-assisted vulnerability discovery framework found a security flaw in the SQLite open-source database engine before it could have been exploited in the wild.
The vulnerability, tracked as CVE-2025-6965 (CVSS score: 7.2), is a memory corruption flaw affecting all versions prior to 3.50.2. It was discovered by Big Sleep, an artificial intelligence (AI) agent launched by Google last year as part of a collaboration between DeepMind and Google Project Zero.
“An attacker who can inject arbitrary SQL statements into an application might be able to cause an integer overflow resulting in a read off the end of an array,” the SQLite project maintainers said in an advisory.
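Neither Google nor the SQLite team has published the vulnerable code path, but the advisory points at a well-known bug class: an attacker-influenced length or index computation wraps around, a bounds check passes on the wrapped value, and a later read walks past the end of an array. The C sketch below is a minimal, hypothetical illustration of that class, not SQLite’s actual code.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch of the bug class from the advisory -- NOT SQLite's
 * actual code. A 32-bit bounds check of the form start + count <= len can
 * be bypassed when the addition wraps modulo 2^32. */
static int range_check_passes(uint32_t arr_len, uint32_t start, uint32_t count) {
    /* For a large start, start + count wraps to a small value, so the
     * comparison succeeds even though the range lies far outside the array. */
    return start + count <= arr_len;
}

int main(void) {
    uint32_t arr_len = 64;          /* array has 64 elements           */
    uint32_t start = 0xFFFFFFF0u;   /* attacker-influenced start index */
    uint32_t count = 0x20u;         /* start + count wraps to 0x10     */

    if (range_check_passes(arr_len, start, count))
        printf("check passed: reading %u elements from index %u would run "
               "off the end of a %u-element array\n",
               (unsigned)count, (unsigned)start, (unsigned)arr_len);
    return 0;
}
```

Any code that then trusted the check and read `count` elements starting at `start` would perform the out-of-bounds read the maintainers describe.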
The tech giant described CVE-2025-6965 as a critical security issue that was “known only to threat actors and was at risk of being exploited.” Google did not reveal who the threat actors were.
“Through the combination of threat intelligence and Big Sleep, Google was actually able to predict that a vulnerability was imminently going to be used, and we were able to cut it off beforehand,” said Kent Walker, President of Global Affairs at Google and Alphabet.
“We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild.”
In October 2024, Big Sleep was behind the discovery of another flaw in SQLite, a stack buffer underflow vulnerability that could have been exploited to cause a crash or arbitrary code execution.
Coinciding with the development, Google has also published a white paper on building secure AI agents such that they have well-defined human controllers, their capabilities are carefully limited to avoid rogue actions and sensitive data disclosure, and their actions are observable and transparent.
“Traditional systems security approaches (such as restrictions on agent actions implemented through classical software) lack the contextual awareness needed for versatile agents and can overly restrict utility,” Google’s Santiago (Sal) Díaz, Christoph Kern, and Kara Olive said.
“Conversely, purely reasoning-based security (relying solely on the AI model’s judgment) is insufficient because current LLMs remain susceptible to manipulations like prompt injection and cannot yet offer sufficiently robust guarantees.”
To mitigate the key risks associated with agent security, the company said it has adopted a hybrid defense-in-depth approach that combines the strengths of both traditional, deterministic controls and dynamic, reasoning-based defenses.
The idea is to create robust boundaries around the agent’s operational environment so that the risk of harmful outcomes, particularly malicious actions carried out as a result of prompt injection, is significantly reduced.
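As a rough illustration of what such a hybrid control could look like, the sketch below gates every agent action behind a deterministic allowlist first and a reasoning-based check second, so an action runs only if both layers agree. The function names, allowlist, and stubbed model check are assumptions made for illustration; they are not drawn from Google’s white paper.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of a hybrid agent guardrail -- illustration only. */

static const char *ALLOWED_ACTIONS[] = { "read_file", "search_web" };

/* Layer 1: classical, deterministic control. Only actions on a fixed
 * allowlist can ever run, regardless of what the model decides. */
static bool policy_allows(const char *action) {
    for (size_t i = 0; i < sizeof ALLOWED_ACTIONS / sizeof *ALLOWED_ACTIONS; i++)
        if (strcmp(action, ALLOWED_ACTIONS[i]) == 0)
            return true;
    return false;
}

/* Layer 2: stand-in for the dynamic, reasoning-based defense, e.g. a model
 * classifying whether the surrounding context looks like prompt injection. */
static bool model_allows(const char *context) {
    return strstr(context, "ignore previous instructions") == NULL;
}

/* Defense in depth: the action executes only if both layers agree, so a
 * compromised or misaligned reasoning step cannot cross the hard boundary. */
static bool agent_may_execute(const char *action, const char *context) {
    return policy_allows(action) && model_allows(context);
}

int main(void) {
    printf("%d\n", agent_may_execute("read_file", "summarize this file"));            /* 1 */
    printf("%d\n", agent_may_execute("delete_repo", "summarize this file"));          /* 0 */
    printf("%d\n", agent_may_execute("search_web", "ignore previous instructions"));  /* 0 */
    return 0;
}
```

The point of the layering is that the deterministic check bounds the damage even when the reasoning-based layer is fooled, which matches the guardrail framing in Google’s description.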
“This defense-in-depth approach relies on enforced boundaries around the AI agent’s operational environment to prevent potential worst-case scenarios, acting as guardrails even if the agent’s internal reasoning process becomes compromised or misaligned by sophisticated attacks or unexpected inputs,” Google said.
“This multi-layered approach recognizes that neither purely rule-based systems nor purely AI-based judgment are sufficient on their own.”