Google says its Big Sleep AI agent recently discovered a critical SQLite vulnerability and thwarted threat actors' efforts to exploit it in the wild.
Big Sleep, an AI agent developed by Google's DeepMind and Project Zero teams, is designed to actively search for unknown vulnerabilities in software.
Google claimed in November 2024 that Big Sleep had managed to find its first real-world vulnerability, an exploitable buffer overflow in the widely used open source database engine SQLite.
The tech giant said at the time that its researchers had tried to find the same vulnerability using fuzzing, but failed to do so.
That earlier SQLite vulnerability was discovered in a version of the software that had yet to be released, meaning users were not at risk.
However, in a blog post published on Tuesday, Google said Big Sleep recently discovered another SQLite vulnerability that was "only known to threat actors and was at risk of being exploited".
The vulnerability, tracked as CVE-2025-6965, has been described as an issue related to the fact that the number of aggregate terms could exceed the number of available columns, leading to memory corruption. The flaw was patched in late June with the release of SQLite version 3.50.2.
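Since the fix shipped in version 3.50.2, the practical takeaway for defenders is to confirm which SQLite build their software actually links against. As a minimal illustration (not part of Google's advisory), the Python sketch below uses the standard library's sqlite3 module, which reports the version of the SQLite engine it is linked with; applications that bundle their own copy of SQLite would need to be checked separately.

```python
import sqlite3

# The fix for CVE-2025-6965 shipped in SQLite 3.50.2 (late June 2025),
# per the advisory referenced above.
PATCHED_VERSION = (3, 50, 2)

def sqlite_is_patched() -> bool:
    """Return True if the SQLite engine linked into this Python build
    is at or above the release containing the CVE-2025-6965 fix."""
    # sqlite3.sqlite_version_info exposes the linked engine's version
    # as a tuple of ints, e.g. (3, 45, 1), which compares naturally.
    return sqlite3.sqlite_version_info >= PATCHED_VERSION

if __name__ == "__main__":
    print(f"Linked SQLite version: {sqlite3.sqlite_version}")
    print(f"Patched for CVE-2025-6965: {sqlite_is_patched()}")
```

Note that this only reflects the SQLite library used by the Python interpreter running the check, not every SQLite instance on a system.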
No other details are available, but memory corruption vulnerabilities can often lead to arbitrary code execution, privilege escalation, data leakage, or denial of service.
"Through the combination of threat intelligence and Big Sleep, Google was able to actually predict that a vulnerability was imminently going to be used and we were able to cut it off beforehand," Google said. "We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild."
SecurityWeek has asked Google to share more technical details, but the company has declined to do so.
It's unclear what information was provided to Big Sleep by threat intelligence experts, and how the company determined that the vulnerability was at risk of being exploited.
Potentially critical SQLite vulnerabilities have come to light every once in a while, but there do not appear to be any reports describing in-the-wild exploitation of such flaws.
For instance, CISA's Known Exploited Vulnerabilities (KEV) catalog does not include any SQLite flaws, although the government agency's list is known to be incomplete.
Google also announced on Tuesday that it's donating data from its Secure AI Framework to the Coalition for Secure AI (CoSAI), an initiative aimed at tackling the cybersecurity risks associated with AI. This will "help accelerate CoSAI's agentic AI, cyber defense and software supply chain security workstreams", the company said.
Related: Grok-4 Falls to a Jailbreak Two Days After Its Release
Related: Google Gemini Tricked Into Showing Phishing Message Hidden in Email