Significant security vulnerabilities have been identified in the popular AI frameworks LangChain and LangGraph, posing risks to sensitive data such as filesystem contents, environment secrets, and user conversation histories. These frameworks, integral to building applications powered by Large Language Models (LLMs), are extensively downloaded: their core packages recorded over 52 million, 23 million, and 9 million downloads in the past week, according to Python Package Index (PyPI) statistics.
Understanding the Vulnerabilities
A cybersecurity report from Cyera highlights three key vulnerabilities that could enable data breaches in enterprise deployments of LangChain. Vladimir Tokarev, a researcher at Cyera, detailed how each flaw targets a specific class of data, including filesystem files and environment secrets.
The first vulnerability, identified as CVE-2026-34070 with a CVSS score of 7.5, involves path traversal within LangChain, allowing unauthorized access to arbitrary files via a crafted prompt template. The second, CVE-2025-68664, carries a high severity score of 9.3 and involves deserialization of untrusted data, leading to leakage of API keys and environment secrets. The third, CVE-2025-67644, scores 7.3 and pertains to an SQL injection flaw within LangGraph’s SQLite checkpoint implementation, enabling manipulation of SQL queries.
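To make the first class of flaw concrete, here is a minimal, hypothetical sketch of how a path traversal arises when a user-controlled name is joined onto a base directory without validation. This is not LangChain's actual code; the function names, `BASE_DIR`, and file layout are illustrative assumptions.

```python
import os

# Hypothetical template directory (an assumption for illustration).
BASE_DIR = "/app/templates"

def unsafe_load_path(name: str) -> str:
    # Vulnerable pattern: a crafted name such as "../../etc/passwd"
    # escapes BASE_DIR entirely, because join() does not normalize.
    return os.path.join(BASE_DIR, name)

def safe_load_path(name: str) -> str:
    # Mitigation: resolve the candidate path and verify it still
    # falls under the base directory before using it.
    base = os.path.realpath(BASE_DIR)
    candidate = os.path.realpath(os.path.join(BASE_DIR, name))
    if os.path.commonpath([candidate, base]) != base:
        raise ValueError(f"path traversal attempt: {name!r}")
    return candidate
```

The safe variant rejects `../../etc/passwd` while still serving names that resolve inside the base directory, which is the standard containment check for this vulnerability class.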
Potential Impact and Exploitation
Should these flaws be exploited, attackers could potentially access sensitive files, extract confidential secrets through prompt injections, and retrieve conversation histories linked to sensitive workflows. Notably, the deserialization vulnerability, also dubbed LangGrinch, was previously highlighted by Cyata in late 2025.
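The SQL injection class behind the conversation-history risk can be sketched with an in-memory SQLite table of per-thread checkpoints. This is a generic illustration, not LangGraph's checkpoint implementation; the table schema and row contents are assumptions.

```python
import sqlite3

# Toy store mapping a thread/session ID to its saved conversation state.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, data TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('alice', 'secret-history')")
conn.execute("INSERT INTO checkpoints VALUES ('bob', 'bob-history')")

def unsafe_lookup(thread_id: str):
    # Vulnerable pattern: the value is spliced into the SQL string, so
    # an input like "x' OR '1'='1" returns every row in the table.
    query = f"SELECT data FROM checkpoints WHERE thread_id = '{thread_id}'"
    return conn.execute(query).fetchall()

def safe_lookup(thread_id: str):
    # Mitigation: a parameterized query, where the driver treats the
    # supplied value strictly as data, never as SQL syntax.
    return conn.execute(
        "SELECT data FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchall()
```

With the payload `x' OR '1'='1`, the unsafe lookup leaks both users' histories, while the parameterized version returns nothing.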
Patches addressing these vulnerabilities have been released: LangChain version 1.2.22, LangChain-Core versions 0.3.81 and 1.2.5, and LangGraph-Checkpoint-SQLite version 3.0.1. Applying these updates is crucial to safeguarding affected systems.
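Teams triaging their deployments can compare installed versions against the patched releases. The sketch below assumes the PyPI distribution names `langchain-core` and `langgraph-checkpoint-sqlite` and uses a deliberately minimal version comparison; production code should use `packaging.version` and verify the thresholds against the official changelogs.

```python
from importlib.metadata import PackageNotFoundError, version

# Minimum patched releases, taken from the advisory text above.
PATCHED = {
    "langchain-core": "0.3.81",
    "langgraph-checkpoint-sqlite": "3.0.1",
}

def parse(v: str) -> tuple:
    # Naive numeric parse; ignores pre-release/dev suffixes.
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def audit() -> dict:
    """Report 'ok' or 'upgrade' for each affected package found locally."""
    report = {}
    for pkg, fixed in PATCHED.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            continue  # not installed, nothing to patch
        report[pkg] = "ok" if parse(installed) >= parse(fixed) else "upgrade"
    return report
```

Running `audit()` on a host returns a dictionary of affected packages mapped to `"ok"` or `"upgrade"`, giving a quick first pass before a proper dependency scan.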
Broader Security Implications
These findings underscore the persistent security challenges within AI infrastructures, which remain susceptible to traditional vulnerabilities. This situation echoes the recent critical flaw in Langflow, designated CVE-2026-33017, which saw active exploitation shortly after its disclosure. Naveen Sunkavally from Horizon3.ai noted the similarity in root causes between this and previous vulnerabilities, emphasizing the urgency of applying security patches.
Given the interconnected nature of AI frameworks, where LangChain forms a central component of a vast dependency network, vulnerabilities within its core can have cascading effects across numerous libraries and integrations. Addressing these vulnerabilities promptly is imperative to minimize risks and protect data integrity.
In conclusion, the speed with which newly disclosed flaws are exploited underscores the need for immediate action: users of these frameworks are strongly advised to apply the recommended patches to protect against potential data breaches.
