Dec 26, 2025 | Ravie Lakshmanan | AI Security / DevSecOps
A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses via prompt injection.
LangChain Core (i.e., langchain-core) is a core Python package that is part of the LangChain ecosystem, providing the core interfaces and model-agnostic abstractions for building applications powered by LLMs.
The vulnerability, tracked as CVE-2025-68664, carries a CVSS score of 9.3 out of 10.0. Security researcher Yarden Porat has been credited with reporting the vulnerability on December 4, 2025. It has been codenamed LangGrinch.
“A serialization injection vulnerability exists in LangChain’s dumps() and dumpd() functions,” the project maintainers said in an advisory. “The functions do not escape dictionaries with ‘lc’ keys when serializing free-form dictionaries.”
“The ‘lc’ key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it’s treated as a legitimate LangChain object during deserialization rather than plain user data.”
According to Cyata researcher Porat, the crux of the problem has to do with the two functions failing to escape user-controlled dictionaries containing “lc” keys. The “lc” marker represents LangChain objects in the framework’s internal serialization format.
“So once an attacker is able to make a LangChain orchestration loop serialize and later deserialize content including an ‘lc’ key, they’d instantiate an unsafe arbitrary object, potentially triggering many attacker-friendly paths,” Porat said.
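At a high level, the escaping gap can be pictured with a short sketch, assuming the langchain_core.load helpers behave as the advisory describes; the class path in the payload is only illustrative, and the final call is wrapped so the snippet is safe to run on patched releases.

```python
# Conceptual sketch of the escaping gap described above; not a working exploit.
# Assumes langchain_core.load.dumps()/loads() behave as per the advisory.
from langchain_core.load import dumps, loads

# Free-form "user data" that happens to carry LangChain's reserved
# serialization envelope ("lc" / "type" / "id" / "kwargs").
untrusted_metadata = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "prompts", "prompt", "PromptTemplate"],  # illustrative path
    "kwargs": {"template": "injected {x}", "input_variables": ["x"]},
}

# Vulnerable dumps()/dumpd() emit the dict verbatim instead of escaping the
# reserved "lc" key ...
blob = dumps({"metadata": untrusted_metadata})

# ... so a later loads() treats it as a serialized LangChain object and
# instantiates the referenced class instead of returning plain user data.
try:
    restored = loads(blob)
    print(type(restored["metadata"]))  # an object, not the dict that went in
except Exception as exc:
    print(f"rejected (expected on patched releases): {exc}")
```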
This could have various outcomes, including secret extraction from environment variables when deserialization is performed with “secrets_from_env=True” (previously the default), instantiation of classes within pre-approved trusted namespaces such as langchain_core, langchain, and langchain_community, and potentially even arbitrary code execution via Jinja2 templates.
What’s more, the escaping bug allows the injection of LangChain object structures through user-controlled fields like metadata, additional_kwargs, or response_metadata via prompt injection.
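In practice, that means attacker-influenced text returned by a model can smuggle the envelope through a message’s free-form fields. The sketch below uses a hypothetical payload as a stand-in for prompt-injected output and simply prints the serialized form.

```python
# Sketch of the injection vector via message fields; the payload is a stand-in
# for content an attacker steers into the model's output via prompt injection.
from langchain_core.load import dumps
from langchain_core.messages import AIMessage

msg = AIMessage(
    content="benign-looking answer",
    additional_kwargs={
        "payload": {  # attacker-influenced structure nested in a free-form field
            "lc": 1,
            "type": "constructor",
            "id": ["langchain", "prompts", "prompt", "PromptTemplate"],
            "kwargs": {"template": "injected", "input_variables": []},
        }
    },
)

# On vulnerable versions the nested envelope is written out verbatim, so any
# later load()/loads() call in a caching or streaming step would revive it;
# patched releases escape the reserved "lc" key instead.
print(dumps(msg, pretty=True))
```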
The patch released by LangChain introduces new restrictive defaults in load() and loads() by way of an allowlist parameter, “allowed_objects,” that lets users specify which classes can be serialized/deserialized. In addition, Jinja2 templates are blocked by default, and the “secrets_from_env” option is now set to “False” to disable automatic secret loading from the environment.
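Below is a minimal sketch of the hardened call pattern after upgrading, based on the parameter names cited in the advisory (allowed_objects, secrets_from_env); whether allowed_objects accepts classes or dotted paths is an assumption here, so check the release notes for the exact form.

```python
# Sketch only: parameter names come from the advisory, but the exact accepted
# values for allowed_objects are an assumption. Verify against the release notes.
from langchain_core.load import dumps, loads
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Hello {name}")
blob = dumps(prompt)

restored = loads(
    blob,
    allowed_objects=[PromptTemplate],  # explicit allowlist of revivable classes (assumed form)
    secrets_from_env=False,            # now the default: no automatic secrets from the environment
)
```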
The following versions of langchain-core are affected by CVE-2025-68664 –
>= 1.0.0, < 1.2.5 (Fixed in 1.2.5)
< 0.3.81 (Fixed in 0.3.81)
It's worth noting that there exists a similar serialization injection flaw in LangChain.js that also stems from not properly escaping objects with “lc” keys, thereby enabling secret extraction and prompt injection. This vulnerability has been assigned the CVE identifier CVE-2025-68665 (CVSS score: 8.6).
It impacts the following npm packages –
@langchain/core >= 1.0.0, < 1.1.8 (Fixed in 1.1.8)
@langchain/core < 0.3.80 (Fixed in 0.3.80)
langchain >= 1.0.0, < 1.2.3 (Fixed in 1.2.3)
langchain < 0.3.37 (Fixed in 0.3.37)
In light of the criticality of the vulnerability, users are advised to update to a patched version as soon as possible for optimal protection.
“The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations,” Porat said. “This is exactly the kind of ‘AI meets classic security’ intersection where organizations get caught off guard. LLM output is an untrusted input.”
