Several critical zero-day vulnerabilities have been discovered in PickleScan, a popular open-source tool used to scan machine learning models for malicious code.
PickleScan is widely used across the AI ecosystem, including by Hugging Face, to check PyTorch models saved in Python's pickle format.
Pickle is flexible but dangerous, because loading a pickle file can run arbitrary Python code. That means a model file can secretly include commands to steal data, install backdoors, or take over a system.
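As a minimal sketch of why this matters: pickle's `__reduce__` hook lets any object specify a callable for the loader to invoke at deserialization time. Here a harmless `eval("6*7")` stands in for an attacker's real payload:

```python
import pickle

# Demonstration of why pickle is unsafe: __reduce__ lets a pickled
# object name an arbitrary callable to run at load time.
class Payload:
    def __reduce__(self):
        # A real attacker would use os.system or similar; eval("6*7")
        # is a harmless stand-in that proves code execution happened.
        return (eval, ("6*7",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # executes eval("6*7") during deserialization
print(result)                # 42 — attacker-chosen code ran, not a Payload object
```

No method on the object is ever called by the victim; merely loading the file is enough to execute the embedded callable.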
Malicious PyTorch Models Trigger Code Execution
JFrog's research team found that attackers could use these flaws to bypass PickleScan's checks and still run malicious code when the model is loaded in PyTorch.
Official documentation of Python's pickle module with a user warning
The first bug, CVE-2025-10155, lets attackers dodge scanning simply by changing the file extension.
A malicious pickle file renamed to a PyTorch-style extension such as .bin or .pt can confuse PickleScan, causing it to fail to analyze the content. At the same time, PyTorch still loads and runs it.
CVE ID          | Vulnerability Name         | CVSS Score | Severity
CVE-2025-10155  | File Extension Bypass      | 9.3        | Critical
CVE-2025-10156  | CRC Bypass in ZIP Archives | 9.3        | Critical
CVE-2025-10157  | Unsafe Globals Bypass      | 9.3        | Critical
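The class of bug behind the extension bypass can be illustrated with a toy scanner (purely illustrative; not PickleScan's actual code) that decides whether to analyze a file from its extension alone:

```python
import pathlib

# Illustrative sketch only: a scanner that dispatches on the file
# extension never inspects a raw pickle renamed to a PyTorch-style
# extension, even though torch.load() would still deserialize it.
PICKLE_EXTS = {".pkl", ".pickle"}

def naive_scan(path: str) -> str:
    ext = pathlib.Path(path).suffix
    if ext in PICKLE_EXTS:
        return "scanned as pickle"
    return "skipped"  # .bin / .pt fall through the check

print(naive_scan("payload.pkl"))  # scanned as pickle
print(naive_scan("payload.bin"))  # skipped — the blind spot
```

The robust fix is to sniff the file's actual content (e.g. the pickle opcode stream or ZIP magic bytes) rather than trusting the name.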
The second bug, CVE-2025-10156, abuses how ZIP archives are handled by corrupting the CRC (integrity-check) values inside a ZIP file.
Attackers can cause PickleScan to crash or fail, but PyTorch still loads the model from that same broken archive. This creates a blind spot where malware can hide.
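The underlying mechanic can be reproduced with Python's standard zipfile module: flipping a byte of the stored CRC makes a strict, CRC-validating reader abort on read. This is an assumed simplification of the failure mode; PickleScan's real ZIP handling differs:

```python
import io
import zipfile

# Build a small in-memory ZIP with one entry (stand-in for a model archive).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model/data.pkl", b"harmless payload")
data = bytearray(buf.getvalue())

# Corrupt the stored CRC-32 in the central directory file header
# (signature PK\x01\x02; the CRC field sits at offset 16 within it).
cd = data.find(b"PK\x01\x02")
data[cd + 16] ^= 0xFF

# A reader that validates CRCs now fails on extraction — a scanner
# that stops here never sees the pickle payload inside.
corrupted = zipfile.ZipFile(io.BytesIO(bytes(data)))
try:
    corrupted.read("model/data.pkl")
    crc_check_failed = False
except zipfile.BadZipFile:
    crc_check_failed = True

print(crc_check_failed)  # True
```

A more lenient loader that ignores CRC mismatches would still extract and deserialize the entry, which is exactly the scanner/loader disagreement the CVE exploits.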
Proof of Concept – how the file extension allows bypassing detection
The third bug, CVE-2025-10157, targets PickleScan's blocklist of "unsafe" modules by using subclasses or internal imports of dangerous modules such as asyncio.
Attackers can slip past the "Dangerous" label and only be marked as "Suspicious," even though arbitrary commands can still be executed.
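A toy blocklist check (purely illustrative; not PickleScan's actual logic, and the module names are assumptions) shows how a submodule import can dodge an exact-match rule and be downgraded from "Dangerous" to "Suspicious":

```python
# Illustrative sketch of an exact-match blocklist: only the listed
# module names trigger the "Dangerous" verdict, so an internal
# submodule of the same package slips through with a weaker label.
UNSAFE_GLOBALS = {"os", "subprocess", "asyncio"}

def naive_verdict(module: str) -> str:
    if module in UNSAFE_GLOBALS:
        return "Dangerous"
    # Same top-level package, but not an exact match -> weaker verdict.
    if module.split(".")[0] in UNSAFE_GLOBALS:
        return "Suspicious"
    return "Innocuous"

print(naive_verdict("asyncio"))              # Dangerous
print(naive_verdict("asyncio.unix_events"))  # Suspicious — slips past
```

The hardened check would have to resolve submodules and subclasses back to the dangerous capability they expose, not just compare strings.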
Because many platforms and companies rely on PickleScan as a basic defense layer, these flaws create a serious supply-chain risk for AI models.
The catalog provides precise information about the model and the evidence found inside
JFrog's team reported the issues to the PickleScan maintainer on June 29, 2025, and they were fixed in version 0.0.31, released on September 2, 2025.
Users are urged to upgrade immediately and, where possible, avoid unsafe pickle-based models. Use layered defenses such as sandboxes, safer formats like Safetensors, and secure model repositories.
