Researchers at Palo Alto Networks have uncovered a new attack method that could pose a significant AI supply chain risk, and they demonstrated its impact against Microsoft and Google products, as well as the potential threat to open source projects.
Named ‘Model Namespace Reuse’, the AI supply chain attack method involves threat actors registering names associated with deleted or transferred models that developers fetch from platforms such as Hugging Face.
A successful attack can enable threat actors to deploy malicious AI models and achieve arbitrary code execution, Palo Alto Networks said in a blog post describing Model Namespace Reuse.
Hugging Face is a popular platform for hosting and sharing pre-trained models, datasets, and AI applications. When developers want to use a model, they can reference or pull it by the name of the model and the name of its developer, in the format ‘Author/ModelName’.
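For example, with the Hugging Face `transformers` library, a model is typically fetched by that identifier alone; whoever controls the author namespace at download time controls what gets delivered. A minimal sketch (the repository name below is a hypothetical placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "example-author/example-model" is a hypothetical Author/ModelName
# identifier. The call resolves it against Hugging Face at runtime, so it
# fetches whatever the current owner of the "example-author" namespace hosts.
tokenizer = AutoTokenizer.from_pretrained("example-author/example-model")
model = AutoModelForCausalLM.from_pretrained("example-author/example-model")
```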
In a Model Namespace Reuse attack, the attacker searches for models whose owner has deleted their account or transferred it to a new name, leaving the old name available for registration.
The attacker can then register an account under the targeted developer’s name and create a malicious model whose name is likely to be referenced by many projects, or by specifically targeted ones.
Palo Alto Networks researchers demonstrated the potential risks against Google’s Vertex AI managed machine learning platform, specifically its Model Garden repository for pre-trained models.
Model Garden supports the direct deployment of models from Hugging Face, and the researchers showed that an attacker could have abused it to conduct a Model Namespace Reuse attack by registering the name of a Hugging Face account associated with a project that had been deleted but was still listed and verified by Vertex AI.
“To demonstrate the potential impact of such a method, we embedded a payload in the model that initiates a reverse shell from the machine running the deployment back to our servers. Once Vertex AI deployed the model, we gained access to the underlying infrastructure hosting the model, specifically the endpoint environment,” the researchers explained.
The attack was also demonstrated against Microsoft’s Azure AI Foundry platform for developing ML and generative AI applications. Azure AI Foundry also allows users to deploy models from Hugging Face, which makes it susceptible to such attacks.
“By exploiting this attack vector, we obtained permissions that corresponded to those of the Azure endpoint. This provided us with an initial access point into the user’s Azure environment,” the researchers said.
In addition to demonstrating the attack against the Google and Microsoft cloud platforms, the Palo Alto Networks researchers examined open source repositories that could be susceptible to attacks because they reference Hugging Face models using Author/ModelName identifiers.
“This investigation revealed thousands of susceptible repositories, among them several well-known and highly starred projects,” the researchers reported. “These projects include both deleted models and transferred models with the original author removed, causing users to remain unaware of the threat as these projects continue to function normally.”
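Palo Alto Networks did not publish its scanning tooling, but the underlying check can be approximated with the official `huggingface_hub` client: extract Author/ModelName strings from a codebase and flag any whose repository no longer resolves. A rough sketch, with an intentionally simplistic regex:

```python
import re
from pathlib import Path

from huggingface_hub import HfApi
from huggingface_hub.utils import RepositoryNotFoundError

# Crude pattern for quoted Author/ModelName identifiers; real references
# can also appear in configs, notebooks, and CLI arguments.
MODEL_REF = re.compile(r'["\']([\w.-]+/[\w.-]+)["\']')

def find_unresolvable_refs(repo_path: str) -> list[str]:
    """Flag model references whose Hugging Face repo no longer exists."""
    api = HfApi()
    flagged = []
    for source_file in Path(repo_path).rglob("*.py"):
        for ref in set(MODEL_REF.findall(source_file.read_text(errors="ignore"))):
            try:
                api.model_info(ref)  # raises if the repo is gone
            except RepositoryNotFoundError:
                flagged.append(f"{source_file}: {ref}")
    return flagged
```

Note that a check like this only surfaces references that are still dangling; once an attacker re-registers the namespace, the identifier resolves again, which is precisely why a name alone cannot be trusted.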
Google, Microsoft and Hugging Face have been notified about the risks, and Google has since started performing daily scans for orphaned models to prevent abuse. However, Palo Alto Networks pointed out that “the core issue remains a threat to any organization that pulls models by name alone. This discovery proves that trusting models based solely on their names is insufficient and necessitates a critical reevaluation of security across the entire AI ecosystem.”
To mitigate the risks associated with Model Namespace Reuse, the security firm recommends pinning the model in use to a specific commit to prevent unexpected behavior changes, cloning the model and storing it in a trusted location rather than fetching it from a third-party service, and proactively scanning code for model references that could pose a risk.
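Pinning is straightforward where it is supported. In the `transformers` API, for instance, the `revision` parameter accepts a commit hash, so the download should fail outright rather than silently fetching different content if the repository changes hands (both values below are hypothetical placeholders):

```python
from transformers import AutoModelForCausalLM

# Pin to an exact commit SHA instead of a mutable branch such as "main".
# If the namespace is deleted and re-registered, the attacker's new repo
# will not contain this commit, so the download fails rather than pulling
# attacker-controlled files.
model = AutoModelForCausalLM.from_pretrained(
    "example-author/example-model",                      # hypothetical ID
    revision="4c0d3b8f9a1e2d7c6b5a4f3e2d1c0b9a8f7e6d5c",  # full commit hash
)
```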
Related: Hackers Weaponize Trust with AI-Crafted Emails to Deploy ScreenConnect
Related: PromptLock: First AI-Powered Ransomware Emerges
Related: Beyond the Prompt: Building Trustworthy Agent Systems