Cyber Web Spider Blog – News

AI Supply Chain Attack Method Demonstrated Against Google, Microsoft Products

Posted on September 4, 2025 by CWS

Researchers at Palo Alto Networks have uncovered a new attack method that could pose a significant AI supply chain risk. They demonstrated its impact against Microsoft and Google products, as well as the potential threat to open source projects.

Named ‘Model Namespace Reuse’, the AI supply chain attack method involves threat actors registering names associated with deleted or transferred models that developers fetch from platforms such as Hugging Face.

A successful attack can enable threat actors to deploy malicious AI models and achieve arbitrary code execution, Palo Alto Networks said in a blog post describing Model Namespace Reuse.

Hugging Face is a popular platform for hosting and sharing pre-trained models, datasets, and AI applications. When developers want to use a model, they can reference or pull it based on the name of the model and the name of its author, in the format ‘Author/ModelName’.
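The pull-by-name convention can be made concrete with a short sketch. The `parse_model_ref` helper below is illustrative, not part of the Hugging Face API; it only shows that the author namespace is the sole thing tying a reference string to its publisher:

```python
def parse_model_ref(ref: str) -> tuple[str, str]:
    """Split an 'Author/ModelName' reference into author namespace and model name.

    Illustrative helper: platforms like Hugging Face identify a model purely by
    the publisher's account name plus the model name.
    """
    author, sep, model = ref.partition("/")
    if not sep or not author or not model:
        raise ValueError(f"not an Author/ModelName reference: {ref!r}")
    return author, model

# The author part is the only link between the model and its publisher; if that
# account name is later freed up, whoever re-registers it controls what the
# same reference string resolves to.
print(parse_model_ref("example-org/example-model"))  # → ('example-org', 'example-model')
```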

In a Model Namespace Reuse attack, the attacker searches for models whose owner has deleted their account or transferred it to a new name, leaving the old name available for registration.

The attacker can then register an account under the targeted author’s name and create a malicious model with a name that is likely to be referenced by many projects, or by specifically targeted ones.
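That precondition, a model reference that no longer resolves while its author namespace is free to register, can be checked mechanically. The sketch below is hedged: the existence checks are injected as plain callables (against the real hub they would be backed by queries to Hugging Face's model and author endpoints), and the helper names are assumptions, not Palo Alto's tooling:

```python
from typing import Callable

def is_reusable_namespace(ref: str,
                          model_exists: Callable[[str], bool],
                          author_exists: Callable[[str], bool]) -> bool:
    """True when an 'Author/ModelName' reference points at a deleted model
    whose author account is also gone, i.e. exactly the state a Model
    Namespace Reuse attacker looks for before re-registering the name."""
    author = ref.split("/", 1)[0]
    return not model_exists(ref) and not author_exists(author)

# Hypothetical hub state: 'ghost-org/old-model' was deleted and its account
# removed, so the name is up for grabs; 'live-org/model' still resolves.
live_models = {"live-org/model"}
live_authors = {"live-org"}
print(is_reusable_namespace("ghost-org/old-model",
                            model_exists=lambda r: r in live_models,
                            author_exists=lambda a: a in live_authors))  # → True
```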

Palo Alto Networks researchers demonstrated the potential risks against Google’s Vertex AI managed machine learning platform, specifically its Model Garden repository for pre-trained models.

Model Garden supports the direct deployment of models from Hugging Face, and the researchers showed that an attacker could have abused it to conduct a Model Namespace Reuse attack by registering the name of a Hugging Face account associated with a project that had been deleted but was still listed and verified by Vertex AI.

“To demonstrate the potential impact of such a method, we embedded a payload in the model that initiates a reverse shell from the machine running the deployment back to our servers. Once Vertex AI deployed the model, we gained access to the underlying infrastructure hosting the model, specifically the endpoint environment,” the researchers explained.

The attack was also demonstrated against Microsoft’s Azure AI Foundry platform for developing ML and generative AI applications. Azure AI Foundry also allows users to deploy models from Hugging Face, which makes it susceptible to such attacks.

“By exploiting this attack vector, we obtained permissions that corresponded to those of the Azure endpoint. This provided us with an initial access point into the user’s Azure environment,” the researchers said.

In addition to demonstrating the attack against the Google and Microsoft cloud platforms, the Palo Alto researchers examined open source repositories that could be susceptible to attacks because they reference Hugging Face models using Author/ModelName format identifiers.

“This investigation revealed thousands of susceptible repositories, among them several well-known and highly starred projects,” the researchers reported. “These projects include both deleted models and transferred models with the original author removed, causing users to remain unaware of the threat as these projects continue to function normally.”

Google, Microsoft and Hugging Face have been notified about the risks, and Google has since started performing daily scans for orphaned models to prevent abuse. However, Palo Alto pointed out that “the core issue remains a threat to any organization that pulls models by name alone. This discovery proves that trusting models based solely on their names is insufficient and necessitates a critical reevaluation of security in the entire AI ecosystem.”

To mitigate the risks associated with Model Namespace Reuse, the security firm recommends pinning the model in use to a specific commit to prevent unexpected behavior changes, cloning the model and storing it in a trusted location rather than fetching it from a third-party service, and proactively scanning code for model references that could pose a risk.
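Two of those mitigations translate directly into code. Pinning typically means passing a commit revision when loading (the `revision` argument accepted by `transformers` and `huggingface_hub` loaders), and proactive scanning can be as simple as flagging `from_pretrained` calls that pull by name alone. A minimal sketch, with the caveat that the regex covers only the common single-string call shape:

```python
import re

# Flag Hugging Face model references fetched by name alone, with no pinned
# revision. Matches only the simple from_pretrained("Author/ModelName") call
# shape, so a real scanner would need broader coverage.
UNPINNED_REF = re.compile(r"""from_pretrained\(\s*['"]([\w.-]+/[\w.-]+)['"]\s*\)""")

def find_unpinned_refs(source: str) -> list[str]:
    """Return Author/ModelName strings loaded without an explicit revision."""
    return UNPINNED_REF.findall(source)

code = '''
model = AutoModel.from_pretrained("example-org/example-model")
safe  = AutoModel.from_pretrained("example-org/example-model",
                                  revision="0123abc")  # pinned to a commit
'''
print(find_unpinned_refs(code))  # → ['example-org/example-model']
```

The pinned call is not flagged: once a specific commit is fixed, a re-registered namespace cannot silently swap the weights behind the same name.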

Related: Hackers Weaponize Trust with AI-Crafted Emails to Deploy ScreenConnect

Related: PromptLock: First AI-Powered Ransomware Emerges

Related: Beyond the Prompt: Building Trustworthy Agent Systems
