Skip to content
  • Home
  • Cyber Map
  • About Us – Contact
  • Disclaimer
  • Terms and Rules
  • Privacy Policy
Cyber Web Spider Blog – News

Cyber Web Spider Blog – News

Globe Threat Map provides a real-time, interactive 3D visualization of global cyber threats. Monitor DDoS attacks, malware, and hacking attempts with geo-located arcs on a rotating globe. Stay informed with live logs and archive stats.

  • Home
  • Cyber Map
  • Cyber Security News
  • Security Week News
  • The Hacker News
  • How To?
  • Toggle search form
AI Assistants Vulnerable to Hidden Memory Manipulations

AI Assistants Vulnerable to Hidden Memory Manipulations

Posted on February 16, 2026 By CWS

A new cybersecurity concern threatens users who rely on AI assistants, leveraging a method termed AI Recommendation Poisoning. This technique allows attackers to embed covert instructions within ‘Summarize with AI’ buttons on various websites and emails.

Understanding the Attack Strategy

The process involves attackers embedding malicious instructions in URL parameters of seemingly innocuous AI-related links. When users click these links, their AI assistants execute the hidden commands, which instruct the AI to prioritize certain companies or products in recommendations.

This exploitation affects the AI’s memory functionalities, designed to personalize user interactions by influencing decisions on health, finance, and security without the user’s awareness. Once injected, these prompts persist across sessions, altering the AI’s responses.

Research Findings and Real-World Implications

Microsoft’s security team uncovered over 50 distinct prompts from 31 companies across 14 sectors employing this tactic for promotional purposes. Legitimate businesses have been caught embedding these manipulative attempts within their online platforms.

The researchers identified the attacks targeting popular AI platforms, including Copilot, ChatGPT, Claude, and Perplexity, using pre-filled prompt parameters. The discovery was made during an analysis of AI-related URLs within email traffic over a two-month period.

Tools and Mitigation Efforts

The ease of deploying this attack is facilitated by tools like the CiteMET NPM package and AI Share URL Creator, which provide ready-made code for incorporating memory manipulation buttons marketed as SEO enhancements for AI assistants.

Users are advised to regularly examine their AI memory settings, avoid clicking on AI-related links from unreliable sources, and scrutinize dubious recommendations by interrogating their AI’s logic. In response, Microsoft has implemented mitigation measures within Copilot and continues to strengthen defenses against such prompt injection attacks.

These developments highlight the importance of vigilance and proactive measures to safeguard AI interactions from covert manipulations, ensuring user trust and data integrity remain protected.

Cyber Security News Tags:AI assistants, AI recommendation poisoning, AI security, AI vulnerabilities, cybersecurity threats, hidden prompts, memory manipulation, Microsoft research, online security, technology news

Post navigation

Previous Post: Google Addresses Latest Chrome Zero-Day Vulnerability
Next Post: Infostealer Targets OpenClaw AI, Exposes Security Flaws

Related Posts

Palo Alto Networks Released A Mega Malware Analysis Tutorials Useful for Every Malware Analyst Palo Alto Networks Released A Mega Malware Analysis Tutorials Useful for Every Malware Analyst Cyber Security News
Hackers Attacking Remote Desktop Protocol Services from 100,000+ IP Addresses Hackers Attacking Remote Desktop Protocol Services from 100,000+ IP Addresses Cyber Security News
Linux Battery Utility Flaw Lets Hackers Bypass Authentication and Tamper System Settings Linux Battery Utility Flaw Lets Hackers Bypass Authentication and Tamper System Settings Cyber Security News
Attackers Abuse Discord to Deliver Clipboard Hijacker That Steals Wallet Addresses on Paste Attackers Abuse Discord to Deliver Clipboard Hijacker That Steals Wallet Addresses on Paste Cyber Security News
DHS Asks OpenAI To Share Information on ChatGPT Prompts Used By Users DHS Asks OpenAI To Share Information on ChatGPT Prompts Used By Users Cyber Security News
Lessons From Mongobleed Vulnerability (CVE-2025-14847) That Actively Exploited In The Wild Lessons From Mongobleed Vulnerability (CVE-2025-14847) That Actively Exploited In The Wild Cyber Security News

Categories

  • Cyber Security News
  • How To?
  • Security Week News
  • The Hacker News

Recent Posts

  • LockBit 5.0 Targets Multiple Systems with Enhanced Ransomware
  • Cloud Password Managers Face Security Challenges
  • Noodlophile Malware Uses Fake Jobs to Evade Security
  • Infostealer Targets OpenClaw AI, Exposes Security Flaws
  • AI Assistants Vulnerable to Hidden Memory Manipulations

Pages

  • About Us – Contact
  • Disclaimer
  • Privacy Policy
  • Terms and Rules

Archives

  • February 2026
  • January 2026
  • December 2025
  • November 2025
  • October 2025
  • September 2025
  • August 2025
  • July 2025
  • June 2025
  • May 2025

Recent Posts

  • LockBit 5.0 Targets Multiple Systems with Enhanced Ransomware
  • Cloud Password Managers Face Security Challenges
  • Noodlophile Malware Uses Fake Jobs to Evade Security
  • Infostealer Targets OpenClaw AI, Exposes Security Flaws
  • AI Assistants Vulnerable to Hidden Memory Manipulations

Pages

  • About Us – Contact
  • Disclaimer
  • Privacy Policy
  • Terms and Rules

Categories

  • Cyber Security News
  • How To?
  • Security Week News
  • The Hacker News