Skip to content
  • Home
  • Cyber Map
  • About Us – Contact
  • Disclaimer
  • Terms and Rules
  • Privacy Policy
Cyber Web Spider Blog – News

Cyber Web Spider Blog – News

Globe Threat Map provides a real-time, interactive 3D visualization of global cyber threats. Monitor DDoS attacks, malware, and hacking attempts with geo-located arcs on a rotating globe. Stay informed with live logs and archive stats.

  • Home
  • Cyber Map
  • Cyber Security News
  • Security Week News
  • The Hacker News
  • How To?
  • Toggle search form
Microsoft Exposes AI Chatbot Manipulation Techniques

Microsoft Exposes AI Chatbot Manipulation Techniques

Posted on February 17, 2026 By CWS

Microsoft has uncovered a new technique where businesses are manipulating AI chatbots using ‘Summarize with AI’ buttons on websites. This method, resembling traditional search engine poisoning, has been named AI Recommendation Poisoning by Microsoft’s Defender Security Research Team. It involves injecting bias into AI systems to influence their responses and recommendations.

Understanding AI Recommendation Poisoning

The technique involves embedding hidden instructions within the ‘Summarize with AI’ buttons. When clicked, these buttons inject persistent commands into an AI assistant’s memory, which can skew recommendations in favor of certain companies. Microsoft identified over 50 such prompts from 31 businesses across various sectors, highlighting potential risks to transparency and trust.

These manipulative actions are executed through specially crafted URLs that pre-populate AI chatbots with biased prompt instructions. This approach is a variant of AI Memory Poisoning, which can also occur through social engineering or cross-prompt injections.

Mechanics of Manipulation

In a typical scenario, clicking a ‘Summarize with AI’ button executes pre-filled commands that manipulate the AI’s memory. Microsoft has noted that such links are also being distributed via emails, further expanding their reach. Examples include URLs that direct the AI to remember specific sources as authoritative for certain topics.

The manipulation relies on the AI’s inability to differentiate genuine user preferences from those inserted by external entities. This has led to the proliferation of tools like CiteMET and AI Share Button URL Creator, which facilitate embedding promotional content into AI assistants.

Implications and Preventive Measures

The consequences of such manipulation are significant, potentially leading to the dissemination of false information and undermining trust in AI-driven insights. Users often accept AI-generated recommendations without verification, making this form of manipulation particularly dangerous.

To mitigate these risks, users are advised to audit AI assistant memories regularly, be cautious of AI-related links from untrusted sources, and approach ‘Summarize with AI’ buttons with skepticism. Organizations should monitor for URLs that contain suspicious prompt instructions to identify potential manipulation.

The rise of AI Recommendation Poisoning underscores the need for vigilance in maintaining the integrity and trustworthiness of AI systems, which play an increasingly vital role in decision-making processes.

The Hacker News Tags:AI, AI memory poisoning, AI recommendations, AI security, chatbot manipulation, Cybersecurity, digital trust, enterprise security, Microsoft, tech news

Post navigation

Previous Post: Langchain SSRF Vulnerability Threatens Internal Security
Next Post: New Cyber Threats Targeting ICS/OT in 2025 Identified

Related Posts

Deepfake Defense in the Age of AI Deepfake Defense in the Age of AI The Hacker News
Taiwan NSB Alerts Public on Data Risks from TikTok, Weibo, and RedNote Over China Ties Taiwan NSB Alerts Public on Data Risks from TikTok, Weibo, and RedNote Over China Ties The Hacker News
Kimwolf Android Botnet Infects Over 2 Million Devices via Exposed ADB and Proxy Networks Kimwolf Android Botnet Infects Over 2 Million Devices via Exposed ADB and Proxy Networks The Hacker News
Fake WhatsApp API Package on npm Steals Messages, Contacts, and Login Tokens Fake WhatsApp API Package on npm Steals Messages, Contacts, and Login Tokens The Hacker News
Microsoft Unveils Windows Terminal Exploit in ClickFix Campaign Microsoft Unveils Windows Terminal Exploit in ClickFix Campaign The Hacker News
ClickFix Attacks Expand Using Fake CAPTCHAs, Microsoft Scripts, and Trusted Web Services ClickFix Attacks Expand Using Fake CAPTCHAs, Microsoft Scripts, and Trusted Web Services The Hacker News

Categories

  • Cyber Security News
  • How To?
  • Security Week News
  • The Hacker News

Recent Posts

  • Node.js Developers Face Advanced Social Engineering Threat
  • Hackers Exploit Code Leak to Spread Malware via GitHub
  • Fortinet Issues Patch for Critical FortiClient EMS Vulnerability
  • Progress ShareFile Flaws Risk Server Takeover
  • European Commission Data Breach from Trivy Attack Unveiled

Pages

  • About Us – Contact
  • Disclaimer
  • Privacy Policy
  • Terms and Rules

Archives

  • April 2026
  • March 2026
  • February 2026
  • January 2026
  • December 2025
  • November 2025
  • October 2025
  • September 2025
  • August 2025
  • July 2025
  • June 2025
  • May 2025

Recent Posts

  • Node.js Developers Face Advanced Social Engineering Threat
  • Hackers Exploit Code Leak to Spread Malware via GitHub
  • Fortinet Issues Patch for Critical FortiClient EMS Vulnerability
  • Progress ShareFile Flaws Risk Server Takeover
  • European Commission Data Breach from Trivy Attack Unveiled

Pages

  • About Us – Contact
  • Disclaimer
  • Privacy Policy
  • Terms and Rules

Categories

  • Cyber Security News
  • How To?
  • Security Week News
  • The Hacker News

Copyright © 2026 Cyber Web Spider Blog – News.

Powered by PressBook Masonry Dark