A essential vulnerability in GitHub Copilot Chat, rated 9.6 on the CVSS scale, might have allowed attackers to exfiltrate supply code and secrets and techniques from personal repositories silently.
The exploit mixed a novel immediate injection method with a intelligent bypass of GitHub’s Content material Safety Coverage (CSP), granting the attacker vital management over a sufferer’s Copilot occasion, together with the flexibility to recommend malicious code or hyperlinks. The vulnerability was reported responsibly by way of HackerOne, and GitHub has since patched the problem.
GitHub Copilot Vulnerability
The assault started by exploiting GitHub Copilot’s context-aware nature. The AI assistant is designed to make use of info from a repository, akin to code and pull requests, to offer related solutions.
Legit Safety researchers discovered that they might embed a malicious immediate instantly right into a pull request description utilizing GitHub’s “invisible feedback” characteristic.
Whereas the remark itself is hidden from view within the person interface, Copilot would nonetheless course of its contents. This meant an attacker might create a pull request containing a hidden malicious immediate, and any developer who later used Copilot to investigate that pull request would have their session compromised.
As a result of Copilot operates with the permissions of the person making the request, the injected immediate might command the AI to entry and manipulate information from the sufferer’s personal repositories.
Bypassing Safety With A URL Dictionary
A serious hurdle for the attacker was GitHub’s strict Content material Safety Coverage (CSP), which prevents the AI from leaking information to exterior domains.
GitHub makes use of a proxy service referred to as Camo to securely render photographs from third-party websites. Camo rewrites exterior picture URLs into signed camo.githubusercontent.com hyperlinks, and solely URLs with a sound signature generated by GitHub are processed.
This prevents attackers from merely injecting an tag to ship information to their very own server. To avoid this, the researcher devised an ingenious methodology.
They pre-generated a dictionary of legitimate Camo URLs for each letter and image. Every URL pointed to a 1×1 clear pixel on a server they managed, in keeping with a legit Safety report.
The ultimate injected immediate instructed Copilot to search out delicate info in a sufferer’s personal repository, akin to an AWS key or a zero-day vulnerability description.
It could then “draw” this info as a sequence of invisible photographs utilizing the pre-generated Camo URL dictionary.
When the sufferer’s browser rendered these photographs, it despatched a sequence of requests to the attacker’s server, successfully leaking the delicate information one character at a time.
The proof-of-concept demonstrated the profitable exfiltration of code from a personal repository. In response to the disclosure, GitHub remediated the vulnerability on August 14, 2025, by utterly disabling all picture rendering throughout the Copilot Chat characteristic, neutralizing the assault vector.
Comply with us on Google Information, LinkedIn, and X for each day cybersecurity updates. Contact us to characteristic your tales.