OpenAI announced it has banned a collection of ChatGPT accounts linked to Chinese state-affiliated hacking groups that used the AI models to refine malware and create phishing content.
The October 2025 report details the disruption of several malicious networks as part of the company's ongoing commitment to preventing the abuse of its AI technologies by threat actors and authoritarian regimes.
Since February 2024, OpenAI has disrupted over 40 networks that violated its usage policies. The company stated that it continues to see threat actors incorporate AI into existing techniques to increase speed and efficiency, rather than developing novel offensive capabilities with the models.
China-Linked Actors Enhance Cyber Operations
A key case study in the report focuses on a cluster of accounts OpenAI named "Cyber Operation Phish and Scripts." This cluster, operated by Chinese-speaking individuals, was used to assist in malware development and phishing campaigns.
OpenAI's investigation found that the group's activities were consistent with cyber operations serving the intelligence requirements of the People's Republic of China (PRC). The activity also overlapped with threat groups publicly tracked as UNKDROPPITCH and UTA0388.
These hackers used ChatGPT for two main functions:
Malware Development: They used the AI to help develop and debug tooling, with implementation details overlapping with malware known as GOVERSHELL and HealthKick. The actors also researched further automation possibilities using other AI models such as DeepSeek.
Phishing Content Generation: The group created targeted and culturally tailored phishing emails in multiple languages, including Chinese, English, and Japanese. Their targets included Taiwan's semiconductor sector, U.S. academia, and organizations critical of the Chinese government.
OpenAI noted that the actors used the models to gain "incremental efficiency," such as crafting better phishing emails and shortening coding cycles, rather than creating new types of threats.
The report also detailed the disruption of other accounts linked to Chinese government entities. These users attempted to use ChatGPT to create surveillance and profiling tools.
One banned user sought help drafting a proposal for a "High-Risk Uyghur-Related Inflow Warning Model," designed to analyze travel bookings and police records.
Another instance involved an attempt to design a "social media probe" capable of scanning platforms like X (formerly Twitter), Facebook, and Reddit for political, ethnic, and religious content deemed "extremist."
Other users were banned for using the AI to research critics of the Chinese government and identify the funding sources of accounts critical of the PRC.
Mitigations
In response to these findings, OpenAI disabled all accounts associated with the malicious activities and shared indicators of compromise with industry partners to aid broader cybersecurity efforts.
The report emphasizes that the AI models themselves often acted as a safety barrier, refusing direct requests to generate malicious code or execute exploits. The actors were limited to producing "building-block" code snippets that were not inherently malicious on their own.
OpenAI's findings indicate that while state-sponsored actors are actively experimenting with AI, its primary use is to enhance existing operations.
The company stressed that it continues to invest in detecting and disrupting such abuses to prevent its tools from being used for malicious cyber activity, scams, and covert influence operations.