OpenAI’s Recent Security Report Unveils AI Misuse
OpenAI has disclosed that a ChatGPT account, reportedly linked to a Chinese law enforcement official, was used to support large-scale covert influence and cyber operations. The disclosure, detailed in OpenAI’s February 2026 security report, sheds light on the growing misuse of AI by state actors to target dissidents and critics of the Chinese Communist Party (CCP) worldwide.
The report describes what OpenAI calls “Cyber Special Operations,” clandestine campaigns intended to stifle free speech and manipulate public perception both within China and abroad. The ChatGPT account was used primarily to refine periodic status updates on these ongoing campaigns, offering a rare window into China’s state-affiliated disinformation efforts.
Scope and Tactics of Cyber Operations
The scale of these cyber operations is extensive, spanning over 300 foreign social media platforms and involving thousands of fake accounts managed by numerous operators across China. A notable incident in October 2025 involved a ChatGPT session where the user sought assistance in crafting an influence campaign against Sanae Takaichi, Japan’s first female prime minister, following her criticism of human rights practices in Inner Mongolia.
OpenAI’s analysis revealed strategies that included amplifying negative commentary and using fake identities to sway public opinion. Although ChatGPT refused the requests, the operations proceeded, and their execution was confirmed through open-source investigations. The evidence included the spread of specific hashtags and AI-generated content falsely associating Takaichi with extremist groups.
Inside China’s Disinformation Playbook
The ChatGPT sessions exposed a comprehensive strategy comprising over 100 distinct tactics aimed at silencing global dissent. Chinese AI models like DeepSeek-R1 and Qwen2.5 were used for surveillance and content creation, while ChatGPT was employed to enhance documents and reports. Targets included Chinese activist Li Ying and human rights organizations like Safeguard Defenders.
One method involved fabricating legal documents to coerce social media platforms into deactivating dissidents’ accounts. These activities are linked to the “Spamouflage” campaign, which Meta attributed to Chinese law enforcement in 2023. OpenAI also connected the activity to the doxxing website revealscum.com, part of the Spamouflage network.
Recommendations for Mitigating AI-Driven Threats
Entities concerned about AI-driven influence operations are advised to enhance detection systems for inauthentic activities, especially those employing AI-generated evidence. Public figures and government officials should be cautious of unsolicited communications from dubious entities. Governments are encouraged to share insights on these threats and alert civil society about potential online harassment risks.
AI companies must enforce robust content guidelines and continue publishing comprehensive threat analyses to foster industry-wide awareness of platform misuse. These efforts are crucial in countering the use of AI in state-sponsored influence and cyber operations.
