In December 2025, a 17-year-old in Osaka was detained under Japan's Unauthorized Access Prohibition Act after using malware to extract the personal information of more than 7 million users of Kaikatsu Club, a major Japanese internet cafe chain. His motive was strikingly mundane: buying Pokémon cards. The incident highlights an emerging trend in cybercrime.
The Evolution of AI-Assisted Cyber Attacks
2025 was a pivotal year for AI-driven cyber attacks. Coding assistants that had been error-prone a year earlier matured into capable end-to-end coding tools, roughly doubling the frequency and severity of cybercrime. Reports cited a 75% increase in malicious software on public platforms, a 35% rise in cloud breaches, and AI-generated phishing that outperformed human-written lures. Notably, the attacker demographic shifted: non-technical individuals are now mounting sophisticated attacks.
In February 2025, for instance, three teenagers with no programming experience used ChatGPT to build a tool that attacked Rakuten Mobile's systems 220,000 times, spending their gains on gaming and gambling. In July, an individual used the advanced capabilities of Claude Code to extort 17 organizations, demonstrating how AI can orchestrate complex cybercrime end to end.
Cybercrime Statistics and Trends
Throughout 2025, indicators of malicious activity such as bot operations, malware distribution, and targeted breaches all climbed. Sonatype, for example, tracked a surge from 55,000 malicious open-source packages in 2022 to 454,600 by 2025, with the release of GPT-4 in 2023 and subsequent model advances contributing significantly to the sharpest jumps. Over the same period, the average time to exploit a vulnerability collapsed from more than 700 days in 2020 to just 44 days in 2025.
AI performance on software development benchmarks also improved rapidly. By December 2025, leading models could resolve nearly 81% of real GitHub issues, up sharply from previous years, and this progress in AI-assisted coding has fed directly into the frequency and impact of cyber attacks.
Addressing the Growing Threat
Despite advances in defensive AI, AI-driven attacks currently outpace defensive measures. Edgescan's 2025 report puts the average remediation time for critical vulnerabilities at 74 days, with a significant share never resolved at all in large organizations. The Shai-Hulud attack of September 2025 exemplified the gap: over 500 npm packages were compromised, and substantial financial losses followed.
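When hundreds of npm packages are compromised at once, a first defensive step is simply auditing lockfiles against a published advisory list. The sketch below is a minimal illustration of that idea, assuming a hypothetical denylist of compromised package@version pairs (the entries shown are placeholders, not real advisories):

```python
import json

# Hypothetical denylist; in practice this would come from an advisory feed
# listing packages compromised in a worm-style supply-chain attack.
COMPROMISED = {
    ("example-lib", "1.2.3"),   # placeholder entries, not real advisories
    ("another-pkg", "4.0.1"),
}

def audit_lockfile(lockfile_text: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs from an npm v2/v3 package-lock.json
    that appear on the denylist."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/example-lib"; the root package is "".
        name = path.rpartition("node_modules/")[2] or lock.get("name", "")
        version = meta.get("version", "")
        if (name, version) in COMPROMISED:
            hits.append((name, version))
    return hits
```

Real-world scanners do considerably more (transitive resolution, integrity hashes, install-script inspection), but even this level of check would flag a pinned dependency that matches a known-compromised release.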
Organizations are struggling to keep pace with AI-generated threats such as malicious packages that mimic legitimate libraries. Traditional detection tools have proven inadequate against AI-generated malicious code, prompting interest in approaches like Chainguard Libraries, which aim to neutralize entire categories of vulnerabilities; in tests, Chainguard Libraries blocked 99.7% of malicious npm packages.
As AI technology continues to advance, the cybersecurity landscape remains challenging. Cheaper, more accessible AI tools have lowered the barrier to conducting cyber attacks, and defending against them will demand equally proactive and innovative measures.
