SecurityWeek’s Cyber Insights 2026 examines expert opinions on the anticipated evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gain their professional views. Here we explore malware and malicious attacks in the age of artificial intelligence (AI).
The big takeaway from 2026 onward is the arrival and increasingly effective use of AI, and especially agentic AI, that will revolutionize the attack landscape. The only question is how quickly.
Michael Freeman, head of threat intelligence at Armis, predicts, “By mid-2026, at least one major global enterprise will fall to a breach initiated or significantly advanced by a fully autonomous agentic AI system.”
These systems, he continues, “use reinforcement learning and multi-agent coordination to autonomously plan, adapt, and execute an entire attack lifecycle: from reconnaissance and payload generation to lateral movement and exfiltration. They continuously adjust their approach based on real-time feedback. A single operator will now be able to simply point a swarm of agents at a target.”
The UK’s NCSC is slightly more reserved: “The development of fully automated, end-to-end advanced cyberattacks is unlikely [before] 2027. Skilled cyber actors will need to remain in the loop. But skilled cyber actors will almost certainly continue to experiment with automation of elements of the attack chain…”
Both opinions could be accurate. We don’t yet know how the adversarial use of AI will pan out over the next few years. What we do know is that attacks will increase in volume, speed and targeting, assisted by artificial intelligence.
Malware, malicious attacks and AI
Effects
Almost every phase of an attack chain can be automated by AI. One example is the speed with which attackers will reverse engineer a newly released patch, develop an exploit for the vulnerability, and discover which companies are vulnerable – almost certainly before the average firm can initiate the patch.
A second example could be the delivery of finely targeted attacks at the scale of traditional spray and pray attacks. “Malware is becoming far more targeted and personal. Attackers are moving away from mass ‘spray and pray’ tactics and are focusing on specific individuals, organizations, or systems,” says Mehran Farimani, CEO at RapidFort.
“By using data gathered from social media, breaches, and online behavior,” he continues, “they can craft attacks that look legitimate and exploit very specific vulnerabilities. Future malware will feel smarter and stealthier, adapting to defenses, learning from user behavior, and blending into normal activity.”
“Forget ‘spray and pray’,” adds Shaun Cooney, CPTO at Promon, “this is more akin to mass targeting with a sniper rifle.”
James Wickett, CEO at DryRun Security, adds the low cost of using AI to the advance of precision targeting. “The economics have flipped,” he says. “The cost to go from vulnerability discovery to exploit used to be weeks and thousands of dollars. Now it’s near zero. So instead of mass ‘spray and pray’ campaigns, we’ll get micro-targeted attacks built for a single system, a single company, maybe even a single developer.”
A third example is the media’s headline threat from AI – the automation of the whole attack lifecycle, from vulnerability detection and exploit production to malware payload delivery and data exfiltration. Cory Michal, CSO of AppOmni, calls it the rise of ‘vibe-hacking’. “We’ve observed attackers using AI to automatically generate data extraction code, reconnaissance scripts, and even adversary-in-the-middle toolkits that adapt to defenses. They’re essentially ‘vibe-hacking’, using generative AI to better mimic authentic behavior, refine social engineering lures, and accelerate the technical aspects of intrusion and exploitation.”
When these components can be chained together under the orchestration of agentic AI, we will be closer to the one-click fully automated attack.
“LLM-enabled malware has already moved from proof-of-concept to practice,” says Steve Stone, SVP of threat discovery & response at SentinelOne. “Our discovery of MalTerminal (the earliest known GPT-4-powered malware capable of generating ransomware or reverse-shell code at runtime), together with ESET’s PromptLock sample and emerging campaigns like LameHug and PromptSteal, shows how attackers are experimenting with AI to create polymorphic, self-evolving payloads.”
These tools blur the line between code and conversation, he continues, “allowing malicious logic to be generated dynamically and evade traditional signatures.”
AI agents can already prepare the stages, while agentic AI will be the glue that chains them behind a single click. We’re not there yet, but the potential exists and that future will undoubtedly come.
Ransomware
Extortion will remain a primary feature of malicious attacks simply because of its success. According to FinCEN, $2.1 billion was paid in ransoms across the three years 2022 to 2024. In 2023 the figure amounted to $1.1 billion (the all-time high) but subsided to $734 million in 2024.
Two years can hardly be considered a trend, but many commentators believe that ransomware is slowly becoming less successful due to increased pressure against ransom payments and improved cyber defenses. Counterintuitively, if true, this ‘trend’ may be strengthened rather than reversed by the rise of AI.
Jason Baker, managing security consultant of threat intelligence at GuidePoint Security, explains: “AI-generated ransomware, or other malware used for extortion, presents a problem for the users – namely, they are unlikely to fully understand how it works, or how to troubleshoot or debug issues.”
Now imagine you’re an extortionist, he continues. “Your victim has paid, and your AI-generated decryption tool doesn’t work. How do you fix this? Do you have any incentive to fix it? And how long do people keep paying you ransoms once word gets out that you can’t undo the damage you’ve done?”
The return of DDoS?
DDoS declined because of the success of ransomware – but it may return with any decline in ransomware. “Attackers are reverting to one of their oldest and most disruptive tools: the denial-of-service attack. In 2026, we’ll see a record-setting resurgence of DDoS activity: the largest volumetric attack ever recorded, and the highest requests-per-second rate in history,” warns David Holmes, application security CTO at Thales.
He notes that Imperva’s network is already seeing early signs: attacks that are 50% larger than anything we’ve seen before.
“For threat actors, the playbook is simple. If they can’t extort you with encryption, they’ll take you offline instead. Organizations that spent the past few years fortifying against ransomware will now have to look outward again, reinforcing cloud-based DDoS protection and adaptive mitigation to withstand the next wave. The attackers haven’t disappeared; they’ve simply changed tactics, and in 2026, they’ll come roaring back.”
AI will play a major part in enabling and improving the efficiency of these DDoS attacks.
The no-malware alternative
The no-malware alternative isn’t entirely malware-free, but the malware is limited to third-party infostealers.
“The defining shift in malware heading into 2026 is the consolidation of the entire attack chain around infostealers. They’ve become the entry point, the data broker, the reconnaissance layer, and the fuel for everything that comes after,” suggests the Flashpoint Analyst Group, noting that 1.8 billion credentials were stolen by infostealers in the first half of 2025.
The Group continues, “AI-generated malware gets headlines, but threat actors don’t need fully autonomous malware when infostealers already automate the hardest part: initial compromise at scale.” Those same stealers no longer just collect passwords – they also collect session cookies, access tokens, host metadata, browser profiles and more. The attacker can assume the victim’s identity outright.
Once inside the target network, a seasoned attacker can live off the land (LotL), effectively invisible until data exfiltration, without the use of any malware.
This scenario is supported by Adrian Culley, senior sales engineer at SafeBreach. “The preferred method of intrusion is shifting universally toward identity-led, malware-free intrusions,” he says. “The focus on LotL TTPs allows intrusions to blend into normal network activity.”
Infostealers can provide easy access, while LotL offers stealthy collection and exfiltration of data without requiring malware. Extortion may remain the primary motive, but “Think less ‘pay to decrypt’, and more ‘pay to stop leaks’,” suggests Yaz Bekkar, principal consulting architect XDR at Barracuda Networks.
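The value of a stolen session cookie is that it lets the attacker impersonate an already-authenticated user. One common countermeasure is binding a session token to the context it was issued in. The sketch below illustrates the idea with a minimal, hypothetical check; the event fields and function names are illustrative assumptions, not any vendor’s API.

```python
# Minimal sketch of session-hijack detection: remember the context
# (IP, user agent) each session token was issued to, then flag any
# request that replays the token from a different context -- the
# pattern left behind when an infostealer exfiltrates cookies.
# All names and values here are hypothetical examples.

issued = {}  # token -> (ip, user_agent) recorded at login time


def record_login(token, ip, user_agent):
    """Store the context in which a session token was legitimately issued."""
    issued[token] = (ip, user_agent)


def is_suspicious(token, ip, user_agent):
    """Return True if the token is unknown or reused from a new context."""
    original = issued.get(token)
    if original is None:
        return True  # token was never issued here: reject outright
    return original != (ip, user_agent)


record_login("abc123", "198.51.100.7", "Firefox/143.0")
print(is_suspicious("abc123", "198.51.100.7", "Firefox/143.0"))  # False
print(is_suspicious("abc123", "203.0.113.9", "curl/8.5.0"))      # True
```

Real deployments soften the exact-match test (mobile IPs rotate constantly), typically scoring anomalies across several signals instead of rejecting on any single mismatch.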
The new criminal ecosystem
Hacker levels
Only sophisticated organized crime groups and nation state actors will have the immediate technical skill to realize the full potential of artificial intelligence. But AI is removing the entry barrier for new and unskilled hackers. Consequently, there will be three distinct classes of bad actor in the future: elite nation state, organized crime, and a rapidly expanding script kiddie level.
“The criminal ecosystem will change,” explains Bekkar. “With AI, you don’t need deep skills, you need ideas. As barriers to entry drop even further, more low-skilled actors will become more dangerous, faster. At the same time, the dominant gangs won’t disappear; instead, they’ll run ‘platforms’ and affiliate programs, renting out AI-driven kits.”
“The barrier to entry has collapsed, giving novice attackers far more reach,” says Farimani. The short-term effect will be more efficient and more finely targeted attacks from the established cybercrime gangs and nation state actors, and a massive increase in less sophisticated attacks by the script kiddies.
The overall effect of the script kiddie wave is unclear. Baker suggests, “Lower knowledge barriers will increase the volume of attacks but not necessarily the sophistication. Well-defended organizations will still be able to filter out the majority of unsophisticated attacks.”
Nevertheless, “While these individuals won’t match nation-states in resources or intelligence-gathering, they’ll have unprecedented power to launch high-impact attacks. This democratization of capability means the overall threat volume and variety will grow significantly,” warns Matt Gorham, leader of PwC’s cyber and risk innovation institute.
“Could script kiddies operate like a nation-state? Not in terms of capability, but with stealer logs delivering turnkey access, the damage they can cause starts to look uncomfortably similar,” adds the Flashpoint Analyst Group.
“Cyberattacks will be just as damaging as nation-state attacks next year,” says Dave Spencer, director of technical product management at Immersive. “Criminals don’t need to be sophisticated to cause harm. Look at Scattered Spider – teenagers calling help desks and resetting passwords. That’s not sophisticated.” But it has certainly been effective.
DryRun’s Wickett: “AI won’t make everyone a hacker overnight, but it will close the gap between the script kiddie and a new, bespoke APT.”
“As technology continues to democratize access to advanced capabilities,” continues Adam Darrah, VP of intelligence at ZeroFox, “that gap will keep narrowing. The result is a much larger pool of actors, more noise, and more risk across the board.” The script kiddies will become better script kiddies.
The criminal underworld
One question remains: will the shakeup occurring in the active hacking world reshape the criminal underworld market?
“The big money will move from stolen identities to stolen code and trade secrets – things AI systems can directly weaponize or learn from,” suggests Wickett. “Instead of selling raw malware, people will sell tailored toolchains: prebuilt reconnaissance scripts, AI-driven exploit builders, and access kits for specific industries. The next underground market isn’t going to look like a ransomware-as-a-service forum. It’s going to look more like GitHub for bad actors.”
“Automation may disrupt middlemen but will also create new marketplaces for specialized AI malware, zero-day commoditization and tailored exfiltration services. As enterprise IP becomes more profitable and easier to monetize, markets will likely shift toward high-value corporate IP and trade secrets alongside identity data,” agrees Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University.
Dario Perfettibile, VP and GM of European operations at Kiteworks, suggests that the underworld market will follow the AI-driven shift toward precision targeting. “This transformation will weaken dark web markets for bulk stolen credentials while raising demand for curated access to specific data exchange platforms. Rather than selling millions of compromised accounts, criminals will broker targeted access to exchanges handling valuable IP, proprietary algorithms, or competitive intelligence.”
GuidePoint Security’s Baker sees a similar relationship between underworld offerings and above-ground operations. Invoking his view that hackers will worry about their ability to troubleshoot AI-generated malware, “The need for reliable and fixable malware will likely remain, though its customer base may become more concentrated or restricted,” he suggests. “Malware-as-a-service remains a profitable business model and may be perceived as less likely to attract law enforcement scrutiny than directly conducting intrusions.”
The demand for MaaS could even increase with the anticipated growth of script kiddie hackers who may not have the expertise to develop their own malware.
Also mirroring the hacker migration to AI, Barracuda Networks’ Bekkar suggests, “AI turns commodity malware into something that’s effectively free of charge. Brokers pushing basic kits or generic access will become less relevant as the value shifts to what’s now truly scarce: high-quality initial access, verified corporate data, bespoke exploits, and, above all, stolen intellectual property.”
Charlie Eriksen, security researcher at Aikido Security, sees a downsizing. “Large data brokers are giving way to smaller groups trading specific types of stolen data or access. We’ve seen this pattern in several major supply-chain compromises that began with stolen publishing credentials. The market is moving from trading stolen identities to trading stolen trust, and that’s where much of the risk now lies.”
The Flashpoint Analyst Group agrees with the ‘trust’ element, but not necessarily any downsizing. “Rather than weakening traditional access brokers, infostealers are transforming them. Instead of selling RDP or VPN access manually, brokers now move bulk identity profiles enriched with metadata: system specs, geolocation, corporate domains, session tokens, and host fingerprints.”
Backed by the large and growing success of infostealers, “The marketplace is moving from stolen credentials to full digital identities that allow high-confidence impersonation. Stolen IP, source code, and proprietary data are becoming more common in stealer logs because attackers are scraping developer tools, browser-stored secrets, and cloud app credentials directly from infected endpoints. Dark-web markets are starting to look more like identity-based supply chains.”
The underworld market will inevitably follow above-ground hacker demand, but both are currently in a state of flux.
Cybersecurity defense in the age of AI attacks
Jim Salter, senior management consultant at CyXcel, points to a comment from the UK’s NCSC: “Cybercriminal attackers target vulnerabilities, not sectors, so every organization with digital assets is a potential target.”
He comments, “As reliance on digital infrastructure in companies of all sizes grows, the opportunity for cyber criminals to exploit vulnerabilities will also grow.” The incidence of potential vulnerabilities is also increasing, through the rapid deployment of vibe coding.
Julie Davila, VP of product security at GitLab, expands: “Next year will bring a tidal wave of security risk as adversarial agents lower the barriers to executing increasingly complex attacks. In other words, agents make it much easier to exploit any vulnerability within a system. The exploitation ‘likelihood lever’ for every vulnerability has just gone up.”
She adds, “Organizations that have prioritized foundational security hygiene, including efficient patch management, will be better prepared to defend themselves and reduce existing risk across software environments and their software supply chain.”
This is the ‘eat your cyber veggies’ exhortation from companies such as Cisco and Splunk. Eating vegetables is boring but essential for health. Cyber veggies are the cyber hygiene basics: patching, phishing-proof MFA, least privilege, segmentation, backups, etcetera.
Mick Baccio, global security advisor at Cisco Foundation AI, comments, “The building blocks of security, the cyber veggies, have been around for a long time; and if you don’t do them, bad things happen. They’re super applicable to things like AI and software development. There’s no silver bullet, of course, but it will solve a tremendous number of problems for things like account takeover, lateral movement, and the vulnerabilities that shouldn’t exist.”
If you want to survive the malicious side of AI, it’s essential that you start with the cyber veggies. But since there really is no silver bullet, you still need to layer additional security on top.
“AI-enabled malware mutates its code, making traditional signature-based detection ineffective. Defenders need behavioral EDR that focuses on what malware does, not what it looks like,” says AppOmni’s Michal. “Detection should key in on unusual process creation, scripting activity, or unexpected outbound traffic, especially to AI APIs like Gemini, Hugging Face or OpenAI.”
He continues, “By correlating behavioral signals across endpoint, SaaS, and identity telemetry, organizations can spot when attackers are abusing AI and stop them before data is exfiltrated.”
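One of the simplest behavioral signals Michal describes is outbound traffic to AI API endpoints from processes that have no business calling them. The sketch below shows the idea in miniature; the event format, process names, and domain watchlist are illustrative assumptions, not a real EDR schema (real telemetry would also carry timestamps, parent processes, and destination ports).

```python
# Minimal sketch: flag outbound connections to well-known AI API hosts
# from a stream of (process_name, destination_host) events. A Word
# process calling an LLM API is a far stronger signal than a browser
# doing so. Domains listed are the public API hostnames; everything
# else here is a hypothetical example.

AI_API_DOMAINS = {
    "api.openai.com",                      # OpenAI
    "generativelanguage.googleapis.com",   # Gemini
    "huggingface.co",                      # Hugging Face
}


def flag_ai_api_traffic(events):
    """Return the events whose destination matches a watched AI API domain
    (exact match or any subdomain of it)."""
    flagged = []
    for process, host in events:
        if any(host == d or host.endswith("." + d) for d in AI_API_DOMAINS):
            flagged.append((process, host))
    return flagged


events = [
    ("winword.exe", "api.openai.com"),   # suspicious: Office process
    ("chrome.exe", "example.com"),       # ordinary browsing
]
print(flag_ai_api_traffic(events))  # [('winword.exe', 'api.openai.com')]
```

In practice this single signal would feed a correlation layer, as Michal suggests, rather than trigger a block on its own, since plenty of legitimate software now calls these APIs.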
RapidFort’s Farimani stresses, “The focus of security teams must shift to minimizing exposure and reducing time-to-remediation, because the offensive side is already automated.”
In short, “In 2026, cyber resilience will depend on out-learning, not just out-blocking, the adversary,” explains Kirsty Paine, field CTO at Splunk and fellow at WEF.
“In 2026, we’ll see the rise of AI-enabled malware that can autonomously adapt in real time to evade detection. We’ve already seen hints of this from research proofs of concept like BlackMamba, but next year we can expect to see AI-enabled malware deployed in increasingly sophisticated attacks that learn, blend in, and modify their behavior based on environmental signals without a human operator ever touching the keyboard. This shift will reinforce the relevance of David Bianco’s ‘Pyramid of Pain’ where, as adversaries rely less on static artifacts at the bottom of the pyramid, defenders must move higher to focus on proactively disrupting attacker tools, behaviors, and TTPs.”
Final thoughts
From 2026 onward, organizations will need to double down on the importance of their cybersecurity. It’s not that artificial intelligence will invent new threats, but it will find and exploit vulnerabilities with greater stealth, considerably faster, and in greater volumes than we have seen before.
We will need to believe in the basics. We must eat our cyber veggies; and then we must overlay additional layers of security. We will need to use our own AI to detect and block the attackers’ use of AI, while simultaneously ensuring they cannot turn our systems against us by hijacking our agentic AI’s APIs, which we may not even know about.
It ain’t gonna be easy, but it’s gotta be done if we want to survive and thrive.
Related: The Wild West of Agentic AI – An Attack Surface CISOs Can’t Afford to Ignore
Related: Beyond GenAI: Why Agentic AI Was the Real Conversation at RSA 2025
Related: AI Emerges as the Hope—and Risk—for Overloaded SOCs
Related: 5 Essential Steps to Prepare for AI-Powered Malware in Your Connected Asset Ecosystem
