Cyber Insights 2026: Offensive Security; Where It Is and Where It’s Going

Posted on January 28, 2026 By CWS

SecurityWeek’s Cyber Insights 2026 examines expert opinions on the anticipated evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gather their professional opinions. Here we explore offensive security: where it is today, and where it’s going.

Cyber red teaming will change more in the next 24 months than it has in the past ten years.

Malicious attacks are growing in frequency, sophistication, and damage. Defenders need to find and harden system weaknesses before attackers can exploit them. That requires red teams to do more, faster.

Offensive security

“Offensive security is simply a branch of security that focuses on attacking systems to identify weaknesses in order to harden and defend them better,” says Matt Mullins, head hacker at Reveal Security.

Eyal Benishti, CEO and founder at IRONSCALES, calls it ‘proactive defense’.


“Offensive security is about proactively simulating attacker behavior to prioritize attack surface strengthening. It includes, but extends beyond, traditional penetration testing into red teaming and bug bounty programs, providing continuous, intelligence-led validation of how attackers actually operate. It combines human ingenuity, automation, and adversarial simulation to expose weaknesses before they are exploited,” expands Julian Brownlow Davies, senior VP of offensive security and strategy at Bugcrowd.

Pentesting and red teaming are the two main components of offensive security. Their methods of operation overlap, but they serve two separate purposes. Pentesting seeks to find and exploit bugs or weaknesses. Red teaming seeks to test a system’s ability to withstand an actual attack.

“Traditional pentesters tend to provide snapshot views – great for compliance but limited in depth. Red teams operate more like real adversaries: persistent, stealthy, and scenario based. Organizations with greater security maturity are shifting toward red team operations because they provide more meaningful insights into gaps across people, processes, and technology,” says Benishti.

Both functions are evolving and will evolve further during 2026 and beyond. “As the threat landscape evolves, so will offensive security – moving from isolated exercises to continuous, integrated programs,” he continues. “The future is more preemptive: combining offensive insights with threat intelligence, AI, and automation to stay ahead of attackers instead of reacting to them.”

While the role of the independent pentester continues, it is increasingly merging into bug bounty hunting. “The model is shifting toward coordinated offensive operations run by managed or crowdsourced platforms. The crowd provides reach and diversity while the red team provides strategy and narrative realism,” explains Davies.

We will concentrate on red teaming since it is usually – not always – performed in-house.

Some organizations employ external red team specialist firms; others have their own in-house team. “It depends on the size, risk profile, and maturity of the organization. Enterprises with mature security programs are investing in in-house red teams for continuous coverage and institutional knowledge,” suggests Benishti.

That said, he adds, “external red teams still play a vital role – especially for independent assessments, specialized expertise, and to avoid internal blind spots. A hybrid model is emerging: in-house teams for ongoing ops, external partners for fresh perspectives.”

Pablo Zurro, senior product manager at Fortra, adds, “Both are valid and complement each other. An internal red team will be able to run more periodic exercises and test the weakest points of the company, while external consultants will simulate external attackers better and will be able to leverage their experience and lessons learned at other customers, which is very useful at least once a year.”

Offensive security should also seek out the employees most likely to be susceptible to social engineering. “It’s necessary since humans are probably the weakest points of the defensive chain,” continues Zurro, adding, “It’s not necessary to be aggressive and hurt people’s feelings. Regularly running harmless phishing/vishing/smishing simulations is good enough.”
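To make that concrete, here is a minimal sketch of how such a harmless phishing simulation might be instrumented. It is illustrative only, not from any vendor quoted here: the tracking endpoint sim.example.internal and the mail addresses are hypothetical, and the message is printed rather than sent.

```python
import uuid
from email.message import EmailMessage

# Hypothetical internal click-tracking endpoint (an assumption for this
# sketch, not a real service mentioned in the article).
TRACKING_BASE = "https://sim.example.internal/landed"

def build_simulation_email(recipient: str) -> tuple[EmailMessage, str]:
    """Compose a benign simulated phishing email with a per-recipient token.

    The token lets the security team count who clicked, without collecting
    anything more than that -- keeping the exercise harmless, as Zurro suggests.
    """
    token = uuid.uuid4().hex  # maps back to the recipient server-side
    msg = EmailMessage()
    msg["From"] = "it-support@example.internal"  # hypothetical sender
    msg["To"] = recipient
    msg["Subject"] = "Action required: password expiry"
    msg.set_content(
        "Your password expires today. Review your account here:\n"
        f"{TRACKING_BASE}?t={token}\n"
    )
    return msg, token

if __name__ == "__main__":
    msg, token = build_simulation_email("alice@example.internal")
    print(msg)  # a real exercise would hand this to an internal SMTP relay
```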

Goncalo Magalhaes, head of security at Immunefi, says, “Everyone is susceptible to social engineering. Offensive security isn’t about identifying ‘soft targets’ in the workforce; it’s about building a company-wide culture where everyone with access to corporate systems adopts a security mindset.”

With the growing sophistication and scale of AI-enhanced social engineering, this part of offensive security will become increasingly urgent and important.

The primary goal of red teaming is to discover how well the system can withstand attacks. This means red teams need real-time visibility across their entire ecosystem: every asset, pathway, and third-party connection that supports mission systems. “That includes not only hardware and endpoints but also applications, workloads, and APIs that often serve as silent backdoors into critical systems,” says Christian Terlecki, director of federal at Armis.

But the speed and scale of AI-assisted malicious attacks means that future red teaming must become automated and continuous rather than periodic.

Another current evolution is toward fixing rather than merely finding weaknesses. “Rarely do red teams ‘own’ remediation,” says Mullins.

“Traditionally, offensive [red] teams identify issues; defensive [blue] teams fix them. But that wall is crumbling,” suggests Benishti. “More organizations now expect red teams to collaborate with blue teams to prioritize fixes, retest patches, and guide remediation. While offensive security won’t fully ‘own’ the fix, it increasingly plays a hand in making sure issues are resolved – not just reported.”

But collaboration on its own doesn’t solve the usual problem: red teams can generate massive vulnerability lists that overwhelm engineering teams. “Finding vulnerabilities is table stakes. Fixing them automatically – that’s the future of red teaming,” suggests Alex Polyakov, co-founder and CTO at Adversa AI.

“AI is beginning to bridge the gap between identifying and fixing issues. What were separate steps can now happen in the same workflow. AI systems can find vulnerabilities, suggest safe fixes, and validate them,” agrees Wout Debaenst, AI pentest lead at Aikido Security.
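As a rough sketch of what such a single find-suggest-validate workflow could look like, consider the toy pipeline below. The ‘find’ and ‘validate’ steps are real (a simple AST scan for eval() calls); the suggest_fix step is a deliberate stub standing in for whatever model a team might actually call – no vendor’s actual pipeline is being described here.

```python
import ast

def find_vulnerabilities(source: str) -> list[int]:
    """Toy 'find' step: flag calls to eval(), a classic injection sink."""
    return [
        node.lineno
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    ]

def suggest_fix(source: str) -> str:
    """Stub 'fix' step: a real pipeline would prompt an LLM here.
    This placeholder applies the standard eval -> ast.literal_eval swap."""
    return "import ast\n" + source.replace("eval(", "ast.literal_eval(")

def validate(fixed: str) -> bool:
    """'Validate' step: the patch must still parse and the finding must be gone."""
    try:
        ast.parse(fixed)
    except SyntaxError:
        return False
    return not find_vulnerabilities(fixed)

if __name__ == "__main__":
    code = "value = eval(user_input)\n"
    if find_vulnerabilities(code):
        patched = suggest_fix(code)
        print("patched and validated:", validate(patched))
```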

The role of AI in the future of offensive security

Offensive security suffers from the same conundrum afflicting most areas of cybersecurity: there is a growing need for more output at a faster pace, while companies struggle with an ongoing and worsening skills shortage, and tighter budgets to employ the few experts available.

Artificial intelligence is the goose expected to lay the golden solution: more, faster, better, 24/7 automation – with fewer humans required.

Would that life were that simple!

Benefits of AI


Jason Soroko, senior fellow at Sectigo, sees four main advantages offered by AI. First, “AI provides speed and efficiency by processing and analyzing large datasets much faster than humans, quickly identifying potential vulnerabilities.” Second, “It enhances advanced threat detection, as machine learning models can recognize complex patterns and novel attack vectors that traditional methods might miss.”

Third, “AI systems enable continuous monitoring by operating 24/7, providing constant vigilance against emerging threats.” And fourth, he adds, “Resource optimization is achieved by automating routine tasks, allowing human experts to focus on more complex issues that require human intuition and expertise.”

Few people see AI replacing red teams in the short term – but most accept it will assist them. “We’ll see agentic AI applications running red team engagements, but the more sophisticated and novel attacks will probably come from well-funded AI-assisted teams that will (mostly) always be capable of beating the machines,” says Zurro.

“I don’t see a replacement in the mid-term, but more a human/machine symbiosis that will raise the bar to a higher level,” he adds.

Polyakov is all in. “AI is exceptionally good at this work. Red teaming requires creativity, pattern-breaking thinking, and the ability to try thousands of unconventional attack paths. Humans get tired. AI doesn’t. Humans think linearly. AI explores in parallel.”

He adds, “Ironically, the same ‘hallucination’ that creates problems in normal LLM usage becomes a feature in offensive security – it fuels novel attack ideas and unexpected exploit chains when harnessed correctly by experts. In red teaming, AI’s hallucinations aren’t bugs – they’re superpowers.”

Concerns

“We still need human experts to conduct complex and sophisticated operations, as gen-AI is quite stupid at these tasks, and will probably remain so in the near future,” warns Ilia Kolochenko, CEO at Immuniweb and partner in cybersecurity at Platt Law LLP. “While some vendors pompously advertise ‘automated penetration testing’ or claim that their AI has replaced human experts, it’s technically inaccurate and incorrect, to put it mildly.”

He also raises regulatory concerns. “In law, the notion of a penetration test remains quite stable: involvement of independent and qualified human experts.” He warns that providing regulators with a report generated by an AI tool could lead to penalties.

“One of the main concerns is the potential for AI systems to generate false positives or miss certain vulnerabilities that require human intuition and contextual understanding,” says Amit Zimerman, co-founder and CPO at Oasis Security. “Additionally, AI systems must be properly trained, which can be resource-intensive, and may not always account for the nuances of every unique environment or attack vector.”

Ironically, better-trained red teaming AI also becomes a potential threat if bad actors get hold of it. “This is particularly important in cybersecurity, where tools intended to protect could be repurposed for malicious attacks. It’s crucial that organizations adopt strict governance and ethical guidelines when deploying AI in these contexts,” he warns.

Soroko adds the dependency risk. “Over-reliance on AI could diminish human expertise and intuition within cybersecurity teams.”

The use of agentic AI, designed to enhance the performance of the red team, will increase. But agentic AI introduces a new attack surface that can itself be exploited by attackers.

For pentesting

AI promises a rapid boost to the pentesting side of offensive security. It has the potential to find vulnerabilities in code without needing to understand the business context around the code. It also has the potential – in the future, we’re not there yet – to fix the vulnerabilities in the code. But this means it is equally useful to any attacker able to see the code.

“However, gen-AI still lacks the contextual reasoning required to uncover unknown vulnerabilities or design bespoke attack paths. As a result, human pentesters will continue to be irreplaceable in the year ahead,” comments Simon Phillips, CTO of engineering at CybaVerse.

AI is also being used in-house to generate new code through vibe-coding. “This new era of building software through AI is taking off today, but it’s also a major security concern as a lot of the code is being created poorly by novice prompt engineers,” he continues.

The growing requirement for rapid checks on in-house code before it reaches production may fold into the continuous function of the red team in the coming years, leaving external pentesting to bug hunters and periodic pentest engagements that satisfy compliance purposes.

Meantime, “AI-driven SAST tools will redefine code security, detecting logic and architectural flaws that traditional scanners overlook. These tools are rapidly becoming indispensable for pentesters and DevSecOps teams, automating code analysis and vulnerability discovery,” comments Gianpietro Cutolo, staff threat research engineer at Netskope.
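For context, a traditional SAST rule is essentially a syntactic pattern match like the sketch below, which flags database execute() calls whose query string is built dynamically – a likely SQL injection. This is a minimal illustration of the baseline such scanners work from, under the assumption of Python DB-API code; the AI-driven tools Cutolo describes aim to catch the logic and architectural flaws that rules like this miss.

```python
import ast

SQL_SINKS = {"execute", "executemany"}  # common DB-API cursor methods

def flag_dynamic_sql(source: str) -> list[int]:
    """Flag sink calls whose first argument is an f-string, %-format,
    or concatenation -- query text mixed with data, the classic SQLi shape."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not (isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute)):
            continue
        if node.func.attr not in SQL_SINKS or not node.args:
            continue
        arg = node.args[0]
        if isinstance(arg, ast.JoinedStr) or (
            isinstance(arg, ast.BinOp) and isinstance(arg.op, (ast.Add, ast.Mod))
        ):
            findings.append(node.lineno)
    return findings

if __name__ == "__main__":
    sample = 'cur.execute(f"SELECT * FROM users WHERE name = {name}")\n'
    print("possible SQL injection on lines:", flag_dynamic_sql(sample))
```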

But he adds, “The offensive potential is equally significant, demonstrated by the fact that an AI agent now holds the top rank on HackerOne in the US, signaling a future where both defenders and attackers leverage the same intelligent tooling to outpace one another.”

Aikido’s Debaenst points to research: “Ninety-seven percent of organizations plan to adopt AI for pentesting, and nine out of ten believe it will eventually take over most of the field,” he says. “The shift is already underway.”

The future of AI and red teaming

“In 2026, AI will play a supporting role, helping red teams work faster and cover more ground. However, it won’t replace human researchers. Instead, we’ll see red teamers using AI as a force multiplier that automates the basics so they can focus on advanced tactics and deeper testing,” says Emmanouil Gavriil, VP of labs at Hack The Box.

At the same time, he adds, “Red teamers in 2026 will need to be more adaptable than ever. Traditional exploitation skills are no longer enough. The attack surface now includes cloud systems, IoT devices, and AI-powered tools, each requiring different skills. The job is no longer about mastering one domain, but learning to navigate many, and doing it continuously.”

Subho Halder, co-founder and CEO at Appknox, says, “By 2026, AI will automate many aspects of offensive security testing, running simulations, probing for vulnerabilities, and flagging potential risks at unprecedented speed. Single-agent AI systems, capable of reasoning, learning, and self-correcting, will execute sophisticated, repeatable assessments across large codebases and environments.”

Immunefi’s Magalhaes summarizes the way forward. “AI is emerging as an incredibly powerful tool, both for automating tasks and amplifying what small teams can accomplish. In security, that means fewer people may be needed to deliver certain services. On the offensive side, we’re starting to see early signs of AI agents that move faster than human researchers and draw from broader knowledge bases.”

So, yes, he continues, “AI agents will transform offensive security and threat hunting; automation is a game-changer, but only if it’s used together with humans. The best usage is for agentic systems to handle continuous automated testing while humans provide strategic oversight and catch the blind spots that even advanced AI misses.”

The future for offensive security

Much of red teaming is being streamlined. This is made necessary simply by the growth and speed of attacks and the size and complexity of the assets that must be defended.

“The offensive security landscape is set to change more in the next 24 months than in the last 10 years. In 2026, we’ll see the first real convergence: automated offensive testing that understands context, state, and business logic, not just endpoints. Think DAST that behaves like a creative attacker – chaining vulnerabilities, exploiting misconfigurations, and validating impact the way a human red-teamer would,” says Alankrit Chona, CTO and co-founder at Simbian.

“Offensive and defensive security will begin to merge, creating an ecosystem where AI-driven tools probe systems continuously, uncovering weaknesses and hardening them in the same cycle,” suggests Travis Volk, VP global technology solutions and GTM service at Radware.

“The boundary between red teaming, penetration testing, and continuous assurance will blur. The next phase is pre-emptive security, a permanent state of validation,” says Bugcrowd’s Julian Brownlow Davies.

“Red, blue and policy teams working in isolation is no longer tenable; the gaps between them create blind spots that attackers readily exploit,” adds Merlin Gillespie, operations director at Cybanetix. “The idea that red teaming, blue teaming and policy writing can live in their own discrete ivory towers is proving painfully outdated.”

Much of the future for red teaming will depend on how AI continues to evolve. It holds huge promise but still suffers from issues. The biggest advantages will come from the use of agentic AI – but there is a conflict of priorities here. A primary feature of agentic AI is the ability to operate autonomously, without human intervention.


Typically, with agentic use, the final but logical step of independent autonomous remediation is blocked. People are not ready to relinquish ultimate control. But will this last forever? AI has largely given attackers the advantage. They move faster because a mistake isn’t damaging to them. Defenders move more slowly because a mistake could be catastrophic to the business.

“There is still an imbalance as attackers operate with fewer constraints while defenders are tangled in data silos and compliance overheads,” comments Michael Adjei, director of systems engineering at Illumio.

So, with threats growing faster than defenders can react and remediate, will there come a time when business is forced to adopt agentic AI autonomous remediation from within a single automated red/blue team? That is, after all, the Shangri-La of AI cybersecurity – a fully self-healing system.
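What the current human-in-the-loop compromise might look like in practice is sketched below: an approval gate where low-risk, reversible fixes are applied autonomously while anything that could hurt the business waits for an operator. The risk labels and remediation actions are hypothetical, chosen only to illustrate the design, not drawn from any product discussed above.

```python
from dataclasses import dataclass

@dataclass
class Remediation:
    target: str
    action: str
    risk: str  # "low" or "high": how damaging a wrong fix would be

def approved_by_human(r: Remediation) -> bool:
    """Stand-in for a ticketing/chat approval step; a production system
    would block here until an operator signs off."""
    answer = input(f"Apply '{r.action}' to {r.target}? [y/N] ")
    return answer.strip().lower() == "y"

def remediate(queue: list[Remediation]) -> None:
    for r in queue:
        # Low-risk, reversible fixes go through autonomously; anything
        # potentially catastrophic to the business waits for a human.
        if r.risk == "low" or approved_by_human(r):
            print(f"applying: {r.action} -> {r.target}")
        else:
            print(f"deferred: {r.action} -> {r.target}")

if __name__ == "__main__":
    remediate([
        Remediation("web-01", "rotate leaked API key", "low"),
        Remediation("payments-db", "apply kernel patch and reboot", "high"),
    ])
```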

It is ironic that while AI is able to see and analyze what is happening in the present, we remain completely in the dark over where future AI may be taking us.

Related: Zero to Hero – A “Measured” Approach to Building a World-Class Offensive Security Program

Related: FireCompass Raises $20 Million for Offensive Security Platform

Related: Red Teaming AI: The Build Vs Buy Debate

Related: How Do You Know If You’re Ready for a Red Team Partnership?
