SecurityWeek’s Cyber Insights 2026 examines expert opinions on the anticipated evolution of more than a dozen areas of cybersecurity interest over the next 12 months. We spoke to hundreds of individual experts to gain their professional opinions. Here we examine understanding and managing the External Attack Surface with the aim of evaluating what is happening now and preparing leaders for what lies ahead in 2026 and beyond.

Shadows are dark and dangerous places where bad guys attack anything or anyone they find. In 2026, AI will increase the number and size of shadows, along with the entire external attack surface.
External Attack Surface Management (EASM) is the process of discovering and managing every asset an organization exposes to the internet. These assets may be known (and therefore documented and possibly secured) or unknown (and therefore invisible and almost certainly insecure). While EASM covers both categories, we are primarily concerned with the invisible assets.

“This includes domains, servers, APIs, and cloud assets that may not be tracked internally,” says Chris Boehm, field CTO at Zero Networks. “It matters because most companies do not have a complete inventory of what’s visible from the outside, and attackers often find these gaps before defenders do.”
Chris Boehm, field CTO at Zero Networks
EASM provides the inventory. “The benefit lies in exposure governance: accepting that not all risk can be removed, but through visibility, measurement and monitoring, there is scope to prioritize and treat risk in a way that supports business alignment and accountability,” explains Dave McGrail, head of business consultancy at Xalient.

The invisible external assets are the easiest avenue for attackers to discover and exploit. “By continuously discovering and prioritizing internet-facing services, misconfigurations, expired certificates, dormant assets and third-party exposures, EASM reduces the blind spots that lead to breaches,” adds Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University.
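The expired-certificate checks Curran mentions are easy to picture in code. The sketch below is illustration only, not any vendor’s implementation: a hypothetical triage routine that reads the leaf certificate’s notAfter field from live TLS endpoints and surfaces the endpoints closest to (or past) expiry.

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Parse an OpenSSL-style notAfter string, e.g. 'Jun  1 12:00:00 2026 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - now).days

def fetch_not_after(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Read the leaf certificate's notAfter field from a live TLS endpoint."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

def triage(hosts, now=None, warn_days=30):
    """Return (host, days_left) pairs, most urgent first."""
    now = now or datetime.now(timezone.utc)
    results = []
    for host in hosts:
        try:
            days = days_until_expiry(fetch_not_after(host), now)
        except (OSError, ssl.SSLError):
            days = -1  # unreachable or TLS failure: treat as urgent
        if days <= warn_days:
            results.append((host, days))
    return sorted(results, key=lambda pair: pair[1])
```

In practice the host list itself is the hard part; EASM tooling exists largely to build and maintain that list in the first place.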
The problem is the basic asymmetry of cybersecurity: cybercriminals need only find one weakness while defenders must be perfect all the time, everywhere. “EASM seeks to be the proverbial finger in the dyke of the organization, continually trying to watch the company defenses and be aware when a system becomes vulnerable to attack; so a mitigation can be applied before the attacker takes advantage of the weakness,” says Dave Tyson, chief intelligence officer at iCOUNTER.

“EASM is simply the habit of knowing what the internet says you’re running, catching the weak [invisible] entry points and closing them before someone else walks in,” says Yaz Bekkar, principal consulting architect XDR at Barracuda Networks. “It’s a case of locking the front door before you guard the vault.”
Continuous expansion of the external attack surface
The size of the hidden external attack surface is constantly expanding. The reason can be found in the combined nature of modern business and modern technology. Technology changes rapidly, and business seeks to take advantage rapidly – at least ahead of its competitors, to maintain, improve, or gain a competitive edge.

The result is that new technology is deployed faster than security can react, and these days, often without security’s knowledge.

“The attack surface keeps expanding as cloud and remote work make it easy for teams to deploy new services without central oversight. Developers sometimes create their own environments to test or deploy applications, and these can sit outside security’s visibility,” says Boehm.

“The surface grows because organizations add cloud services, APIs, SaaS apps, IoT devices, developer environments, CI/CD pipelines and third-party integrations faster than they can inventory and secure them,” adds Curran. “Shadow resources (temporary dev/test instances, forgotten domains, contractor access) and the shift to edge and hybrid cloud mean it will keep expanding – especially across multi-cloud endpoints, API ecosystems and partner/TTP integrations.”

But “the single biggest driver of the expanding attack surface of any organization is, ironically, not in their control – it is in the control of their third parties and supply chain partners who have shared two-way data connections between them,” warns Tyson.

These trusted pathways have changed the risk calculus each defender faces. “Imagine,” he explains, “an organization connected to 300 companies. Each of those companies is being scanned, probed, and attacked every day, with the sole goal of finding a connection to your company – through the trusted connection in place, thereby avoiding significant scrutiny.”

Today, he continues, “This is possible because the cyber attacker has embraced the AI advantage of enumerating a target company’s trusted connections and conducting reconnaissance on them in near real time, every day. The adversary AI can find the exact list of companies to attack each day, and they can know exactly which attack methods are most likely to succeed.”
Aimee Cardwell, CISO in residence at Transcend
Two other areas are worthy of note: acquisitions, and the rapid adoption and deployment of AI. “Acquisitions are notoriously hard to secure. As soon as one is announced, you are a target, and companies rarely have the discipline to consolidate duplicate systems quickly. They are looking to demonstrate financial synergies from the acquisition first,” comments Aimee Cardwell, CISO in residence at Transcend.

She adds, “It will surprise no one that in 2026, the attack surface will grow primarily around AI. Companies are trying to adopt AI tools without understanding where their data goes or how models are being trained. Each new AI app becomes another entry point, and worse, most organizations have zero visibility into how many employees are uploading data to ChatGPT or similar consumer tools.”

Raj Mallempati, CEO and co-founder at BlueFlag Security, expands on this. “The attack surface is exploding enterprise wide as every department adopts AI agents. Marketing uses AI for content generation, sales for lead qualification, operations for process automation.”

Alex Polyakov, co-founder and CTO at Adversa AI, agrees. “Yes, the attack surface is exploding – and AI agents are the reason. They will live everywhere: on workstations, in agentic browsers, in SaaS apps, and eventually as enterprise-wide autonomous agentic AI systems. In this world, the concept of a perimeter disappears entirely.”

Pascal Geenens, VP of cyber threat intelligence at Radware, continues, “Next year, enterprises will face a new kind of visibility crisis as AI agents start forming their own network of connections. These autonomous integrations will create an agentic ecosystem, a hidden layer of APIs, plug-ins, and context providers operating beyond traditional controls… The agentic services ecosystem is a rapidly expanding constellation of third-party modules, plug-ins, and AI service connectors. It will mirror the software supply chain crisis that emerged with open-source dependency attacks.”

But the critical expansion – and highest risk, says Mallempati, “is in the SDLC, where AI agents aren’t just processing information but actively creating and deploying code. By 2026, we predict autonomous agents will touch 60-70% of enterprise code. Unlike a compromised chatbot that might leak customer data, a compromised development AI agent can inject backdoors into your entire product, modify infrastructure, or expose your full IP. The development environment is where AI agents have the most privileged access and the least governance.”

While the AI expansion of the external threat surface will be extensive, not everyone thinks it is necessarily uncontrollable. “If the risk is managed appropriately with mature processes and controls – including through identity management, just-in-time access and automated anomaly detection – there’s no reason to conflate a larger attack surface with increased risk,” says McGrail.
The shadowy surface
Shadows are a part of business. Employees will readily use new services without corporate oversight or knowledge if they feel it makes their work more efficient. The term most often refers to the activities of individuals but can also involve the practice of internal teams or the company itself.

“Shadows are a major problem because they bypass governance, logging and patching,” points out Curran. “All shadow platforms are a risk because of their unknown, unmaintained internet exposure,” adds Bekkar. The latest addition to the menagerie is shadow AI.

“Shadow AI exists across the enterprise, but shadow AI in development is uniquely dangerous. While shadow AI in marketing might generate unapproved content, shadow AI in development can access production systems, leak source code, or introduce vulnerabilities that affect millions of users,” warns Mallempati.

“Shadow AI is a bigger risk than shadow IT ever was because it involves sending sensitive data to opaque external APIs with no controls. Someone exports user data to Excel, then uploads it to ChatGPT for analysis. Suddenly you have regulated data in a third-party system, and when a breach happens, you can’t determine scope because you never knew the data was there,” warns Cardwell.

“Shadow AI will likely become a larger issue given the proliferation of AI platforms and the time it will take organizations to get effective AI governance in place. Since virtually all AI systems leak data, whatever employees load into them is at risk,” adds Tyson.

“These risks are blind spots of potential security vulnerabilities – they can lead to data breaches through improper handling of sensitive information by unapproved AI models, potentially exposing intellectual property or confidential data,” warns Melissa Ruzzi, director of AI at AppOmni. “Additionally, unauthorized AI usage where company information was shared can be exploited to craft sophisticated phishing attacks or even generate disinformation campaigns,” she continues.

“Ultimately, this causes a skewed view of risk, putting compliance and business resilience in jeopardy,” adds McGrail.
MCP Risks
Natalie Walker, VP at NCC Group
Shadow Model Context Protocol (MCP) servers in development environments are particularly insidious. “Developers are spinning up unauthorized MCP servers that connect their IDEs directly to AI models, granting these connections access to entire codebases, credentials, and infrastructure. We’re seeing developers grant AI agents broad permissions ‘temporarily’ for debugging that never get revoked,” says Mallempati.

“Shadow MCPs are a serious problem. Unmonitored or unauthorized MCP servers often emerge as developers experiment with AI agents – they create blind spots where autonomous systems can write, modify and deploy code without security oversight,” warns Shahar Man, co-founder and CEO at Backslash Security.

“MCP can bring even more complex challenges, exposing sensitive data, causing unauthorized automation, and escalating privileges without oversight,” adds Natalie Walker, VP at NCC Group.
Shining a light into the shadows
Managing shadows requires two things, suggests Cardwell. “First, automated discovery – manual surveys don’t work because people either don’t realize what they’re doing is risky, or they’re not incentivized to tell you. You need tools that scan network traffic and API calls to catch unseen AI usage.”

Second, she continues, “Provide better alternatives. When I find shadow AI, I start by asking what gap the users were trying to fill. Then either give them an approved tool that does the same thing, or work with them to bring their system into compliance. Banning tools without offering alternatives just drives behavior further underground. People are trying to do the right thing – how can we enable them to do that safely?”

McGrail adds, “There’s a fine balance between the shadow risk and the risk of not allowing a level of business agility.”
Attacks against the external attack surface
In 2026, “Attackers will leverage context poisoning by embedding malicious behavioral patterns or manipulative datasets into AI service configurations that persist across deployments. This will trigger AI-native supply chain breaches, where enterprises unknowingly integrate compromised agentic services that manipulate autonomous decision chains, exfiltrate sensitive information, or subtly bias business logic,” warns Geenens.

Organizations have moved critical business processes to SaaS applications in search of agility, scalability and efficiency. In many cases, appropriate security controls have not followed. “Attackers understand this and are increasingly taking advantage of the opportunity by breaching organizational SaaS tenants. They will continue exploiting this shift using techniques such as phishing, credential stuffing/spraying, session hijacking, and token theft to gain unauthorized access to identity providers and SaaS environments,” says Brian Soby, CTO and co-founder at AppOmni.

He adds, “The widespread use of SaaS also introduces risks from misconfigurations and overly permissive access, which attackers will continue to exploit for lateral movement and data theft.”

2026 will also mark the moment when zero-click attacks transcend the human layer altogether. “We’ll see the rise of AI-to-AI attacks, in which malicious autonomous agents target legitimate corporate AI systems, exploiting APIs, model context protocols and SDK integrations,” says Rob Juncker, CPO at Mimecast. “The result is an attack surface that multiplies exponentially, often without a single alert or human noticing.”

Shahar Man adds, “Next year, we’ll see the first large-scale breach originating from an MCP. A backdoor or supply chain poisoning attack will quietly embed malicious code into enterprise environments, spreading through AI-driven development workflows before anyone detects it. When this breach comes to light, it will expose how deeply enterprises have trusted these agents without sufficient oversight.”
IPv6 is another area likely to be attacked in 2026 – adoption is advancing faster than the visibility tooling required to secure it. Conner Lines, CTO at SixMap, warns, “In 2026 some of the most severe breaches will originate from assets that exist only in the IPv6 dimension of enterprise infrastructure – services brought online for modernization, compliance, or cost reasons, but never fully integrated into external attack-surface management.”

He adds, “Any visibility stack that fails to treat IPv6 as a first-class external exposure domain will be operating blind where attackers already have line of sight.”
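One way to picture this IPv6 blind spot: compare the addresses DNS advertises against the addresses an external scan actually covered. The sketch below is a simplified illustration under assumed inputs, not SixMap’s approach; `dns_records` and `scanned` are hypothetical data structures.

```python
import ipaddress
import socket

def resolve_all(host):
    """Collect every A and AAAA address DNS returns for a hostname."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return set()
    return {info[4][0] for info in infos}

def ipv6_blind_spots(dns_records, scanned):
    """dns_records: {host: set of address strings}; scanned: the addresses
    your external scans actually cover. Returns the IPv6 addresses DNS
    advertises but the scan inventory misses."""
    gaps = {}
    for host, addrs in dns_records.items():
        missed = {a for a in addrs
                  if ipaddress.ip_address(a).version == 6 and a not in scanned}
        if missed:
            gaps[host] = missed
    return gaps
```

A host appearing in the output is exactly the case Lines describes: reachable over IPv6, invisible to an IPv4-only scanning stack.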
The OSS supply chain should also be considered part of the external attack surface since the initial attack is against repositories outside of security’s purview.

“Adversaries are already playing the long game, contributing legitimate code to open-source software projects, building trust within developer communities and waiting for the right moment to strike,” warns Keith McCammon, co-founder and Chief Security Officer at Red Canary (acquired by Zscaler). “The goal won’t be a single breach, but systemic leverage. One compromise in a widely used dependency could ripple across thousands of organizations overnight.”
Keith McCammon, co-founder and Chief Security Officer at Red Canary
Instead of spraying exploits across thousands of targets, adversaries will compromise a single trusted dependency to reach many. With most open-source projects maintained by small teams or individual developers, often without security oversight, the attack surface has never been more exposed – or more tempting.

He adds that trust becomes the most exploited vulnerability in 2026. “Organizations must verify not just who accesses their systems, but what code they run. Knowing the origin, integrity, and build process of every component will become a baseline requirement, because in 2026, trust becomes the exploited vulnerability.”

Martin Reynolds, field CTO at Harness, agrees with this assessment. “Many enterprises will say they have learned supply chain security lessons after 2023’s SolarWinds breach – but that doesn’t mean their AI has. With AI expanding software supply chain volume and complexity, similar incidents become more likely and severe, as a single compromised component could cascade across thousands of enterprises.”

In 2026, he adds, “scalable supply chain security will become non-negotiable. Software composition analysis must scan every dependency, SBOMs must be maintained in real time, and remediation needs to be automated.”
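The SBOM-driven checks Reynolds describes reduce, at their simplest, to walking the component list of a CycloneDX-style SBOM and flagging anything on a known-compromised list. This is a toy sketch, not a substitute for a real SCA pipeline; the event-stream entry refers to the well-known backdoored 2018 release.

```python
import json

def flag_components(sbom_json: str, known_bad: set) -> list:
    """Walk a CycloneDX-style SBOM and return (name, version) pairs that
    appear on a known-compromised list. Real SCA also verifies hashes and
    consults vulnerability feeds; this only shows the shape of the check."""
    sbom = json.loads(sbom_json)
    hits = []
    for comp in sbom.get("components", []):
        key = (comp.get("name"), comp.get("version"))
        if key in known_bad:
            hits.append(key)
    return hits

sample = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "left-pad", "version": "1.3.0"},
        {"name": "event-stream", "version": "3.3.6"},
    ],
})
print(flag_components(sample, {("event-stream", "3.3.6")}))
```

“Maintained in real time” means regenerating the SBOM on every build and re-running this comparison whenever the bad-list changes, rather than auditing annually.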
Managing the external attack surface
The overarching belief is that AI will play a pivotal role in securing the external attack surface going forward – whether that is machine learning today or agentic AI tomorrow.

“AI already adds value by processing large amounts of discovery data and highlighting the assets most likely to pose risk. It helps teams focus faster, not by acting on its own, but by turning thousands of potential issues into a few clear priorities,” says Zero Networks’ Boehm. “Full automation, where AI systems can verify ownership and shut down risky exposures, is still a few years away.”

John Bruggeman, a virtual CISO with CBTS, agrees with the use of AI. “AI, in the form of machine learning, can help detect new external assets – like shadow IT – by constantly scanning your network for new servers – like remote desktop servers – or new SaaS applications with your domain name. There are services that do this now, but there are often false positives – detected assets that the service thinks are yours, but are not. ML can help weed out the false positives and make external discovery of new assets more accurate, so that less manual review is required.”

He also suggests other possible approaches. “One way to detect shadow IT present in your environment is to monitor corporate email. If departments are using shadow IT, odds are they are using their corporate email account. Another way is to monitor network traffic at the firewall. AI can be used to sift through your network traffic and find SaaS applications that IT doesn’t know about. Once you know how big the problem is, you can start to address it.”
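Bruggeman’s firewall-monitoring suggestion can be sketched as a small log-reduction step: collapse observed DNS queries to registrable domains, then subtract the sanctioned SaaS list. The allowlist and the naive eTLD+1 logic here are illustrative assumptions; a production tool would use the Public Suffix List and real traffic feeds.

```python
from collections import Counter

# Hypothetical allowlist of sanctioned SaaS providers.
SANCTIONED = {"salesforce.com", "office365.com", "slack.com"}

def registrable(domain: str) -> str:
    """Naive eTLD+1: keep the last two labels. A real tool would consult
    the Public Suffix List to handle suffixes like .co.uk correctly."""
    parts = domain.lower().rstrip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else domain

def unknown_saas(dns_log_domains):
    """Count queries to registrable domains outside the sanctioned set."""
    counts = Counter(registrable(d) for d in dns_log_domains)
    return {d: n for d, n in counts.items() if d not in SANCTIONED}
```

The output is a ranked shortlist of services to investigate, which is the “know how big the problem is” step he describes.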
Tim Chang, VP of application security product management, cybersecurity and digital identity at Thales, warns, “The attack surface isn’t simply ‘growing’, it’s fragmenting into thousands of dynamic entry points. In this landscape, protecting APIs and applications moves from best practice to existential necessity. By 2026, bot defense will shift from passive detection to active disruption to spot intent, fingerprint behavior, and intercept malicious automation before it ever reaches the application layer.”

He continues, “Organizations will invest heavily in runtime bot analytics, anomaly detection, and AI-against-AI countermeasures as bot-driven fraud, credential abuse, and API exploitation surge. APIs, the convergence point for humans, machines, agents, and devices, will finally receive the scrutiny they have long deserved.”

And concludes: “Companies that elevate API security and harden web applications against AI-powered bots will reduce outages, protect sensitive data, and safeguard customer trust and experience. Those that don’t will find themselves facing an adversary that never sleeps, never slows, and learns from every single attempt.”
NCC Group’s Walker sees the rise of agentic AI coming to EASM. “Unlike traditional AI that primarily responds to commands, agentic AI consists of autonomous agents that can make their own decisions.”

There is an enterprise-wide uptake in agentic AI. It will introduce more automation and require less human intervention. “Emerging autonomous EASM ecosystems will orchestrate discovery, prioritization and patching, complemented by continuous red-teaming and attack simulation,” she says. “But the vast majority of settings will still require human oversight and insight before any real-time remediation.”

Professor Curran supports the use of ML/AI. “It can speed asset discovery, reduce false positives, correlate signals (DNS, certs, telemetry) and predict which exposures are most likely to be exploited. Behavioral models help detect anomalous changes to public-facing assets. AI also helps automate prioritization and generates contextual remediation playbooks, though human validation remains essential where risk decisions are sensitive.”

Barracuda Networks’ Bekkar continues the AI theme. “Defenders need to use AI as the engine of EASM, not as a sidekick. Let it continuously discover internet-facing assets, determine if they belong to the organization, and use pattern-matching to spot look-alike domains. Organizations can leverage AI to remove noise by grouping duplicates and obvious false positives, then ranking what’s left by risk level: how easily the exposed asset could lead to identity or data access.”

He believes the routine work can be automated: “Expire test subdomains, close orphaned buckets, revoke stale tokens, but ensure there is a human in the loop for anything sensitive.”
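The look-alike-domain matching Bekkar mentions is often just edit distance under the hood. The sketch below is a hypothetical minimal version, not any product’s logic: flag newly observed domains whose first label sits within a couple of edits of the brand name.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def lookalikes(candidates, brand="example", max_distance=2):
    """Flag domain labels within a couple of edits of the brand name but
    not exactly matching it, the classic typosquat pattern."""
    flagged = []
    for domain in candidates:
        label = domain.split(".")[0]
        d = levenshtein(label, brand)
        if 0 < d <= max_distance:
            flagged.append((domain, d))
    return sorted(flagged, key=lambda pair: pair[1])
```

Real systems add homoglyph tables and keyboard-adjacency weighting on top, but the ranking idea is the same: closest matches get human review first.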
Sheetal Mehta, head of cyber security at NTT Data, projects beyond ML/AI to agentic AI. “With the introduction of AI and agentic AI, EASM could soon move to continuous monitoring – mapping and inferring connections between IP, supply chains, domains and cloud instances to find shadow IT that is ordinarily missed – or better still, learning patterns to detect unusual activity and act to quickly mitigate and help security teams better prioritize efforts.”

Not everyone is fully sold on AI. “It can help with automated discovery and classification across your environment, but it’s not a silver bullet. It’s most useful for continuously surfacing where sensitive data lives rather than just during annual audits,” says Transcend’s Cardwell.

“The good news is that there are really great tools being developed to reduce this risk. Can we buy and implement these tools as quickly as the threat actors can use AI to find new chinks in our armor? I’m ‘glass half empty’ on that. But I do think this is a place for CISOs to invest in 2026.”
It is important to remember that what is sauce for the defending goose is also sauce for the attacking gander. If defenders can use AI to discover their exposures, so too can, and will, attackers. It will be a race, but the primary advantage for the defender is greater situational context. Attackers and defenders will both find the exposures, but defenders will better understand which critical exposures to prioritize.

iCOUNTER’s Tyson has an additional suggestion designed to counter third-party risk. He suggests widening viewpoints to include the entire business ecosystem, and monitoring every critical organization for active compromise. “This way,” he says, “organizations can understand the risk uniquely related to them from the entirety of their connected partners.”

If you want to monitor the external attack surface, you need to include your connected partners, he adds. “In today’s world, cybercriminals have simply expanded the attack surface to third and fourth parties, and ecosystem compromise monitoring is the ultimate tool in redefining the new expanded attack surface.”
Final thoughts
“External attack surface management will remain a critical, but increasingly complex, challenge in cyber security in the year ahead, largely because organizations have lost control of their environments,” warns Simon Phillips, CTO of Engineering at CybaVerse.

Control has been lost because business pressure and the need for agility to stay ahead of the competition result in new technology being adopted faster than security can apply governance. This includes the rapid adoption of SaaS solutions, the personal use of shadow IT, and the unsanctioned rise of shadow AI through individuals and developers running undisclosed MCP servers.

AI is the double-edged sword in this picture. It will assist companies in discovering their external attack surface, but it will also assist bad actors in locating and attacking the weak points.

The likelihood for 2026 is that the battle between attackers and defenders will increase in size, complexity and velocity – with no sign of any decrease.
Related: The Wild West of Agentic AI – An Attack Surface CISOs Can’t Afford to Ignore

Related: CSA Unveils SaaS Security Controls Framework to Ease Complexity

Related: The Shadow AI Surge: Study Finds 50% of Employees Use Unapproved AI Tools
