CISOs advised to rethink vulnerability management as exploits sharply rise

Enterprise attack surfaces continue to expand rapidly, with more than 20,000 new vulnerabilities disclosed in the first half of 2025, straining already hard-pressed security teams.

Nearly 35% (6,992) of these vulnerabilities have publicly available exploit code, according to the Global Threat Intelligence Index study by threat intel firm Flashpoint.

Since the end of February 2025 alone, the volume of disclosed vulnerabilities has more than tripled and the amount of publicly available exploit code has more than doubled.

These increases make it no longer feasible for most organizations to triage, remediate, or mitigate every vulnerability, Flashpoint argues, suggesting enterprises need to apply a risk-based patching framework. But some experts quizzed by CSO went further — arguing a complete operational overhaul of vulnerability management practices is needed.

Risk-based patching: A rising necessity

Josh Lefkowitz, CEO of Flashpoint, says that surges in disclosed vulnerabilities and publicly available exploit code reflect a shift in the threat landscape.

“Attackers are operationalizing exploits as soon as vulnerabilities surface, often within hours, and well before defenders can access reliable data from public sources,” Lefkowitz tells CSO.

The widening gap between exposure and response makes it impractical for security teams to rely on traditional approaches. The countermeasure is not “patch everything faster,” but “patch smarter” by taking advantage of security intelligence, according to Lefkowitz.

Enterprises should evolve beyond reactive patch cycles and embrace risk-based, intelligence-led vulnerability remediation. “That means prioritizing vulnerabilities that are remotely exploitable, actively exploited in the wild, or tied to active adversary campaigns while factoring in business context and likely attacker behaviors,” Lefkowitz says.

Focus on exploitable vulnerabilities

Third-party security experts agree that enterprises need to apply a risk-based patching framework.

“Organizations that try to patch everything are fighting an impossible battle,” says IEEE senior member Shaila Rana. “But the silver lining here is that this shift is actually forcing smarter and more strategic approaches to emerge.”

Rana adds: “This pressure is creating better risk-based frameworks that help teams focus their limited resources on prioritized areas, or what matters most.”

Hüseyin Can Yüceel, security research lead at security validation company Picus Security, says that although the growing volume of vulnerabilities disclosed may be daunting, only some will affect any particular enterprise.

“You may not even own the product with a vulnerability or already have some security mitigations in place to prevent exploitation,” Yüceel explains. “The most important thing is deciding what’s relevant and important to you, which is why prioritization based on context is important.”

Yüceel adds: “A risk-based approach helps organizations focus on the threats that will most likely affect their infrastructure and operations. This means organizations should prioritize vulnerabilities that can be considered exploitable, while de-prioritizing vulnerabilities that can be effectively mitigated or defended against, even if their CVSS [Common Vulnerability Scoring System] score is rated critical.”
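
In practice, that kind of context-driven triage can be expressed as a scoring pass over an asset-aware inventory. The following Python sketch is illustrative only: the field names, weights, and CVE IDs are invented rather than drawn from Flashpoint, Picus, or any other vendor's model, but it captures the logic Yüceel describes, with active exploitation and exposure outranking raw CVSS severity and already-mitigated findings dropping to the bottom of the queue.

```python
# Illustrative triage sketch: field names, weights, and CVE IDs are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float              # raw CVSS base score (0-10)
    exploit_public: bool     # public exploit code exists
    exploited_in_wild: bool  # tied to active adversary campaigns
    internet_facing: bool    # affected asset is reachable from the internet
    mitigated: bool          # compensating control already in place

def triage_score(f: Finding) -> float:
    """Rank by exploitability and exposure rather than severity alone."""
    if f.mitigated:
        return 0.0                    # de-prioritize even a critical CVSS score
    score = f.cvss                    # start from severity...
    if f.exploited_in_wild:
        score += 10.0                 # ...but active exploitation dominates
    if f.exploit_public:
        score += 5.0
    if f.internet_facing:
        score += 3.0
    return score

findings = [
    Finding("CVE-0000-0001", 9.8, False, False, False, True),  # critical, but mitigated
    Finding("CVE-0000-0002", 7.5, True, True, True, False),    # exploited and exposed
]
for f in sorted(findings, key=triage_score, reverse=True):
    print(f.cve_id, triage_score(f))
```

Run against those two sample findings, the lower-severity but actively exploited CVE ranks first, which is exactly the point of the approach.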

Relying on CVEs alone is becoming ‘untenable’

Security teams relying heavily on public sources of vulnerability intelligence, such as the Common Vulnerabilities and Exposures (CVE) and the National Vulnerability Database (NVD), are at a severe disadvantage, Flashpoint warns. The average latency between CVE publication and NVD enrichment now spans into weeks and months — creating critical intelligence gaps.

Instability in CVE program funding earlier this year adds further doubt.

“Relying solely on CVE or the National Vulnerability Database has become untenable,” Flashpoint’s Lefkowitz says. “The delays, inconsistencies, and persistent backlog mean that critical intelligence often arrives after attackers are already active, leaving defenders blind to high-risk exposures.”

Tyler Reguly, senior manager of security R&D at offensive and defensive security firm Fortra, dismisses as overblown concerns that relying on public sources of vulnerability intelligence puts organizations at a “severe disadvantage.” Vendor reports, exploit databases, and the CISA Known Exploited Vulnerabilities (KEV) list all offer valuable sources of intelligence, Reguly says.

“The reality is that public sources of intelligence are critical to managing your vulnerabilities,” Reguly argues. “That’s not to say that proprietary data isn’t beneficial, but there’s a lot of data out there available for anyone to gather.”
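
One of the public sources Reguly points to, the CISA KEV catalog, is published as machine-readable JSON, so cross-referencing it against an internal CVE inventory takes only a few lines. The sketch below assumes the feed URL and JSON field names ("vulnerabilities", "cveID") match CISA's currently published schema; confirm both before depending on it.

```python
# Sketch: flag internal findings that appear in the CISA KEV catalog.
# The feed URL and field names are assumptions based on CISA's published feed.
import json
import urllib.request

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev_cves(url: str = KEV_URL) -> set[str]:
    with urllib.request.urlopen(url) as resp:
        catalog = json.load(resp)
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}

# Hypothetical internal inventory, e.g. exported from a vulnerability scanner.
inventory = {"CVE-0000-0001", "CVE-0000-0002"}

known_exploited = inventory & load_kev_cves()
print(f"{len(known_exploited)} of {len(inventory)} findings are in the KEV catalog")
```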

Rana argues that public vulnerability intelligence works best when combined with contextual threat intelligence and business risk assessments.

“Smart organizations are layering CVE data with real-time threat intelligence to create more nuanced and actionable security strategies,” Rana says. Instead of abandoning these trusted sources, effective teams are getting better at using them as part of a broader intelligence picture that helps them stay ahead of the threats that actually matter to their specific environment.

Third-party risk — yet again

Galit Lubetzky Sharon, CEO at application attack surface protection firm Wing Security, says that the surge in vulnerabilities and exploit code is only part of the problem.

“Enterprises increasingly depend on third-party SaaS vendors that dictate the patching cycle — when those vendors patch slowly or fail to disclose, customers inherit the risk blindly,” Sharon says.

AI is amplifying this threat: Attackers weaponize exploits at unprecedented speed, while “SaaS vendors race to release AI features often without mature security controls,” according to Sharon.

“The real challenge isn’t just keeping pace with patches but gaining visibility into third-party risk — making continuous SaaS, AI, and general third-party security essential,” Sharon concludes.

AI simplifying exploit development

Rami Sass, CEO at application security firm Mend, says the time between vulnerability discovery and exploitation has shrunk from weeks to days if not hours over the past two years, partly because of the increased abuse of AI technologies by attackers.

There are three main drivers behind the increasingly turbulent threat landscape, according to Sass:

Better tools to discover vulnerabilities, especially in legacy code

A hungry and growing commercial market for exploits

AI tools that make producing exploits easier

“Attackers are now using AI to move faster than defenders,” says Federico Simonetti, CTO at zero knowledge networking firm Xiid. “AI is highly effective at finding vulnerabilities and crafting exploits, while at the same time, it’s horribly ineffective at applying any significant level of protection.”

Exposure management

Peled Eldan, head of research at cloud security firm XM Cyber, believes the surge of vulnerabilities and exploits is a “byproduct of sprawling cloud estates, rapid migrations, deployment mishaps, misconfigurations, and more.”

“While the NVD is still a foundational pillar of cybersecurity, SOC teams need far more than CVE IDs and CVSS scores to meaningfully reduce risk,” Eldan says. “Even if NVD enrichment speeds up, it won’t fix the bigger problem: understanding how vulnerabilities connect with other exposures to create exploitable attack paths.”

This dynamic is fueling vulnerability management’s evolution into exposure management, which treats identity issues and misconfigurations as seriously as code flaws.

“When paired with attack surface tools such as breach simulations, pen tests, and red teaming, companies can build attack graphs that visualize how an adversary could reach crown-jewel assets,” Eldan explains. “Attack graphs are often used in conjunction with digital twins to simulate and validate that a given remediation strategy eliminates the exposure.”
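
A minimal way to picture the attack-graph idea is a directed graph whose nodes are exposures (footholds, misconfigurations, over-privileged identities) and whose edges are the steps an attacker could chain between them. The Python sketch below uses the networkx library and invented node names purely to illustrate the concept, not to reflect how XM Cyber or any other vendor builds its graphs.

```python
# Toy attack graph: nodes and edges are invented for illustration only.
import networkx as nx

g = nx.DiGraph()
g.add_edge("phished workstation", "unpatched file server")         # exploit of a known CVE
g.add_edge("unpatched file server", "over-privileged svc account") # credential theft
g.add_edge("over-privileged svc account", "crown-jewel database")  # identity misuse, no CVE involved
g.add_edge("phished workstation", "misconfigured storage bucket")  # direct cloud exposure

crown_jewel = "crown-jewel database"
for path in nx.all_simple_paths(g, "phished workstation", crown_jewel):
    print(" -> ".join(path))

# Validate a remediation: cutting one link in the chain should make the target unreachable.
g.remove_edge("unpatched file server", "over-privileged svc account")
print("still reachable:", nx.has_path(g, "phished workstation", crown_jewel))
```

The same structure shows why exposure management is broader than patching: the edge that actually reaches the crown jewels here is an identity issue, not a code flaw.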

Navigating a minefield with a pogo stick

Ivan Milenkovic, vice president of risk technology for EMEA at cloud security vendor Qualys, says the idea that any organization could, or should, attempt to patch every vulnerability was “always a fallacy.”

“The explosion in disclosures and the glaring unreliability of public feeds like the NVD haven’t created a new problem; they’ve simply exposed the intellectual bankruptcy of the traditional approach,” Milenkovic tells CSO. “Relying on CVSS scores and chasing CVEs is like trying to navigate a minefield with a pogo stick.”

Rather than relying on a risk-based patching framework, enterprises need a complete operational overhaul based on a continuous threat exposure management (CTEM) program, Milenkovic advises.

Frameworks like Gartner’s CTEM provide security operations center teams with a road map on how to mature their processes to prioritize exposures based on exploitability and business impact — not just raw severity scores.

“The fundamental question isn’t, ‘Is this vulnerability severe?’ but rather, ‘What is the value at risk to the business, and what is the most capital-efficient way to reduce it?’” Milenkovic, a former CISO, explains.

The objective of what Milenkovic describes as a “risk operations center” approach, built on the CTEM concept, is to align security with genuine business outcomes.

“Your goal is to use a money-minded framework to surgically remediate the 2% of vulnerabilities that pose over 90% of the actual, material risk to your organization,” Milenkovic says.
