I’ve been a CISO for two separate companies, know several CISOs personally, and interact with many others through various cybersecurity forums. We all have one thing in common. We can tell you our patching SLA numbers off the top of our heads. Ninety-five percent of criticals closed in 14 days. Eighty-something on highs. The board slide is green. The auditors are satisfied. The client questionnaires come back clean.
Then I ask a different question: what still needs to be done? And the tone shifts from the confident “We’ve got it all covered” to “Wellll… we’ve got some legacy tech debt holding us back.”
What they’re really saying, when someone’s been in the role long enough to stop performing, is usually some version of this: the stuff we closed fast was the stuff that was cheap to close. The stuff that’s still open is the stuff that would require us to re-architect a service, take a critical system offline or fight with a business owner who doesn’t want to hear it. So, we keep closing easy criticals to keep the dashboard green, and the hard problems age quietly in the backlog where no one looks.
This is the part of vulnerability management nobody wants to say out loud: we have built an entire governance industry around measuring the wrong thing. SLAs tell you how disciplined your ticketing process is. They tell you almost nothing about your actual risk.
The compliance trap
I’ve watched this pattern play out across enough programs to be confident it’s not an outlier. An organization commits to a thirty-day SLA for critical vulnerabilities. The vulnerability management team gets measured on that SLA. So, they get very, very good at hitting it — for the vulnerabilities that are easy to hit it on.
What gets closed fast: anything an agent can patch remotely. Anything in a containerized workload that rebuilds nightly. Anything where the vendor has already shipped a clean update and the change advisory board will approve it without debate.
What doesn’t get closed: the legacy ERP module that can’t be patched without breaking three downstream integrations. The embedded system in the warehouse that runs an operating system whose vendor went out of business in 2019. The Windows 2000 server under the desk of a sysadmin who’s been at the company since 1995. The misconfiguration in the core identity provider that, if changed, would lock out a business unit for a day while someone rebuilds their SSO flow. The architectural flaw in the authentication layer that’s technically a CVSS 7 but practically an existential exposure because it sits in front of the crown jewels.
Those don’t move. They get relegated to the Island of Misfit Risks — exception queues, risk-accepted trackers, or the backlog of whatever team has owned the system or function since 2017. And the SLA report stays green because the denominator is dominated by the easy stuff, and the business has “accepted the risk.”
I’ve been the person who had to explain to a board that our SLA compliance was ninety-four percent and our biggest single point of failure was in the six percent. It is not a fun conversation. It is, however, the correct one. And the reason it almost never happens is that the entire reporting apparatus is structured to make it invisible.
SLAs measure discipline, not risk
Here’s the mental model I’ve been pushing with my peers. Think of patching SLAs the way you think of fire drills. Fire drills are necessary. They prove that, on a predictable cadence, your organization can execute a known procedure. No one in charge of a building full of people would claim that a successful fire drill means the building is safe. They would tell you the building is safe when the sprinklers, the structural design, the exits and the materials all hold up to a scenario you didn’t script.
Patching SLAs are fire drills. They prove your program can execute a known procedure on a predictable cadence. They do not tell you whether you’re protected against the scenario you didn’t script — the chained exploit path, the misconfigured trust boundary, the control that looks present in the GRC tool but has been silently failing for eight months, or my favorite, “that control works everywhere except the XYZ business unit.”
When I ask a CISO how much cyber risk they have, they struggle to articulate it. They talk about the attack surface, the number of vulnerabilities, an audit score. Rarely do I hear them say something like, “We have $252M in cyber risk.” What if we as CISOs could articulate how much risk we have in terms of dollars and finally be able to build a business case for solving those hard vulnerabilities, misconfigurations and control breakdowns instead of trying to sell fear, uncertainty and doubt?
That question has an answer, and it’s one the FAIR Institute has been formalizing for years: cyber risk quantification, or CRQ. I’m not here to evangelize a methodology. There are several — some more defensible than others — and the specific choice matters less than the discipline of forcing vulnerabilities, misconfigurations and control gaps into loss-exposure terms rather than severity labels. When I tell a CFO that an unpatched CVSS 9.8 exists on a server, their eyes glaze over. When I tell them we have an estimated twelve-million-dollar annualized loss exposure concentrated in one unremediated architectural flaw, we have a very different conversation — and, in my experience, a very different remediation budget.
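To make that concrete, here is a minimal sketch of the kind of Monte Carlo estimate a FAIR-style CRQ exercise produces. Every input range and figure below is an illustrative assumption, not output from any real tool or assessment:

```python
# Minimal FAIR-style sketch: Monte Carlo estimate of annualized loss
# exposure (ALE) for a single unremediated finding. Every range below
# is an illustrative assumption, not data from a real assessment.
import random

TRIALS = 100_000

def simulate_year(freq_lo, freq_hi, loss_lo, loss_hi):
    """One simulated year. Assumes at most one loss event per year,
    which holds for the sub-1.0 frequencies used here."""
    frequency = random.uniform(freq_lo, freq_hi)  # loss events per year
    if random.random() < frequency:               # did the event happen?
        return random.uniform(loss_lo, loss_hi)   # loss magnitude in USD
    return 0.0

# Assumed inputs for a hypothetical authentication-layer flaw:
# 0.1-0.5 loss events per year, $2M-$40M loss per event.
losses = sorted(simulate_year(0.1, 0.5, 2e6, 40e6) for _ in range(TRIALS))

mean_ale = sum(losses) / TRIALS
p90 = losses[int(TRIALS * 0.90)]
print(f"Mean annualized loss exposure: ${mean_ale / 1e6:,.1f}M")
print(f"90th-percentile annual loss:   ${p90 / 1e6:,.1f}M")
```

The input ranges are deliberately wide, and the output is still a dollar figure with percentiles a CFO can argue with — which, as I’ll come back to, is the entire point.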
Three shifts that actually move the needle
If you’re a security leader trying to pull your program out of SLA theater and into something that reflects real risk, here’s what I’ve seen work.
Treat the SLA as the floor of what’s required, not the ceiling of what’s reported. Continue to meet your contractual and regulatory SLA commitments — they exist for good reasons and customers ask about them. But stop presenting SLA compliance as the headline metric of your vulnerability management program. It’s a hygiene measure. Put it on a hygiene slide. The headline metric should be the trend of your quantified residual risk, broken out by the business services it threatens, as in the sketch below.
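As a sketch of what could feed that headline slide, here’s one way to roll per-finding loss exposure up by business service and quarter. The services, quarters and dollar figures are all hypothetical:

```python
# Hypothetical roll-up of quantified residual risk by business service
# over time. Every finding and figure below is illustrative.
from collections import defaultdict

findings = [
    # (quarter, business_service, annualized_loss_exposure_usd)
    ("2025Q1", "Identity", 12_000_000),
    ("2025Q1", "Payments", 9_500_000),
    ("2025Q2", "Identity", 12_000_000),  # the hard flaw hasn't moved
    ("2025Q2", "Payments", 6_000_000),   # easy patching helped here
]

trend = defaultdict(float)
for quarter, service, exposure in findings:
    trend[(quarter, service)] += exposure

for (quarter, service), total in sorted(trend.items()):
    print(f"{quarter}  {service:<9} ${total / 1e6:,.1f}M residual exposure")
```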
Make your exception process produce better decisions, not just documented ones. In most of the programs I’ve reviewed, the risk acceptance process is a filing exercise with risks living on the register for years. Someone signs a form, the ticket gets closed as “risk accepted,” and the exposure disappears from the SLA report until the exception expires in the GRC tool next year. That’s not risk management. That’s paperwork. A functional exception process requires the business owner to see the quantified loss exposure they’re accepting, agree to revisit it on a defined cadence and — for the biggest exposures — commit to a remediation plan with a funded timeline. Research from Verizon’s 2025 DBIR found that among the edge device vulnerabilities featured in the report, the average time to patch was 209 days while attackers’ median time-to-exploitation was five days — a gap that exists because the fixes live where change is hardest. That same pattern shows up in CISA’s Known Exploited Vulnerabilities catalog, populated largely with CVEs that had patches available long before they were used in the wild. That isn’t a patching problem. That’s an exception-hygiene problem.
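One way to make those requirements stick is to encode them as fields the exception record cannot be filed without. A sketch, with hypothetical names and thresholds:

```python
# Sketch of a risk-acceptance record that can't be filed without the
# decision-relevant fields. Names and thresholds are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptance:
    finding_id: str
    business_owner: str                  # the person accepting the exposure
    annualized_loss_exposure_usd: float  # quantified, not a severity label
    next_review: date                    # a defined revisit cadence
    remediation_plan: str | None         # required above the threshold below

    def validate(self, plan_threshold_usd: float = 5_000_000) -> None:
        if self.next_review <= date.today():
            raise ValueError("next_review must be a future date")
        if (self.annualized_loss_exposure_usd >= plan_threshold_usd
                and not self.remediation_plan):
            raise ValueError("exposures this large need a funded remediation plan")

acceptance = RiskAcceptance(
    finding_id="VULN-legacy-erp-auth",
    business_owner="ERP product owner",
    annualized_loss_exposure_usd=12_000_000,
    next_review=date(2026, 6, 1),
    remediation_plan=None,
)
try:
    acceptance.validate()
except ValueError as err:
    print(f"Exception rejected: {err}")
```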
Fund remediation the way you fund other capital and opex projects. The hard vulnerabilities — the ones that require re-architecting a service, replacing an end-of-life platform or rebuilding an identity flow — aren’t going to get solved out of the quarterly operational budget. They require capital and opex investment, and they compete with every other business investment. Quantified risk is what lets you compete on equal terms. “We need to rebuild the authentication layer because it’s old and unsupported” will lose that fight eight out of 10 times. “We need to rebuild the authentication layer because it represents a $10M cyber risk exposure, and a $1M investment to remediate will reduce our cyber risk by a net of $9M” wins it often enough to matter.
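The arithmetic behind that second pitch is deliberately simple. Here it is with the paragraph’s own hypothetical figures, assuming the rebuild eliminates the exposure entirely:

```python
# The business case from the paragraph above, as arithmetic.
# Figures are the text's hypothetical example, not benchmarks.
current_exposure = 10_000_000  # annualized loss exposure of the auth layer
remediation_cost = 1_000_000   # investment to rebuild it

# Assuming the rebuild removes the exposure entirely:
net_risk_reduction = current_exposure - remediation_cost
print(f"Net cyber-risk reduction: ${net_risk_reduction / 1e6:.0f}M")  # $9M
```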
One final note, because I get this question every time I talk about CRQ with a skeptical audience. Your loss-exposure estimates are going to be imprecise. The inputs are uncertain, the ranges are wide and any honest quantification exercise produces results that could be picked apart by a determined critic. That’s fine. A CFO or an actuary can argue the number is $8M rather than $10M — still fine. At least you have a number people can anchor to, versus the old patching scorecard that said everything was rainbows and unicorns.
A green SLA dashboard tells an executive that their security team is disciplined. A quantified risk picture tells them where their actual exposure is and what it would cost to reduce it. One of those conversations gets the hard stuff fixed. The other one gets it documented and forgotten.
SLAs are the floor. If that’s all you’re standing on, you’re closer to the ground than you think.
This article is published as part of the Foundry Expert Contributor Network.