Security teams live in a world of numbers. Dashboards display counts of blocked attacks, phishing clicks, vulnerabilities discovered, patches applied, alerts triaged, and incidents closed. Over the past decade, the cybersecurity industry has become adept at measuring activity with increasing precision.
Experts say what remains far less consistent is whether those measurements help boards govern risk. For directors and senior executives, the purpose of security metrics reporting is not to catalog effort. It is to understand exposure, trajectory, and consequence.
Decision-makers want to know whether risk is increasing or decreasing, whether controls are effective, and whether the organization can limit damage when prevention fails. Metrics are therefore useful when they clarify those questions.
“Time is really the universal metric because everyone can understand time,” Richard Bejtlich, strategist and author in residence at Corelight, tells CSO. “How fast do we detect problems, and how fast do we contain them? Dwell time, containment time. That’s the whole game for me.”
Organizations cannot prevent every intrusion, Bejtlich argues, but they can measure how quickly they recognize and contain one. That measure translates across technical and nontechnical audiences because it speaks directly to impact. Detection and containment speed function as proxies for business loss avoided.
Financial exposure vs. operational clarity
Mike Hamilton, CTO of Pisces International, frames board-level security reporting strictly through a fiduciary lens. In his view, metrics matter only insofar as they map directly to financial consequence.
“First of all, the board only cares about money,” Hamilton tells CSO. “They don’t care about scary Russian cyber buffer overflow stuff. They care about money.”
“While the CISO may be interested in metrics like mean time to detect, mean time to respond, things like that, boards are charged with protecting enterprise value. Detection speed, vulnerability management, and phishing resilience matter more to them because they limit financial loss, regulatory exposure, and operational disruption,” he says. “What they really want to know is how we are lowering the likelihood of those bad outcomes that affect the business.”
Bejtlich, on the other hand, argues that boards can engage with a wide range of operationally grounded, governance-relevant metrics, including the number of intrusions over a given period. Those figures become meaningful when paired with consequence. “Was it a breach, or was it simply unauthorized access with no consequence?” Bejtlich says.
“I’ve just never had that experience where I felt like boards couldn’t handle anything that I was trying to describe to them,” he adds. “The problem becomes one of, if you’re speaking to them in technical terms for which they have no background, that’s not really going to help.”
The seduction of counting
Even when metrics are not too technical and align with business impact, another problem emerges: What gets counted can crowd out what matters.
Wendy Nather, a longtime CISO who is now an advisor at EPSD, cautions against equating measurement with understanding. “When you are reporting to the board, there are some things you just cannot count that you have to report anyway,” she tells CSO.
She points to incidents, near misses, and changes in assumptions as examples. “Anything that changes your assumptions about how you’re managing your security program, you should be bringing those to the board, even if you can’t count them,” Nather says.
Regular metrics can create a rhythm of predictability, and that predictability could lull board members into a false sense of security. “Metrics are very seductive,” she says. “They lead us toward things that can be counted, that happen on a regular basis.” The result may be a steady flow of data that obscures structural risk or emerging weaknesses, Nather warns.
Metrics also influence behavior across the organization. In phishing programs, Nather favors measures that reinforce reporting rather than punish error. “You want to incentivize the reporting, and you want to praise people for doing it,” Nather says, emphasizing that what boards choose to measure ultimately shapes how the organization behaves.
George Tsantes, partner at business advisory firm Newport, highlights the burden of proving a security program’s effectiveness. “I think it’s shocking when I talk to different boards or different companies and discover how much time they spend proving themselves instead of actually doing things,” he tells CSO.
This dynamic is especially pronounced in regulated environments, where assurance work consumes resources that might otherwise be directed toward risk reduction. Regulatory scrutiny can also reorder priorities. “Regulators may focus on an item that was 20th on your list, but if they write you up, now it becomes No. 1,” Tsantes says. Boards, he argues, need visibility into those tradeoffs. A mature program reduces the proving burden wherever possible so that security effort is directed toward reducing risk rather than generating documentation.
How AI is stress testing board-level cyber metrics
Despite reshaping many aspects of cybersecurity operations, the rapid adoption of artificial intelligence has not yet produced a distinct set of board-level security metrics. Instead, AI is exposing long-standing weaknesses in how organizations translate security activity into risk signals directors can act on.
Boards are not yet asking for AI-specific dashboards, experts say. What they are asking, often implicitly, is whether AI is increasing exposure, weakening controls, or altering the organization’s ability to limit damage when things go wrong.
“I don’t think we have any output-based metrics yet,” says Corelight’s Bejtlich. Before organizations can measure AI risk, he argues, they must first establish basic governance signals: where AI is in use, how widely it is deployed, and whether it is expanding the attack surface or reducing operational burden.
That visibility gap is already a concern for many security leaders. “When I talk to CISOs, their biggest concern is that they can’t always see what AI is being used inside of their enterprise,” says EPSD’s Nather. Without that awareness, boards are left with activity metrics that obscure the more fundamental question of whether the organization understands the risks it has introduced.
For Bernard Brantley, CISO at Corelight, AI does not warrant a new measurement framework so much as stricter discipline around existing ones. “I don’t think that they should differ from your standard metrics,” he tells CSO. In practice, AI amplifies familiar security challenges — initial access, lateral movement, and data exfiltration — by increasing their scale and speed.
That amplification changes what board-level metrics must signal. Expanded AI usage can increase coverage requirements, stretching teams and controls. At the same time, AI-driven automation can compress response timelines.
“We were able to reduce MTTR [mean time to remediation] for this portion of our coverage by 60% because we threw an agent at it,” Brantley says. The governance signal for boards is not the presence of AI itself, but how it shifts risk concentration, response capacity, and resource tradeoffs.
For Newport’s Tsantes, AI oversight is a test of enforcement rather than measurement. “What the board needs to know is that there are good uses of AI and bad uses of AI,” he says. But visibility without consequence is not governance. “Even knowing where the AI agents might be within your assets is difficult,” Tsantes adds. “If you can’t fire somebody for using the wrong AI, then you really don’t have any teeth in that policy.”