The security economy revolves around the assumption that security operations centres (SOCs) will save organisations. Spend enough, outsource enough, automate enough, and you’ll be fine. Except you’re not. Breaches keep happening, and more often than not they slip past the SOC in plain sight.
In my previous piece, “7 reasons the SOC is in crisis — and 5 steps to fix it”, I laid out the structural challenges holding SOCs back.
This time, I want to focus on the heart of the issue. The problem isn’t just alert fatigue or the technology stack. It’s how we think, design, and engineer for resilience.
The gap isn’t just alert fatigue: SOCs must evolve
I’ll give you a trivial example. I once waited 25 minutes in a five-star hotel just to get an orange juice and soda. Not because the staff weren’t capable, but because the process was broken. Switching gift cards, missing limes, wrong glasses. All that friction meant I, the customer, had to drive the outcome.
That’s what it’s like in too many SOCs. With endless tools, dashboards, and alerts, the process breaks down. The result is missed signs, poor correlation, and defenders forced to drive outcomes manually.
The gap isn’t just a matter of alert fatigue. It’s the entire ecosystem around it.
SOCs are reactive by design: detect and respond. But staying only in that lane keeps them permanently behind.
What’s needed is a higher design threshold. Let’s call it “Swiss engineering”. The Swiss railways don’t work because they’re flashy. They work because they’re over-specified, tested, and rehearsed.
Most SOCs aren’t. Whether outsourced, insourced, or co-sourced, they’re chronically under-specified.
We throw more tools at detection problems instead of engineering robust foundations. We overlay complexity rather than building elegant simplicity.
Complexity is the enemy of resilience
I recently had a fascinating conversation with a friend in Cambridge. We were debating what’s wrong with cybersecurity, and he said something that stuck with me: “The answer is simple if it’s done very well.”
It echoes a point I explored in a collaborative essay with Abbas Kudrati: Cyber Security Needs a Simpler, Smarter Mindset.
Our industry keeps layering on more tools and processes: patching over weaknesses instead of fixing root causes like misconfigurations, and buying another shiny tool instead of rehearsing our response cadence.
Hackers know this. They don’t work like corporations. They don’t clock off for holidays or take bank holiday weekends. They probe during handovers, exploit downtime, and use what’s already in the environment. They live off the land, slipping through blind spots because they know most SOCs aren’t engineered to catch them.
You cannot outsource thinking
The first sign of compromise is rarely loud. It’s subtle. That’s why context and intuition matter so much in security. Yet too often we expect juniors to spot what even seasoned analysts struggle with. Then we tell ourselves AI security will fill the gap.
Don’t get me wrong: AI has a place. But we’re using it as a proxy for human thinking rather than as an enhancement of human capability. That’s backwards. AI works well with structured data, but it needs humans to provide context, priority, and that sense of “something’s not right here”.
You cannot outsource thinking. You can collaborate to drive outcomes, but the fundamental understanding of your environment must be internal.
Vendors can provide tools and perspective, but they cannot tell you what matters most in your business. That knowledge comes from rehearsing scenarios, mapping your systems, and agreeing on risk tolerances. Which means the hard part is unavoidable: think it through carefully.
Too often, organisations look to a third party for ready-made answers. What they need instead is to own the process of defining priorities and thresholds, then bring in partners to help validate and strengthen that work.
Where to go from here
The solution isn’t more automation or better tools. It’s careful thinking, threat modelling, and understanding your environment. It’s moving from reactive detection to anticipatory defence. It’s building behavioural analytics that give us context, not just alerts.
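To make the “context, not just alerts” point concrete, here is a minimal sketch of what a behavioural baseline looks like in practice. It is a hypothetical illustration, not any vendor’s API: the event fields, the five-sample minimum, and the three-sigma threshold are all assumptions chosen for clarity. The idea is simply that an alert should carry the account’s own history alongside the verdict, so an analyst can judge it in seconds.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical sketch: per-account baseline of login hours.
# Field names and thresholds are illustrative assumptions.
history = defaultdict(list)  # account -> observed login hours

def record_login(account: str, hour: int) -> None:
    """Feed historical logins to build the behavioural baseline."""
    history[account].append(hour)

def assess_login(account: str, hour: int) -> dict:
    """Return an alert enriched with baseline context, not a bare flag."""
    hours = history[account]
    if len(hours) < 5:  # too little history to judge this account
        return {"account": account, "verdict": "no-baseline"}
    mu = mean(hours)
    sigma = pstdev(hours) or 1.0  # avoid divide-by-zero on flat baselines
    deviation = abs(hour - mu) / sigma
    return {
        "account": account,
        "observed_hour": hour,
        "baseline_mean_hour": round(mu, 1),
        "deviation_sigma": round(deviation, 1),
        "verdict": "anomalous" if deviation > 3 else "normal",
    }
```

A service account that always logs in between 09:00 and 11:00 suddenly appearing at 03:00 would come back flagged as anomalous, together with its baseline mean and the size of the deviation, which is exactly the context a triage analyst needs.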
And we need urgency. I keep coming back to aviation because it gets this right. When a pilot loses an engine, they don’t pull out a manual. They have rehearsed it. Straight away, they follow a flow that stabilises the situation: aviate, navigate, communicate. SOCs need the same discipline: speed, intuition, and engineering that doesn’t crumble under pressure.
Having an incident in itself is not the danger. The danger is response failure: not seeing it, not knowing what to do. That’s where things go wrong very fast.
The answer is to build muscle memory. Just as pilots drill emergency flows, SOC teams must rehearse until containment and escalation are instinctive. That means establishing clear authority for action, stress-testing decision-making with live-fire exercises, and planning around the reality that attackers exploit handovers, holidays, and downtime.
The truth is that most breaches don’t need to happen. They occur because we’ve normalized complexity, under-specified our defences, and forgotten that elegance comes from doing the simple things well.
The SOC is supposed to be the parachute. Right now, too many would fail the moment we pull the ripcord. The question is: are you confident yours will open when it matters?