CISOs grapple with the realities of applying AI to security functions

Applying artificial intelligence to strengthen cybersecurity defenses — partially propelled by industry hype — has quickly risen to the top of the agenda for many enterprise security professionals.

AI offers speed, scalability, and adaptability that traditional security tools alone cannot match in the security operations center, scam email blocking, access management, and many other core security functions. Emerging use cases for AI are beginning to reshape CISOs’ thinking around cybersecurity operations as they seek to harness the technology to better defend their organizations against escalating threats.

CSO took stock of progress on the ground by speaking to several CISOs and security consultants, whose early experiences offer lessons for their peers on the practicalities of implementing AI — and an unvarnished truth about the results to expect.

Turbo boost telemetry

Security AI and automation are beginning to demonstrate significant value, especially in minimizing dwell time and accelerating triage and containment processes, says Myke Lyons, CISO at telemetry and observability pipeline software vendor Cribl.

Their success, however, depends heavily on the prioritization and accuracy of the underlying telemetry, Lyons cautions.

“Within my team, we follow a structured approach to data management: High-priority, time-sensitive telemetry — such as identity, authentication, and key application logs — is directed to high-assurance systems for real-time detection,” Lyons explains. “Meanwhile, less critical data is stored in data lakes to optimize costs while retaining forensic value.”

Lyons continues: “This strategy not only improves the signal-to-noise ratio for analysts but also shortens response times and mitigates the impact of incidents, ultimately leading to tangible cost savings.”
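The tiered routing Lyons describes can be sketched as a simple dispatcher. This is an illustrative sketch only: the category names and sink functions are hypothetical placeholders, not part of any specific product.

```python
# Priority-based telemetry routing, per Lyons' description:
# time-sensitive identity/auth/application logs go to a real-time
# detection pipeline; everything else goes to cheaper data-lake
# storage that retains forensic value. Names are illustrative.

HIGH_PRIORITY = {"identity", "authentication", "application"}

def route_event(event: dict, realtime_sink, datalake_sink):
    """Send high-priority telemetry to real-time detection;
    archive lower-priority data for later forensic use."""
    if event.get("category") in HIGH_PRIORITY:
        realtime_sink(event)
    else:
        datalake_sink(event)

# Usage: two lists stand in for the real-time and data-lake sinks.
realtime, datalake = [], []
route_event({"category": "authentication", "user": "alice"},
            realtime.append, datalake.append)
route_event({"category": "debug", "msg": "cache miss"},
            realtime.append, datalake.append)
```

In practice the routing decision would live in the telemetry pipeline itself rather than application code, but the split is the same: a small, high-assurance stream for detection and a large, cheap store for everything else.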

The agentic edge

The financial services sector is often an early adopter of cutting-edge security technologies.

Erin Rogers, SVP and director of cybersecurity risk and compliance at BOK Financial, tells CSO that AI-based upgrades are helping threat detection and response systems to autonomously analyze threats, make real-time decisions, and adapt responses, significantly improving early detection and mitigation.

While automation has helped reduce breach identification and response times through rule-based security orchestration, automation and response (SOAR) and endpoint detection and response (EDR) tools, agentic AI technologies offer the potential to turbo boost performance and results.

“For example, at BOK Financial we’ve deployed solutions that use agentic AI to identify and block business email compromise attempts in real-time, continually improving accuracy with evolving threats,” Rogers says.

A word of warning

Other experts were cautious about whether these early success stories could be replicated across multiple industries.

“We’re not seeing AI detection quite there yet, and it’s dangerous because we could be getting lulled into a false sense of security,” says IEEE senior member Shaila Rana.

Recent research cited by Rana showed that AI systems can correctly flag 89% of malicious files as malware. Under the toughest test conditions, however, a prototype autonomous AI-based system caught only 26% of actual malware.

These figures come from Microsoft Research experiments with Project Ire, an AI-based malware classification prototype, in a demanding test on 4,000 files that automated systems had been unable to classify and that were slated for manual review.

“We have to be aware of this issue, and many organizations are discovering that automation without proper integration just creates faster chaos rather than faster resolution,” Rana says.

Rana concludes: “The real ‘win’ isn’t just speed here; it’s in handling the routine stuff so human experts can focus on the complex and strategic problem solving that machines still can’t match.”

AI as a copilot, not an autopilot

Anar Israfilov, founder & CTO at AI threat detection specialist Cyberoon Enterprise, says his firm’s work with enterprise clients has illustrated the value of human oversight and the need to “reality check” AI outputs.

“In one of our projects, anomaly detection started to yield various ‘ghost alerts’ because data sources were not set appropriately,” Israfilov explains. “And all of a sudden, analysts were chasing down noise again.”

Israfilov adds: “That was an important learning point: you absolutely need governance, and human oversight, right from the start. We were required to build explainability tools and feedback loops in order for the system to learn and the analyst to trust it.”

AI is best as a copilot — and not a replacement — for security analysts, he concludes.

“The companies being proactive about treating AI as an assistant for their analysts — instead of automating the analyst away — are seeing much better results,” Israfilov says.

Context sensitive

Denida Grow, managing partner at boutique security consultancy LeMareschal, says her experiments suggest AI-based security tooling is still in its early stages of development.

“At this point, we cannot base security operations solely on AI-generated reports or recommendations,” Grow says. “They can help with things like summarizing incident logs, pulling patterns from data, or speeding up report drafting, but when it comes to supporting actual security operations, they’re still too immature to be relied on without a human involved.”

A frequent lack of context is the most significant shortcoming of nascent AI-based security tools, according to Grow.

“For example, in threat intelligence, they’ll surface generic insights but overlook critical regional or industry-specific details,” Grow explains. “In incident response, they can draft a playbook suggestion, but it may not align with real-world variables like staffing, local laws, or the client’s risk tolerance.”

AI-based security tools serve as a useful means to get a different perspective on a problem but are no replacement for professional judgment. “Every output still needs review, correction, and context from an experienced security professional,” Grow advises.

AI + institutional knowledge = winning

Jonathan Garini, CEO at enterprise AI platform fifthelement.ai, argues that AI is best used to achieve incremental improvements in enterprise security operations centers.

“Rather than trying to revolutionize the SOC, we see many companies focusing on the volume of repetitive, low-level tasks like log analysis and alert triage,” Garini tells CSO. “Through AI in this situation, security teams can reduce the time spent on false positives and enable analysts to focus on more valuable investigations.”
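The alert-triage automation Garini points to can be pictured as a filter that scores incoming alerts, auto-closes obvious noise, and queues the remainder for a human analyst. The scoring rules below are toy placeholders standing in for a real model or detection logic, not anything Garini or fifthelement.ai has described.

```python
# Illustrative alert-triage filter: reduce the false-positive volume
# analysts see by auto-closing low-scoring alerts. Scoring here is a
# hypothetical stand-in for an AI model's risk assessment.

def triage(alerts, threshold=0.5):
    """Split alerts into a queue for analysts and an auto-closed pile."""
    queued, auto_closed = [], []
    for alert in alerts:
        score = 0.9 if alert["severity"] == "high" else 0.2
        # Known-benign sources (e.g. internal scanners) drag the score down.
        if alert.get("source") in {"test-harness", "scanner"}:
            score *= 0.1
        (queued if score >= threshold else auto_closed).append(alert)
    return queued, auto_closed

queued, closed = triage([
    {"severity": "high", "source": "edr"},
    {"severity": "low", "source": "scanner"},
])
```

The point of the sketch is the shape of the win Garini describes: the machine absorbs the repetitive filtering, and only the alerts worth human time reach the queue.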

Another key lesson is the need to integrate AI with institutional knowledge.

“Several CISOs I’ve spoken with emphasize that success in this area isn’t a matter of feeding raw data to an AI model, but of layering in context such as threat intelligence feeds and past incident reports, as well as organizational workflows,” Garini concludes.
