5 ways CISOs are experimenting with AI

Security leaders face a dual mandate with AI — guide secure organizational adoption while seeking ways to improve security operations. Things are moving quickly, yet cybersecurity teams are taking a cautious approach, according to ISC2’s AI Adoption Survey, with 30% already integrating AI tools into their operations and 43% still evaluating and testing.

So how exactly are they embracing and experimenting with AI? Several security leaders shared their experiences incorporating AI into their daily operations, from executive security reports and risk analysis to threat hunting and SOC operations.

Custom AI applications with MCP servers

Model Context Protocol (MCP) is an open standard introduced by Anthropic to connect AI systems like LLMs to data sources. George Gerchow, CSO at Bedrock Security, has been experimenting with MCP and believes it offers exciting potential to improve security operations. “I’m using Anthropic’s Claude with MCP. Their desktop app has built-in native MCP support, making it very easy to connect to internal tools and data sources. MCP provides a structured interface between AI and security tools that standardizes data access, reducing brittle integrations and helping security teams build trust in automation as capabilities mature,” says Gerchow.

He has developed an MCP server and tools, including a custom AI-powered DLP system that comprehends data in context and logs sensitive interactions as they happen. “I’m applying these protocols to address real business challenges, such as automatically classifying sensitive data, streamlining policy enforcement and laying the foundation for secure, agentic AI workflows,” Gerchow says.

He says these are live experiments inside his environment, reshaping how the team thinks about trust, access and decision-making in real time. Gerchow learned that MCP feeds context such as asset sensitivity and regulatory impact directly into security workflows, helping with prioritization and risk-aware decisions. “It also facilitates natural-language investigation, allowing AI agents to answer complex questions by pulling structured data in real time.”
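Gerchow’s DLP system itself is proprietary, but the core idea of classifying data in context can be sketched in a few lines. Everything below, the detection patterns, the context weights and the scoring, is an illustrative assumption, not his implementation:

```python
import re

# Hypothetical sensitivity patterns; a real deployment would pull these
# from policy definitions rather than hard-coding them.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

# Context raises or lowers severity: the same email address matters more
# in an HR export than in a public mailing-list dump.
CONTEXT_WEIGHT = {"hr": 2, "finance": 2, "internal": 1, "public": 0}

def classify(text: str, context: str = "internal") -> dict:
    """Return detected data types and a context-adjusted risk score."""
    hits = {name: pat.findall(text) for name, pat in PATTERNS.items()}
    hits = {name: found for name, found in hits.items() if found}
    score = len(hits) * CONTEXT_WEIGHT.get(context, 1)
    return {"types": sorted(hits), "score": score}
```

The interesting part is the second table: the classifier weights the same match differently depending on where the data lives, which is what “comprehends data in context” amounts to at its simplest.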

Meanwhile, Agero CISO/CIO Bob Sullivan and his engineering team have been experimenting with Gemini to build a ‘Gem’, or custom agent, as a threat modeling tool, and they’re seeing promising results.

To use the Gem, they describe a new technology to be deployed, the security tooling that will accompany it and anything else that’s relevant. “You might tell it ‘we’re putting in AWS Connect in our contact center and we’re going to use Okta for identity services and we’re going to add ‘this or that’,” Sullivan says. “It can pop out a STRIDE threat model that’s about 95% there and when you look at the effort it would take an engineering and architecture staff to do that work, you’re talking about 12 hours of work.”
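Sullivan’s Gem drives the analysis with an LLM, but the skeleton it fills in follows the standard STRIDE categories. A minimal sketch of the enumeration step, with hypothetical component names, looks like this:

```python
# The six STRIDE threat categories.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

def threat_model_skeleton(components: list[str]) -> list[dict]:
    """Enumerate one STRIDE entry per component for analysts to fill in."""
    return [
        {"component": c, "threat": t, "mitigation": "TBD"}
        for c in components
        for t in STRIDE
    ]

# Hypothetical deployment, mirroring Sullivan's contact-center example.
model = threat_model_skeleton(["contact-center platform", "identity provider"])
```

The LLM’s value is in replacing the “TBD” mitigations with context-aware analysis; the grid itself is mechanical.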

Sullivan calculates the Gem eliminated at least 75% of the effort and wants to find other experiments to gauge its effectiveness. “We’ve been thinking about playing around with agentic AI and putting it into a highly controlled sandbox area, rebuilding our environment and trying to teach it to fix security vulnerabilities,” he says.

Their approach is to trial AI tools in controlled environments and experiment cautiously. While the Gem threat modeling tool requires strong guardrails, he’s pleased with the results and is looking at the potential for agents to handle vulnerabilities at scale. “Could it come up with solutions and we’d then be able to automatically remediate a mass of vulnerabilities in the production environment? We see huge potential opportunities to make things better,” he says.

Translating security metrics into business language

CISOs are now tasked with being the security storyteller — and it doesn’t always come easily. Turning to AI, CISOs are finding a helping hand to translate technical detail into business-oriented narratives, drawing on a range of data sources, risk trends, control gaps and threat modeling.

AI tools are helping tailor messaging and reporting for key stakeholders such as the CEO, CFO and the board, emphasizing the information that matters to each, whether budgetary needs or the value of new technical initiatives, to suit finance- or business-minded audiences. In other cases, AI generates executive summaries from technical SOC findings or distills the key discussion points from a meeting without anyone having to replay the entire recording.

Headspace CISO Jameeka Aaron has used AI tools to strengthen the cybersecurity story within the organization. In her experience, success — and budget — is typically measured by an absence of events rather than risk reduction. “We have to get good at storytelling. That’s one of the things I’m excited about with AI, because it’s allowing us to tell the story of security in a way that people can really understand and identify with the technology, so we’re able to get those resources,” she says.

Aaron adapted her quarterly review using NotebookLM. She took her report, pulled in data feeds from different parts of the organization along with relevant news stories and other information, and turned it all into a podcast-style report. “It took 100 slides and turned it into six minutes of storytelling,” she tells CSO.

It was shared within the company and the feedback was positive, with people finding it helped explain things such as vulnerability management and why security needs collaboration with other departments. “It’s helping to tie our technology work to the business needs and helping each person in the business understand how that works with them,” she says.

Lavy Stokhamer, global head of cybersecurity operations at Standard Chartered, says the firm views AI as a strategic enabler, not just a defensive measure, and points to threat and anomaly detection, reducing false positives and improving alerts as areas where it is being applied.

Like Aaron, he’s finding it also has valuable applications in bridging what he calls the gap from “executive to operational threat intelligence”: transforming cybersecurity news and media signals into validated, actionable intelligence that drives real-time decision-making.

“Senior leaders are constantly exposed to headlines about new vulnerabilities, breaches at peer organizations, or novel attack techniques, and the immediate question is always: ‘Could this affect us?’”

“We’ve built a capability that ingests these external signals and automatically cross-references them with our internal threat landscape and telemetry. The output provides a clear risk assessment and, where needed, recommends remediation steps, closing the loop from strategic awareness to tactical action,” Stokhamer says.
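Standard Chartered hasn’t published how its capability works. As a rough sketch of the cross-referencing step, the following matches CVE identifiers extracted from a headline against a hypothetical internal inventory (all IDs and asset names are invented):

```python
import re

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,7}")

# Hypothetical internal inventory: which CVEs affect which assets.
INVENTORY = {
    "CVE-2024-0001": ["edge-proxy-01", "edge-proxy-02"],
    "CVE-2024-0002": [],
}

def assess(headline: str) -> dict:
    """Map CVE mentions in an external signal to internal exposure."""
    exposure = {}
    for cve in CVE_RE.findall(headline):
        assets = INVENTORY.get(cve)
        if assets is None:
            exposure[cve] = "not in inventory - triage manually"
        elif assets:
            exposure[cve] = f"exposed: {', '.join(assets)}"
        else:
            exposure[cve] = "no affected assets"
    return exposure
```

This is the “could this affect us?” loop in miniature: extract the signal, join it against internal telemetry, and return an answer an executive can act on.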

AI-assisted threat hunting

Just as it helps clarify security reports, generative AI’s plain-language function is proving effective in easing threat detection.

Flexera CIO and CISO Conal Gallagher is cautiously assessing how AI can help identify potential threats more efficiently, although he stresses it’s not a replacement for skilled analysts. “This not only accelerates investigations but also reduces analyst fatigue, especially during high-alert situations,” he says.

Gallagher used Microsoft Copilot Studio to build an agent using the GPT-4o model. The AI agent uses an MCP server to integrate with their security data and translate plain-language prompts from analysts into complex detection queries. “Instead of manually writing complex threat detection queries, analysts can simply describe what they need using plain-language prompts, such as ‘show me failed login attempts from foreign IPs’ or ‘show recent login attempts from suspicious IPs’,” he says.

The AI translates these into queries, interacts with the underlying systems, retrieves the relevant data and delivers the results quickly. Gallagher has found there are areas it suits more than others. “So far, the most useful has been fast triage of emerging threats, targeted hunts during an incident, or ad-hoc investigations where time is critical. And the least useful has been very niche hunts that require deep tool-specific knowledge or advanced correlation beyond what the AI model has been trained on.”
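Gallagher’s agent does the translation with GPT-4o over an MCP server. As a deterministic stand-in, this toy sketch maps recognized phrases to KQL-style query fragments; the phrases, table name and filters are all assumptions, not Flexera’s actual queries:

```python
# A toy intent-to-query mapping. The real agent uses an LLM, but the
# shape of the translation step looks roughly like this.
TEMPLATES = {
    "failed logins": "SigninLogs | where ResultType != 0",
    "foreign ips": "| where Location !in ('US')",
}

def to_query(prompt: str) -> str:
    """Compose query fragments for each recognized phrase in the prompt."""
    parts = [fragment for phrase, fragment in TEMPLATES.items()
             if phrase in prompt.lower()]
    return " ".join(parts)
```

The win Gallagher describes is exactly this indirection: analysts state intent in plain language and never touch the query syntax underneath.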

Gallagher is hopeful that, with continued testing and improved capabilities, it has the potential to expand into automating continuous threat hunting with live threat intelligence feeds, integrating with multiple platforms beyond Microsoft security tools, and linking detection to automated response actions.

Before wider adoption, however, he’d need to see more improvements in AI’s ability to understand and adapt to evolving threat landscapes, as well as improvements in the accuracy and reliability of its automated responses. “And robust testing and validation frameworks must be developed to ensure AI’s decisions are trustworthy and actionable,” he says.

Cribl’s cybersecurity team is using agentic AI to investigate phishing emails that have been flagged and growing this into autonomous threat hunting. “When an employee marks an email as suspicious, the system performs a deeper analysis, examining headers, content, and attachments such as PDFs and QR codes,” Myke Lyons, Cribl CISO, says.

If the AI confirms a phishing attempt, it autonomously searches for similar messages across the entire message store, enabling broader threat hunting. “This process mimics what a human analyst would do but is much faster, reducing the time from tens of minutes to near real-time,” Lyons says.
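Cribl’s pipeline is agentic and proprietary, but the first triage pass, pulling out the header, URL and attachment signals an analyst would check, can be sketched with Python’s standard email library:

```python
import email
import re
from email import policy

URL_RE = re.compile(r"https?://[^\s\"'>]+")

def triage(raw_message: str) -> dict:
    """Extract the signals an analyst checks first: sender alignment,
    embedded URLs and attachment names."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content() if body else ""
    return {
        "from": msg["From"],
        "reply_to": msg["Reply-To"],     # mismatch vs. From is a red flag
        "urls": URL_RE.findall(text),
        "attachments": [p.get_filename() for p in msg.iter_attachments()],
    }
```

The agentic part Lyons describes sits on top: feeding these extracted signals to a model for a verdict, then pivoting to search the whole message store for lookalikes.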

The AI performs nearly as well as a tier one analyst for phishing cases, with human analysts focusing on more sophisticated tasks. “With agentic AI, when you break down a process, like a phishing attack, and allow these agents to operate uniquely, they can run the scrutiny and do it at speed,” he says.

Internal and vendor risk assessment

AI is also easing security risk assessment processes, finds Centric Consulting director of security services Brandyn Fisher, who has employed the technology to help prioritize security risks.

“We know that likelihood and impact are the usual suspects when it comes to risk prioritization, but I wanted to dig deeper and look for projects that would not only reduce our overall risk but also knock out as many individual risk items as possible,” he tells CSO.

He inputs the risk register into AI and asks for the five projects that would cut business risk the most while addressing the highest number of items on the list. “The results came back with five initiatives that ended up touching about 20% of all our identified risks in some way, which gave us a solid foundation for building project plans and putting together funding requests,” he says.
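Fisher’s prompt amounts to a set-cover question: which few projects touch the most risk items? A greedy sketch against a hypothetical register shows the shape of the selection:

```python
# Hypothetical register: each candidate project mapped to the risk IDs
# it would address. The AI's selection behaves like greedy set cover.
PROJECTS = {
    "mfa rollout": {"R1", "R4", "R7"},
    "patch automation": {"R2", "R3"},
    "vendor review": {"R4", "R5"},
    "logging upgrade": {"R6"},
}

def top_projects(n: int) -> list[str]:
    """Greedily pick n projects covering the most not-yet-covered risks."""
    covered: set[str] = set()
    chosen = []
    pool = dict(PROJECTS)
    for _ in range(n):
        best = max(pool, key=lambda p: len(pool[p] - covered))
        chosen.append(best)
        covered |= pool.pop(best)
    return chosen
```

Greedy selection is a natural fit here because each pick is judged only by the risks it newly covers, which matches Fisher’s goal of knocking out as many individual risk items as possible.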

In another case, he’s used AI to measure the performance of the cybersecurity program, inputting the framework and requesting metrics based on the controls in each category. It offered a comprehensive set of metrics for a cyber program, but he wanted to go further and asked for guidance on which ones would resonate with executives and board members — that tell the story without getting lost in technical weeds.

The results affirmed his choices and went further, identifying additional metrics to consider, such as time to remediate configuration drift, time to onboard a new asset’s logs to the SIEM system, percentage of service providers with a signed security addendum, and time to update the incident response plan after an exercise or real incident. “While I’m still the one monitoring all the detailed KPIs day-to-day, I now have this refined set of metrics that helps communicate where we stand and how we’re progressing to the people who need that bird’s-eye view,” he says.

Vendor risk management is another time-consuming but important security task. Looking for ways to improve his processes, Cribl’s Lyons has set AI to work and found the results impressive.

With thousands of customers, many in heavily regulated industries, Cribl must process an enormous number of security questionnaires each year, collectively containing thousands of questions.

He and his team pointed AI at their own knowledge base to fill out the questionnaires, and it now responds with a fairly high degree of accuracy.

Lyons believes it has extended the team by the equivalent of one or two people in that department, making it easier for them to focus on higher-quality documentation and knowledge management. They’ve built a feedback loop to ensure the answers are correct and remain up to date. “We keep that knowledge crisp, accurate, we can identify who the owners are, so we’ll continue to leverage this capability,” he says.

Tier 1 SOC investigations

Lyons and the team are also exploring tier 1 SOC investigations using a combination of AI SOC tooling and internally developed capabilities.

Typically, the team of analysts must translate deeply technical information such as IP addresses and hash values into a narrative closer to “human technical speak”, but they’re finding AI SOC tools highly effective. “It’s taking that whole story and writing it up in very plain language, and even our analysts are not necessarily needing that additional level of detail,” he says.

AI is helping with the underlying correlation of events, such as log-in attempts, VPN usage, location, IP address and user agent, and utilizing natural language to present that information. “For most of my career, I’ve found alerts written by the detection engineer are often very complicated and hard to ascertain [the meaning] until you’ve created your own filter for suppressing certain parts, highlighting other parts and going back and looking at various attributes,” he says.
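The correlate-then-narrate step Lyons describes can be illustrated with a toy example: collapse a user’s related raw events into one plain-language line. The events and the wording are invented:

```python
# Invented raw events for one user's session, the kind an AI SOC tool
# would correlate before writing them up in plain language.
events = [
    {"user": "alice", "type": "vpn_connect", "country": "RO"},
    {"user": "alice", "type": "failed_login", "country": "RO"},
    {"user": "alice", "type": "failed_login", "country": "RO"},
    {"user": "alice", "type": "login_success", "country": "RO"},
]

def narrate(evts: list[dict]) -> str:
    """Summarize correlated events as a single readable sentence."""
    user = evts[0]["user"]
    fails = sum(e["type"] == "failed_login" for e in evts)
    countries = sorted({e["country"] for e in evts})
    success = any(e["type"] == "login_success" for e in evts)
    outcome = "then signed in successfully" if success else "without success"
    return (f"{user} connected from {'/'.join(countries)}, "
            f"failed login {fails} times, {outcome}.")
```

An LLM does the prose generation far more flexibly, but the underlying move is the same: aggregate the attributes an analyst would otherwise filter by hand, then surface only the story.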
