{"id":4759,"date":"2025-09-09T07:00:00","date_gmt":"2025-09-09T07:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=4759"},"modified":"2025-09-09T07:00:00","modified_gmt":"2025-09-09T07:00:00","slug":"5-ways-cisos-are-experimenting-with-ai-2","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=4759","title":{"rendered":"5 ways CISOs are experimenting with AI"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>Security leaders face a dual mandate with AI \u2014 guide secure organizational adoption while seeking ways to improve security operations. Things are moving quickly, yet cybersecurity teams are taking a cautious approach, according to <a href=\"\/\/www.isc2.org\/Insights\/2025\/07\/2025-isc2-ai-pulse-survey\" target=\"_blank\" rel=\"noopener\">ISC2\u2019s AI Adoption Survey<\/a>, with 30% already integrating AI tools into their operations and 43% still evaluating and testing.<\/p>\n<p>So how exactly are they embracing and experimenting with AI? Several security leaders shared their experiences incorporating AI into their daily operations, from executive security reports and risk analysis to threat hunting and SOC operations.<\/p>\n<h2 class=\"wp-block-heading\">Custom AI applications with MCP servers<\/h2>\n<p><a href=\"https:\/\/www.csoonline.com\/article\/4015222\/mcp-uses-and-risks.html\">Model context protocol<\/a> (MCP) is an open standard introduced by Anthropic to connect AI systems like LLMs to data sources. George Gerchow, CSO at Bedrock Data, has been experimenting with MCP and believes it offers exciting potential to improve security operations. \u201cI\u2019m using Anthropic\u2019s Claude with MCP. Their desktop app has built-in native MCP support making it very easy to connect to internal tools and data sources. 
MCP provides a structured interface between AI and security tools that standardizes data access, reducing brittle integrations and helping security teams build trust in automation as capabilities mature,\u201d says Gerchow.<\/p>\n<p>He has developed an MCP server and tools, including a custom AI-powered DLP system that comprehends data in context and logs sensitive interactions as they happen. \u201cI\u2019m applying these protocols to address real business challenges, such as automatically classifying sensitive data, streamlining policy enforcement and laying the foundation for secure, agentic AI workflows,\u201d Gerchow says.<\/p>\n<p>He says these are live experiments inside his environment, reshaping how the team thinks about trust, access and decision-making in real time. Gerchow has learned that MCP feeds context such as asset sensitivity and regulatory impact directly into security workflows, helping with prioritization and risk-aware decisions. \u201cIt also facilitates natural-language investigation, allowing AI agents to answer complex questions by pulling structured data in real time.\u201d<\/p>\n<p>Meanwhile, Agero CISO\/CIO Bob Sullivan and his engineering team have been experimenting with Gemini to build a \u2018Gem\u2019, or custom agent, as a threat modeling tool, and they\u2019re seeing promising results.<\/p>\n<p>Using the Gem, they identify a new technology to be deployed, the security tool that will go with it and anything else that\u2019s relevant. \u201cYou might tell it \u2018we\u2019re putting in AWS Connect in our contact center and we\u2019re going to use Okta for identity services and we\u2019re going to add \u2018this or that\u2019,\u201d Sullivan says. 
\u201cIt can pop out a STRIDE threat model that\u2019s about 95% there and when you look at the effort it would take an engineering and architecture staff to do that work, you\u2019re talking about 12 hours of work.\u201d<\/p>\n<p>Sullivan calculates the Gem eliminated at least 75% of the effort and wants to find other experiments to gauge its effectiveness. \u201cWe\u2019ve been thinking about playing around with agentic AI and putting it into a highly controlled sandbox area, rebuilding our environment and trying to teach it to fix security vulnerabilities,\u201d he says.<\/p>\n<p>Their approach is to trial AI tools in controlled environments and experiment cautiously. While the Gem threat modeling tool requires strong guardrails, he\u2019s pleased with the results and is looking at the potential for agents to handle vulnerabilities at scale. \u201cCould it come up with solutions and we\u2019d then be able to automate a mass amount of vulnerabilities in the production environment? We see huge potential opportunities to make things better,\u201d he says.<\/p>\n<h2 class=\"wp-block-heading\">Translating security metrics into business language<\/h2>\n<p>CISOs are now tasked with being the security storyteller \u2014 and it doesn\u2019t always come easily. Turning to AI, many are finding a helping hand to translate technical detail into business-oriented narratives, drawing on a range of data sources, including risk trends, control gaps and threat modeling.<\/p>\n<p>AI tools are helping tailor messaging and reporting for key stakeholders such as the CEO, CFO and the board to emphasize relevant information, whether it\u2019s budgetary needs or the value of new technical initiatives, to suit a finance- or business-minded audience. 
In other cases, AI produces executive summaries from technical SOC findings or distills the key discussion points from a meeting without anyone having to replay the entire recording.<\/p>\n<p>Headspace CISO Jameeka Aaron has used AI tools to strengthen the cybersecurity story within the organization. In her experience, success \u2014 and budget \u2014 are typically measured by an absence of events rather than risk reduction. \u201cWe have to get good at storytelling. That\u2019s one of the things I\u2019m excited about with AI, because it\u2019s allowing us to tell the story of security in a way that people can really understand and identify with the technology, so we\u2019re able to get those resources,\u201d she says.<\/p>\n<p>Aaron adapted her quarterly review using NotebookLM. She took her report, pulled in data feeds from different parts of the organization along with any relevant news stories and other information, and turned it into a podcast-style report. \u201cIt took 100 slides and turned it into six minutes of storytelling,\u201d she tells CSO.<\/p>\n<p>It was shared within the company and the feedback was positive, with people finding it helped explain things such as vulnerability management and why security needs collaboration with other departments. 
\u201cIt\u2019s helping to tie our technology work to the business needs and helping each person in the business understand how that works with them,\u201d she says.<\/p>\n<p>Lavy Stokhamer, global head of cybersecurity operations at Standard Chartered, says the firm views AI as a strategic enabler, not just a defensive measure, and points to threat and anomaly detection, reducing false positives and improving alerts as areas where it is being applied.<\/p>\n<p>Like Aaron, he\u2019s finding it also has valuable applications in what he calls bridging \u201cexecutive to operational threat intelligence\u201d: transforming cybersecurity news and media signals into validated, actionable intelligence that drives real-time decision-making.<\/p>\n<p>\u201cSenior leaders are constantly exposed to headlines about new vulnerabilities, breaches at peer organizations, or novel attack techniques, and the immediate question is always: \u2018Could this affect us?\u2019\u201d<\/p>\n<p>\u201cWe\u2019ve built a capability that ingests these external signals and automatically cross-references them with our internal threat landscape and telemetry. The output provides a clear risk assessment and, where needed, recommends remediation steps, closing the loop from strategic awareness to tactical action,\u201d Stokhamer says.<\/p>\n<h2 class=\"wp-block-heading\">AI-assisted threat hunting<\/h2>\n<p>Just as it helps clarify security reports, generative AI\u2019s plain-language capability is proving effective in easing threat detection.<\/p>\n<p>Flexera CIO and CISO Conal Gallagher is cautiously assessing how AI can help identify potential threats more efficiently, although he stresses it\u2019s not a replacement for skilled analysts. \u201cThis not only accelerates investigations but also reduces analyst fatigue, especially during high-alert situations,\u201d he says.<\/p>\n<p>Gallagher used Microsoft Copilot Studio to build an agent using the GPT-4o model. 
The AI agent uses an MCP server to integrate with their security data and translate plain-language prompts from analysts into complex detection queries. \u201cInstead of manually writing complex threat detection queries, analysts can simply describe what they need using plain-language prompts, such as \u2018show me failed login attempts from foreign IPs\u2019 or \u2018show recent login attempts from suspicious IPs\u2019,\u201d he says.<\/p>\n<p>The AI translates these into queries, interacts with the underlying systems, retrieves the relevant data and delivers the results quickly. Gallagher has found there are areas it suits more than others. \u201cSo far, the most useful has been fast triage of emerging threats, targeted hunts during an incident, or ad-hoc investigations where time is critical. And the least useful has been very niche hunts that require deep tool-specific knowledge or advanced correlation beyond what the AI model has been trained on.\u201d<\/p>\n<p>Gallagher is hopeful that, with continued testing and improved capabilities, it has the potential to expand into automating continuous threat hunting with live threat intelligence feeds, integrating with multiple platforms beyond Microsoft security tools, and linking detection to automated response actions.<\/p>\n<p>Before wider adoption, however, he\u2019d need to see more improvements in AI\u2019s ability to understand and adapt to evolving threat landscapes, as well as improvements in the accuracy and reliability of its automated responses. \u201cAnd robust testing and validation frameworks must be developed to ensure AI\u2019s decisions are trustworthy and actionable,\u201d he says.<\/p>\n<p>Cribl\u2019s cybersecurity team is using agentic AI to investigate phishing emails that have been flagged, and is growing this into autonomous threat hunting. 
\u201cWhen an employee marks an email as suspicious, the system performs a deeper analysis, examining headers, content, and attachments such as PDFs and QR codes,\u201d Myke Lyons, Cribl CISO, says.<\/p>\n<p>If the AI confirms a phishing attempt, it autonomously searches for similar messages across the entire message store, enabling broader threat hunting. \u201cThis process mimics what a human analyst would do but is much faster, reducing the time from tens of minutes to near real-time,\u201d Lyons says.<\/p>\n<p>The AI performs nearly as well as a tier one analyst for phishing cases, with human analysts focusing on more sophisticated tasks. \u201cWith agentic AI, when you break down a process, like a phishing attack, and allow these agents to operate uniquely, they can run the scrutiny and do it at speed,\u201d he says.<\/p>\n<h2 class=\"wp-block-heading\">Internal and vendor risk assessment<\/h2>\n<p>AI is also easing security risk assessment processes, finds Centric Consulting director of security services Brandyn Fisher, who has employed the technology to help prioritize security risks.<\/p>\n<p>\u201cWe know that likelihood and impact are the usual suspects when it comes to risk prioritization, but I wanted to dig deeper and look for projects that would not only reduce our overall risk but also knock out as many individual risk items as possible,\u201d he tells CSO.<\/p>\n<p>He inputs the risk register into AI and asks for the five projects that would cut down business risk the most while addressing the highest number of items on this list. \u201cThe results came back with five initiatives that ended up touching about 20% of all our identified risks in some way, which gave us a solid foundation for building project plans and putting together funding requests,\u201d he says.<\/p>\n<p>In another case, he\u2019s used AI to measure the performance of the cybersecurity program, inputting the framework and requesting metrics based on the controls in each category. 
It offered a comprehensive set of metrics for a cyber program, but he wanted to go further and asked for guidance on which ones would resonate with executives and board members \u2014 ones that tell the story without getting lost in the technical weeds.<\/p>\n<p>The results affirmed his choices and went further, identifying additional metrics to consider, such as time to remediate configuration drift, time to onboard a new asset\u2019s logs to the SIEM system, percentage of service providers with a signed security addendum, and time to update the incident response plan after an exercise or real incident. \u201cWhile I\u2019m still the one monitoring all the detailed KPIs day-to-day, I now have this refined set of metrics that helps communicate where we stand and how we\u2019re progressing to the people who need that bird\u2019s-eye view,\u201d he says.<\/p>\n<p>Vendor risk management is another time-consuming but important security task. Looking for ways to improve his processes, Cribl\u2019s Lyons has set AI to work and found the results impressive.<\/p>\n<p>With thousands of customers, many in heavily regulated industries, Cribl must complete an enormous number of questionnaires each year, containing thousands of questions.<\/p>\n<p>He and his team pointed AI at their own knowledge base to fill out the questionnaires, and it now responds with a fairly high degree of accuracy.<\/p>\n<p>Lyons believes it has extended the team by the equivalent of one or two people in that department, making it easier for them to focus on higher-quality documentation and knowledge management. They\u2019ve built a feedback loop to ensure the answers are correct and remain up to date. 
\u201cWe keep that knowledge crisp, accurate, we can identify who the owners are, so we\u2019ll continue to leverage this capability,\u201d he says.<\/p>\n<h2 class=\"wp-block-heading\">Tier 1 SOC investigations<\/h2>\n<p>Lyons and the team are also exploring tier 1 SOC investigations using a combination of AI SOC tooling and internally developed capabilities.<\/p>\n<p>Typically, the team of analysts must translate deeply technical information such as IP addresses and hash values into a narrative more like \u201chuman technical speak\u201d, but they\u2019re finding AI SOC tools highly effective. \u201cIt\u2019s taking that whole story and writing it up in very plain language, and even our analysts are not necessarily needing that additional level of detail,\u201d he says.<\/p>\n<p>AI is helping with the underlying correlation of events, such as log-in attempts, VPN usage, location, IP address and user agent, and using natural language to present that information. \u201cFor most of my career, I\u2019ve found alerts written by the detection engineer are often very complicated and hard to ascertain [the meaning] until you\u2019ve created your own filter for suppressing certain parts, highlighting other parts and going back and looking at various attributes,\u201d he says.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Security leaders face a dual mandate with AI \u2014 guide secure organizational adoption while seeking ways to improve security operations. Things are moving quickly, yet cybersecurity teams are taking a cautious approach, according to ISC2\u2019s AI Adoption Survey, with 30% already integrating AI tools into their operations and 43% still evaluating and testing. 
So how [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":4739,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-4759","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/4759"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4759"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/4759\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/4739"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4759"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4759"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4759"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}