{"id":3679,"date":"2025-06-25T07:00:00","date_gmt":"2025-06-25T07:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=3679"},"modified":"2025-06-25T07:00:00","modified_gmt":"2025-06-25T07:00:00","slug":"llms-hype-versus-reality-what-cisos-should-focus-on","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=3679","title":{"rendered":"LLMs hype versus reality: What CISOs should focus on"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>From risks in AI applications such as poisoned training data and hallucinations, to AI-enabled security, to deep fakes, user error, and novel AI-generated attack techniques, the cybersecurity industry is abuzz with dire security threats that are overwhelming CISOs.<\/p>\n<p>For example, during and after the RSA Conference in April 2025, attendees posted vociferously about the overload of AI fear, uncertainty, and doubt (FUD), particularly on the part of vendors.<\/p>\n<p>One of them is Netflix staff information risk engineer Tony Martin-Vegue who, in a post-RSAC interview, tells CSO that there\u2019s no stopping this AI train, but there are ways to cut through hype and apply basic controls where they matter most.<\/p>\n<p>First, he says, focus on why the organization deploys AI. \u201cThe way I see it is there is a risk of <em>not<\/em> using AI even though there is a lot of over-hype and promise about its capability. That said, organizations that don\u2019t use AI will get left behind. The risk of <em>using<\/em> AI is where all the FUD is.\u201d<\/p>\n<p>In terms of applying controls, rinse, wash, and repeat the processes you followed when adopting cloud, BYOD, and other powerful technologies, he says. Start with knowing where and how AI is used, by whom and for what purpose. 
Then, focus on securing the data employees are sharing with the tools.<\/p>\n<h2 class=\"wp-block-heading\">Get to know your AI<\/h2>\n<p>\u201cAI is a fundamental change that is going to permeate society in a way that might even eclipse the internet. But this change is happening at such a rapid rate that the ability to distinguish the blur effect is hard to comprehend for a lot of people,\u201d explains Rob T. Lee, chief of research, AI and emerging threats, at SANS Institute. \u201cNow, every single part of the organization is going to be utilizing AI in different forms. You need a way to reduce risk fundamentally for implementation. And that means seeing where people use it, and under what business use cases, across the organization.\u201d<\/p>\n<p>Lee, who\u2019s helping SANS develop a community-consensus <a href=\"https:\/\/www.sans.org\/mlp\/critical-ai-security-guidelines\/\" target=\"_blank\" rel=\"noopener\">AI security guidelines<\/a> checklist, spends 30 minutes a day using advanced AI agents for various business purposes and encourages other cybersecurity and executive leaders to do the same. Once they are familiar with the programs and their capabilities, they can get down to selecting controls.<\/p>\n<p>As an example, Lee points to Moderna, which announced in May 2025 that it merged human resources and IT under a new role, <a href=\"https:\/\/www.wsj.com\/articles\/why-moderna-merged-its-tech-and-hr-departments-95318c2a\" target=\"_blank\" rel=\"noopener\">chief people and digital technology officer<\/a>. \u201cThe work is no longer just about humans, but about managing both humans and AI agents,\u201d Lee explains. \u201cThis requires HR and IT to collaborate in new ways.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Revisit security fundamentals<\/h2>\n<p>That\u2019s not to say that because AI is so new, current security fundamentals don\u2019t count. 
They most certainly do.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/christopher-hetner-7969758\/\">Chris Hetner<\/a>, senior cyber risk advisor at the National Association of Corporate Directors (NACD), explains: \u201cThe cybersecurity industry often operates in an echo chamber and is calibrated to be highly reactive. The echo chamber spins up the machine by talking about agentic AI [AI agents], AI drift, and other risks. And a whole new set of vendors then overwhelms the CISO portfolio,\u201d he says. \u201cAI is merely an extension of existing technology. It serves as another lens through which we can bring our focus back to the essentials.\u201d<\/p>\n<p>When Hetner speaks of the essentials, he highlights the importance of understanding the business profile, pinpointing threats within the digital landscape, and discerning the interconnections among business units. From there, security leaders should assess the operational, legal, regulatory, and financial repercussions that could arise in the event of a breach or exposure. Then they should aggregate this information into a comprehensive risk profile to present to the executive team and board so they can determine what risks they\u2019re willing to accept, mitigate, and transfer.<\/p>\n<h2 class=\"wp-block-heading\">Protect the data<\/h2>\n<p>Given how AI is used to analyze financial, sales, HR, product development, customer relationship, and other sensitive data, Martin-Vegue feels that data protection should be at the top of the risk manager\u2019s list of specific controls. 
This points back to knowing how employees use AI, for what functions, and what type of data they feed into AI-enabled applications.<\/p>\n<p>Or, as a May 2025 joint <a href=\"https:\/\/media.defense.gov\/2025\/May\/22\/2003720601\/-1\/-1\/0\/CSI_AI_DATA_SECURITY.PDF\">memo<\/a> on AI data security from security agencies across Australia, New Zealand, the UK, and the US explains: Know what your data is, where it is, and where it\u2019s going.<\/p>\n<p>Of course, this is easier said than done, given that most organizations don\u2019t know where all their sensitive data is, let alone how to control it, according to multiple <a href=\"https:\/\/www.itgovernanceusa.com\/blog\/four-out-of-five-organizations-dont-know-where-their-sensitive-data-is-located\">surveys<\/a>. Yet, as with other new technologies, protecting data used in LLMs boils down to user education and <a href=\"https:\/\/www.csoonline.com\/article\/565258\/why-data-governance-should-be-corporate-policy.html\">data governance<\/a>, including traditional controls such as scanning and encryption.<\/p>\n<p>\u201cYour users may not understand the best ways to use these AI solutions, so cybersecurity and governance leaders need to help architect use cases and deployments that work for them and your risk management team,\u201d says Diana Kelley, long-time cybersecurity analyst and CISO at Protect AI.<\/p>\n<h2 class=\"wp-block-heading\">Protect the model<\/h2>\n<p>Kelley points out the differences in risk between various AI adoption and deployment models. Free, public versions of AI like ChatGPT, where the user plugs data into a web-based chat prompt, provide the least control over what happens with data that employees share with the interface. Paying for the professional version and bringing AI in-house gives enterprises more control, but enterprise licenses and self-hosting costs are often out of reach for small businesses. 
Another option involves running foundation models on managed cloud platforms like <a href=\"https:\/\/www.csoonline.com\/article\/2092006\/where-in-the-world-is-your-ai-identifying-and-securing-ai-across-a-hybrid-environment.html\">Amazon Bedrock<\/a> and other securely configured cloud services, where the data is processed and analyzed within the account holder\u2019s protected environment.<\/p>\n<p>\u201cThis is not magic or little sparkles, even though AI is often represented that way in your applications. It\u2019s math. It\u2019s software. We know how to protect software. However, AI is a new kind of software that requires new types of security approaches and tools,\u201d Kelley adds. \u201cA model file is a different type of file, so you need a purpose-built scanner designed for its unique structure.\u201d<\/p>\n<p>A model file is a set of weights and biases, she continues, and when it is deserialized, the organization is running untrusted code. This makes models a primary target for model serialization attacks (MSAs) by cybercriminals wanting to manipulate target systems.<\/p>\n<p>In addition to MSA risks, AI models, especially those pulled from open source, can fall victim to <a href=\"https:\/\/www.csoonline.com\/article\/570173\/what-is-typosquatting-a-simple-but-effective-attack-technique.html\">typosquatting<\/a> attacks that mimic the names of trusted files but contain malware. They\u2019re also susceptible to neural backdoors and other supply chain vulnerabilities, which is why Kelley recommends scanning AI models before approving them for deployment and development.<\/p>\n<p>Because the LLMs supporting AI applications are different from traditional software, the need for different types of scanning and monitoring has led to a flood of <a href=\"https:\/\/www.techoperators.com\/landscape\">specialized solutions<\/a>. 
But signs point to this market contracting, as traditional security vendors start to pull in specialty tools, as with Palo Alto Networks\u2019 pending <a href=\"https:\/\/www.paloaltonetworks.com\/company\/press\/2025\/palo-alto-networks-announces-intent-to-acquire-protect-ai--a-game-changing-security-for-ai-company\">acquisition<\/a> of Protect AI.<\/p>\n<p>\u201cUnderstand how the AI tech works, know how your employees are using it, and build in controls,\u201d Kelley reiterates. \u201cYes, there is a lot of work involved, but it doesn\u2019t have to be scary, and you don\u2019t need to believe the FUD. It\u2019s the way we do risk management.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>From risks in AI applications such as poisoned training data and hallucinations, to AI-enabled security, to deep fakes, user error, and novel AI-generated attack techniques, the cybersecurity industry is abuzz with dire security threats that are overwhelming CISOs. 
For example, during and after the RSA Conference in April 2025, attendees posted vociferously about the overload [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":3680,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-3679","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/3679"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3679"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/3679\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/3680"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3679"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3679"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3679"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}