{"id":3463,"date":"2025-06-05T13:05:00","date_gmt":"2025-06-05T13:05:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=3463"},"modified":"2025-06-05T13:05:00","modified_gmt":"2025-06-05T13:05:00","slug":"cisos-beware-genai-use-is-outpacing-security-controls","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=3463","title":{"rendered":"CISOs beware: genAI use is outpacing security controls"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>Employees in every organization use an average of 6.6 high-risk generative AI applications \u2013 including some unknown to CISOs \u2014 says Palo Alto Networks in a new study.<\/p>\n<p>But, an expert says, that estimate is low. \u201cI think it\u2019s probably worse,\u201d said Joseph Steinberg, a cybersecurity and AI expert. \u201cIn a major company it\u2019s got to be higher than that.\u201d<\/p>\n<p>In fact, he predicts the number of risky AI apps in the enterprise is only going to grow.<\/p>\n<p>That means that CISOs need to do a risk assessment of every genAI app employees are using, he said in an interview, and then set policies and procedures staff have to follow.<\/p>\n<p>He warned CISOs and CEOs against following \u2018the Ostrich algorithm\u2019 \u2013 pretending the danger doesn\u2019t exist by ignoring, if not rewarding, the shadow use of AI by employees, either in the office or at home.<\/p>\n<p>\u201cThere\u2019s no question there\u2019s a tremendous amount of use of generative AI apps being used in ways that are highly problematic for the organization,\u201d he said. 
\u201cRemember, I can use a genAI app from my personal computer that my company has no control over, and still leak a tremendous amount of data just from what I\u2019m asking \u2013 and it may not be only what I\u2019m asking, but what others are also asking, and the generative AI learns from the pattern of questions.<\/p>\n<p>\u201cIt\u2019s hard to block that, because the risk can\u2019t be completely controlled by the organization, because someone can do it on their own time from their own machine.\u201d<\/p>\n<p>And organizations sometimes deliberately or inadvertently reward employees for using unapproved genAI apps, he added, for example, by applauding a report that\u2019s just too good.<\/p>\n<p>\u201cLet\u2019s be honest,\u201d he said. \u201cMany of the companies that ban generative AI are rewarding their employees [for using it]. They\u2019ll never admit it. But if you\u2019re getting reviewed based on your performance, and your performance is enhanced by using shadow IT or AI on your own machine on your own time, if you\u2019re not being punished, you\u2019re not going to stop.\u201d<\/p>\n<p>Steinberg was commenting on <a href=\"https:\/\/www.paloaltonetworks.com\/resources\/research\/state-of-genai-2025\" target=\"_blank\" rel=\"noopener\">a study released Thursday<\/a> by Palo Alto Networks (PAN) on the popularity of genAI in organizations. It analyzed traffic logs from just over 7,000 PAN customers over the 12 months of 2024 to detect use of software-as-a-service apps such as ChatGPT, Microsoft Copilot, Amazon Bedrock and more. It also included a separate look at anonymized data from its customers\u2019 data loss prevention incidents from the first three months of this year.<\/p>\n<p>It observed:<\/p>\n<p>on average, organizations see a total of 66 genAI apps in their environments. The bulk of those among PAN customers were \u201cwriting assistants\u201d (34% of the sample. 
The biggest in this category was Grammarly); \u201cconversational agents\u201d (just under 29%, apps such as Microsoft Copilot, ChatGPT and Google Gemini); \u201centerprise search\u201d apps (just over 10% of the sample) and \u201cdeveloper platform\u201d apps (just over 10%). These four alone make up 84% of the genAI apps seen;<\/p>\n<p>10% of genAI apps are called \u2018high-risk\u2019 because, according to customer telemetry, access to them was restricted or blocked by customers at one or more points during the study period;<\/p>\n<p>genAI-related data loss prevention (DLP) incidents detected by PAN more than doubled this year compared to 2024.<\/p>\n<p>Writing assistants aren\u2019t applications to be taken lightly, the report warns. \u201cIf an AI writing assistant is integrated into an organization\u2019s systems without proper security controls, it could become a vector for cyberattacks. Hackers could exploit weaknesses in the genAI app to gain access to internal systems or sensitive data.\u201d<\/p>\n<p>\u201cAs genAI adoption grows, so do its risks,\u201d it says. \u201cWithout visibility into genAI apps, and their broader AI ecosystems, businesses can risk exposing sensitive data, violating regulations, and losing control of intellectual property. Monitoring AI interactions is no longer optional. It\u2019s critical for helping prevent shadow AI adoption, enforcing security policies, and enabling responsible AI use.\u201d<\/p>\n<p>The report identifies these genAI security best practices for CISOs:<\/p>\n<p>understand genAI usage in the enterprise and control what is allowed. 
Implement conditional access management to limit access to genAI platforms, apps, and plugins based on users and\/or groups, location, application risk, compliant devices, and legitimate business rationale;<\/p>\n<p>guard sensitive data through real-time content inspection, with centralized policy enforcement across the infrastructure and within data security workflows, to help prevent unauthorized access and leakage;<\/p>\n<p>defend against modern AI-based cyberthreats through a zero trust security framework to identify and block highly sophisticated, evasive, and stealthy malware and threats within genAI responses.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Employees in every organization use an average of 6.6 high-risk generative AI applications \u2013 including some unknown to CISOs \u2013 says Palo Alto Networks in a new study. But an expert says that estimate is low. \u201cI think it\u2019s probably worse,\u201d said Joseph Steinberg, a cybersecurity and AI expert. 
\u201cIn a major company it\u2019s got [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":3455,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-3463","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/3463"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3463"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/3463\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/3455"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3463"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3463"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3463"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}