{"id":2792,"date":"2025-04-17T08:00:00","date_gmt":"2025-04-17T08:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=2792"},"modified":"2025-04-17T08:00:00","modified_gmt":"2025-04-17T08:00:00","slug":"cisos-no-closer-to-containing-shadow-ais-skyrocketing-data-risks","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=2792","title":{"rendered":"CISOs no closer to containing shadow AI\u2019s skyrocketing data risks"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<p>Generative AI\u2019s many benefits come with the drawback of data security risks, primarily through shadow AI use and the leakage of sensitive information.<\/p>\n<p>These risks are being compounded in the enterprise as workers often use private gen AI accounts to process sensitive data.<\/p>\n<p>While most organizations (90%) offer sanctioned generative AI apps and even more (98%) offer their users apps that incorporate gen AI features, unauthorized use of AI services is skyrocketing in business, according to a <a href=\"https:\/\/www.netskope.com\/netskope-threat-labs\/cloud-threat-report\/generative-ai-2025\">study by Netskope<\/a>.<\/p>\n<p>Most gen AI use in the enterprise (72%) is shadow IT, driven by individuals using personal accounts to access AI apps. 
These forms of private account AI usage often go untracked by security teams and untouched by enterprise security policy constraints.<\/p>\n<p>Netskope found the amount of data sent to gen AI apps in prompts and uploads has increased more than 30-fold over the past year, increasing volumes of sensitive data exposure, especially source code, regulated data, intellectual property, and secrets.<\/p>\n<p>The volume of data increased from 250MB to 7.7GB per month, mainly in the form of prompts and uploads, despite the apps being used by a relatively small population (4.9% of enterprise users).<\/p>\n<p>A separate <a href=\"https:\/\/www.harmonic.security\/blog-posts\/new-research-the-data-leaking-into-genai-tools\">study from Harmonic Security<\/a> found that 8.5% of employee prompts to popular LLMs \u2014 including ChatGPT, Gemini, and Claude \u2014 included sensitive data during Q4 2024. Customer data, including billing information and authentication data, accounted for nearly half of the leaked sensitive data. <a href=\"https:\/\/www.csoonline.com\/article\/3819170\/nearly-10-of-employee-gen-ai-prompts-include-sensitive-data.html\">Legal and financial data accounted for 15% of the exposed information<\/a>, while security-related data (pen-test results, etc.) made up a concerning 7%.<\/p>\n<h2 class=\"wp-block-heading\">Lack of oversight<\/h2>\n<p>Shadow AI refers to the <a href=\"https:\/\/www.cio.com\/article\/647725\/it-leaders-grapple-with-shadow-ai.html\">unauthorized use of AI services<\/a> within organizations that goes untracked by security teams and unmanaged by policy limitations. 
Nearly every organization that fails to implement an <a href=\"https:\/\/www.csoonline.com\/article\/3950176\/10-things-you-should-include-in-your-ai-policy.html\">AI acceptable use policy<\/a> is at risk of losing sensitive internal data through this route, security and AI experts told CSO.<\/p>\n<p>The risks associated with shadow AI include, but are not limited to, data leakage, as well as regulatory and compliance exposure involving users\u2019 personal data.<\/p>\n<p>\u201cEmployees use generative AI tools without IT oversight, often pasting sensitive data into personal accounts or relying on unvetted code suggestions,\u201d said James McQuiggan, security awareness advocate at KnowBe4. \u201cThese actions can increase the risk of data leakage, compliance violations, and weakened software integrity, all without the user realizing the impact.\u201d<\/p>\n<p>David Brauchler, technical director at global cybersecurity company NCC Group, told CSO that shadow AI has become an inevitability that security leaders must address.<\/p>\n<p>\u201cEmployees find AI useful, and without a sanctioned, approved way to leverage its capabilities, organizations may quickly find sensitive data in the hands of third parties,\u201d Brauchler warned. \u201cThis data can find its way into training datasets or can even be directly exposed to attackers through bugs and breaches, as has occurred more than once.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Governance risk<\/h2>\n<p>Laura Ellis, VP of data and AI at Rapid7, warned that shadow AI poses significant data governance risks for enterprises.<\/p>\n<p>\u201cThe unsanctioned use of AI tools can lead to inadvertent exposure of sensitive company or even customer information, and this creates potential compliance and security risks,\u201d Ellis said. 
\u201cAdditionally, relying on unvetted AI outputs increases the risk of factual inaccuracies, which can negatively impact brand credibility and trust.\u201d<\/p>\n<p>Other experts characterized enterprise AI use as something of a poorly regulated, anything-goes environment.<\/p>\n<p>\u201cData breaches, IP theft, and regulatory fines aren\u2019t hypotheticals \u2014 they\u2019re the inevitable result of using unapproved AI,\u201d warned Bharat Mistry, field CTO at global cybersecurity vendor Trend Micro. \u201cMany of these tools operate in a legal and compliance gray area, ignoring industry-specific regulations and data protection laws entirely.\u201d<\/p>\n<p>Mistry added: \u201cTo make matters worse, IT and security teams are left trying to chase shadows. With a growing number of unauthorized tools being used across departments, visibility, control, and risk management go out the window.\u201d<\/p>\n<p>Cheney Hamilton, a specialist researcher at industry analyst Bloor Research, warned that gen AI tools are rapidly being embedded into workflows, often without oversight \u2014 a development that parallels the rise of shadow IT and creates similar risks in the process.<\/p>\n<p>\u201cThe risk isn\u2019t just technical, it\u2019s behavioral,\u201d Hamilton said. 
\u201cEmployees are using gen AI tools to get work done faster, but without clear parameters, sensitive data is being exposed in ways that traditional security frameworks aren\u2019t catching.\u201d<\/p>\n<p>Hamilton added: \u201cWhat\u2019s needed now is a shift from reactive controls to proactive AI governance embedded into workforce policies, job design, and even leadership structure because gen AI shouldn\u2019t sit solely under IT or infosec; it needs cross-functional ownership from HR, legal, and compliance, too.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Risk mitigation<\/h2>\n<p>The explosion of AI adoption through tools such as ChatGPT, Google Gemini, and GitHub Copilot is creating a cybersecurity governance challenge that traditional approaches and tools are ill equipped to contain.<\/p>\n<p>Experts told CSO that security leaders need to employ a combination of clear AI governance policies, regular <a href=\"https:\/\/www.csoonline.com\/article\/3844225\/how-owasps-guide-to-generative-ai-red-teaming-can-help-teams-build-a-proactive-approach.html\">red teaming of AI systems<\/a> to identify vulnerabilities, as well as <a href=\"https:\/\/www.csoonline.com\/article\/3604803\/security-awareness-training-topics-best-practices-costs-free-options.html\">comprehensive employee awareness training<\/a> to mitigate the risks associated with shadow AI.<\/p>\n<p>These measures should include:<\/p>\n<p><strong>Real-time monitoring:<\/strong> Security leaders should deploy systems to track and manage data input into generative AI (and AI-enabled SaaS) tools.<\/p>\n<p><strong>Sanctioned AI lists:<\/strong> CISOs should ensure that approved AI vendors contractually protect the business\u2019 data privacy and that AI solutions outside the approved list are monitored or blocked.<\/p>\n<p><strong>App plan identification:<\/strong> Security leaders should ensure employees are using paid plans, or plans that do not train on input data.<\/p>\n<p><strong>Prompt-level 
visibility:<\/strong> Security teams need full visibility into what data is being shared with these tools \u2014 simply monitoring usage is not enough.<\/p>\n<p><strong>Sensitive data classification:<\/strong> Security systems must be able to identify sensitive data at the point of data loss.<\/p>\n<p><strong>Smart rule enforcement:<\/strong> CISOs should work with business leaders to create sanctioned workflows that shape how various departments or groups can engage with gen AI tools.<\/p>\n<p><strong>User education:<\/strong> Employees must be trained on the risks and best practices for using AI responsibly.<\/p>\n<p><strong>Establish use policies:<\/strong> Security leaders must work with business leaders to define how AI should be used, including what classes of internal data can be sent to approved vendors. Well-defined off-limits use cases should be established.<\/p>\n<p>In general, security teams should be monitoring the movement of data within their organization and identifying key sources of risk, AI or otherwise. AI watermarking may help identify AI-generated content but does not prevent sensitive information from being lost in the first place.<\/p>\n<p><a href=\"https:\/\/www.csoonline.com\/article\/569559\/what-is-dlp-how-data-loss-prevention-software-works-and-why-you-need-it.html\">Data loss prevention (DLP)<\/a> can help identify the export of at-risk information, but some experts argue the technology is limited as a means for constraining leaks through gen AI tools.<\/p>\n<p>Peter Garraghan, CEO and co-founder at Mindgard, an AI security testing company, warned that generative AI introduces a new class of risks that go beyond what conventional controls such as blocking, DLP, and real-time coaching can effectively manage.<\/p>\n<p>\u201cThe issue lies in the sophistication and opacity \u2014 or black-box nature \u2014 of modern AI systems,\u201d Garraghan explained. 
\u201cSensitive information can be ingested, transformed, and even obfuscated within an AI model or application before it is output to the user.\u201d<\/p>\n<p>Garraghan continued: \u201cIn these cases, standard controls have limited means of recognizing the underlying data or context, meaning potentially sensitive information could be exfiltrated without triggering any alerts.\u201d<\/p>\n<p>To truly secure generative AI, organizations need a layer of protection purpose-built for this new paradigm. This includes security testing tools that can surface and document these vulnerabilities, alongside runtime detection of AI-specific flaws.<\/p>\n<p>\u201cThese are issues that only surface during model execution, such as data leakage through embedding or encoding,\u201d added Garraghan, a professor of computer science at the UK\u2019s Lancaster University.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Generative AI\u2019s many benefits come with the drawback of data security risks, primarily through shadow AI use and the leakage of sensitive information. These risks are being compounded in the enterprise as workers often use private gen AI accounts to process sensitive data. 
While most organizations (90%) offer sanctioned generative AI apps and even more [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":2793,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-2792","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/2792"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2792"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/2792\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/2793"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2792"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2792"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2792"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}