{"id":4632,"date":"2025-09-02T07:00:00","date_gmt":"2025-09-02T07:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=4632"},"modified":"2025-09-02T07:00:00","modified_gmt":"2025-09-02T07:00:00","slug":"agentic-ai-a-cisos-security-nightmare-in-the-making","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=4632","title":{"rendered":"Agentic AI: A CISO\u2019s security nightmare in the making?"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>Enterprises will no doubt be using agentic AI for a growing number of workflows and processes, including software development, customer support automation, <a href=\"https:\/\/www.cio.com\/article\/227908\/what-is-rpa-robotic-process-automation-explained.html\">robotic process automation (RPA)<\/a>, and employee support. Among the key questions for CISOs and their staffs: What are the cybersecurity risks of agentic AI, and how much more work will it take for them to support their organizations\u2019 agentic AI dreams?<\/p>\n<p>In a 2024 report noting how threat actors could leverage AI in 2025, Cisco Talos said AI systems and models that can act autonomously to achieve goals without the need for constant human guidance could imperil organizations that are neither prepared nor equipped to handle agentic systems and their potential for compromise.<\/p>\n<h5 class=\"wp-block-heading\"><strong>[ Related: <\/strong><a href=\"https:\/\/www.computerworld.com\/article\/3843138\/agentic-ai-ongoing-coverage-of-its-impact-on-the-enterprise.html\"><strong>Agentic AI \u2013 News and insights<\/strong><\/a><strong> ]<\/strong><\/h5>\n<p>\u201cAs agentic systems increasingly integrate with disparate services and vendors, the opportunity for exploitation or vulnerability is ripe,\u201d the report said. 
\u201cAgentic systems may also have the potential to conduct multi-stage attacks, find creative ways to access restricted data systems, chain seemingly benign actions into harmful sequences, or learn to evade detection by network and system defenders.\u201d<\/p>\n<p>It\u2019s clear that agentic AI will be a significant challenge for cybersecurity teams and that CISOs need to be part of the conversation as their organizations adopt the technology. This is especially true given that many companies have jumped into all aspects of AI <a href=\"https:\/\/www.csoonline.com\/article\/3529615\/companies-skip-security-hardening-in-rush-to-adopt-ai.html\">without giving much thought to hardening their systems<\/a>.<\/p>\n<p>With enterprises embracing agentic AI to <a href=\"https:\/\/www.cio.com\/article\/3829620\/how-to-know-a-business-process-is-ripe-for-agentic-ai.html\">enhance efficiency, decision-making, and automation<\/a>, \u201cthey must also confront a new class of cyber risks,\u201d says <a href=\"https:\/\/www.pwc.com\/gx\/en\/contacts\/s\/sean-joyce.html\">Sean Joyce<\/a>, global cybersecurity and privacy leader at consulting firm PwC. \u201cUnlike traditional AI models that respond to direct prompts, agentic AI systems can act autonomously toward high-level goals, make decisions, interact with other systems, and adapt their behavior over time.\u201d<\/p>\n<p>For CISOs, \u201cunderstanding and mitigating these emerging risks is critical to fostering safe and responsible AI adoption,\u201d Joyce says.<\/p>\n<p>Here are some of the key issues and risks involved.<\/p>\n<h2 class=\"wp-block-heading\">Lack of visibility and the rise of shadow AI<\/h2>\n<p>CISOs don\u2019t like operating in the dark, and this is one of the risks agentic AI brings. 
It can be deployed autonomously by teams or even individual users through a variety of applications without proper oversight from security and IT departments.<\/p>\n<p>This creates \u201cshadow AI agents\u201d that can operate without controls such as authentication, which makes it difficult to track their actions and behavior. This in turn can pose significant security risks, because unseen agents can introduce vulnerabilities.<\/p>\n<p>\u201cA lack of visibility creates several risks for organizations, including security risks, governance\/compliance issues, operational risks, and a lack of transparency [that] can lead to a loss of trust by employees, vendors, etc., and hinder AI adoption,\u201d says <a href=\"https:\/\/www.cm.law\/people\/reena-richtermeyer\/\">Reena Richtermeyer<\/a>, partner at CM Law who represents AI developers and large corporations and government entities implementing AI.<\/p>\n<p>\u201cThe biggest issue we see truly is a lack of visibility,\u201d says <a href=\"https:\/\/nwai.co\/author\/wyatt\/\">Wyatt Mayham<\/a>, lead AI consultant at consulting firm Northwest AI. Agentic AI often starts on the edge where individuals are setting up ChatGPT and other tools to automate tasks, he says.<\/p>\n<p>\u201cAnd these agents aren\u2019t sanctioned or reviewed by IT, which means they\u2019re not logged, versioned, or governed,\u201d Mayham says. CISOs are accustomed to shadow software-as-a-service (SaaS), he says, and now they need to <a href=\"https:\/\/www.cio.com\/article\/2150142\/10-ways-to-prevent-shadow-ai-disaster.html\">contend with shadow AI behavior<\/a>.<\/p>\n<p>\u201cOrganizations frequently lack awareness of where these systems are implemented, who is using them, and the extent of their autonomy,\u201d says <a href=\"https:\/\/www.linkedin.com\/in\/dr-pablo-riboldi\/\">Pablo Riboldi<\/a>, CISO of BairesDev, a nearshore software development company. 
\u201cThis results in a significant shadow-IT issue, as security teams might remain oblivious to agents that are making real-time decisions or accessing sensitive systems without centralized control.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Free agents: Autonomy breeds increased risks<\/h2>\n<p>Agentic AI introduces the <a href=\"https:\/\/www.cio.com\/article\/4003880\/how-ai-agents-and-agentic-ai-differ-from-each-other.html\">ability to make independent decisions<\/a> and act without human oversight. This capability presents its own cybersecurity risk: actions no human has reviewed can leave organizations exposed.<\/p>\n<p>\u201cAgentic AI systems are goal-driven and capable of making decisions without direct human approval,\u201d Joyce says. \u201cWhen objectives are poorly scoped or ambiguous, agents may act in ways that are misaligned with enterprise security or ethical standards.\u201d<\/p>\n<p>For example, if an agent is told to reduce \u201cnoise\u201d in the security operations center, it might interpret this too literally and suppress valid alerts in its effort to streamline operations, leaving an organization blind to an active intrusion, Joyce says.<\/p>\n<p>Agentic AI systems are designed to act independently, but without strong governance, this autonomy can quickly become a liability, Riboldi says. \u201cA seemingly harmless agent given vague or poorly scoped instructions might overstep its boundaries, initiating workflows, altering data, or interacting with critical systems in unintended ways,\u201d he says.<\/p>\n<p>In an agentic AI environment, \u201cthere is a lot of autonomous action without oversight,\u201d Mayham says. \u201cUnlike traditional automation, agents make choices that could mean clicking links, sending emails, triggering workflows. And this is all based on probabilistic reasoning. When those choices go wrong, it\u2019s hard to reconstruct why. 
We\u2019ve seen [clients] of ours accidentally exposing sensitive internal URLs by misunderstanding what safe-to-share means.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Multi-agent systems: Unwanted data-sharing consequences<\/h2>\n<p>Multi-agent systems hold great promise for the enterprise, but AI agents interacting and sharing data with one another introduce risks related to security, privacy, and the potential for unintended consequences, CM Law\u2019s Richtermeyer says. \u201cThese risks stem from the agents\u2019 ability to access vast amounts of data, their autonomous nature, and the complexity of managing multi-agent AI systems,\u201d she says.<\/p>\n<p>For example, AI agents can access and process sensitive information that might be governed contractually or heavily regulated, leading to unauthorized use or disclosure that creates potential liability for an organization, Richtermeyer says.<\/p>\n<p>\u201cAs soon as you have a multi-agent setup, you introduce coordination risk,\u201d Northwest AI\u2019s Mayham says. \u201cOne agent might expand the scope of a task in a way another agent wasn\u2019t trained to handle. Without sandboxing, this can lead to unpredictable system behavior, especially if the agents are ingesting fresh real-world data.\u201d<\/p>\n<p>Agents often collaborate with other agents to complete tasks, resulting in complex chains of communication and decision-making, PwC\u2019s Joyce says. \u201cThese interactions can propagate sensitive data in unintended ways, creating compliance and security risks,\u201d he says.<\/p>\n<p>For example, a customer service agent summarizes account details for an internal agent handling retention analysis. 
That second agent stores the data in an unprotected location for later use, violating internal data handling policies.<\/p>\n<h2 class=\"wp-block-heading\">Third-party integration: Supercharging supply-chain risks<\/h2>\n<p>Agents can also integrate and share data with third-party partners\u2019 applications via APIs, presenting yet another challenge for CISOs: each connection to a disparate service or vendor creates fresh opportunity for exploitation or vulnerability.<\/p>\n<p>Agentic AI relies heavily on APIs and external integrations, Riboldi says. \u201cAs an agent gains access to more systems, its behavior becomes increasingly complex and unpredictable,\u201d he says. \u201cThis scenario introduces supply chain risks, as a vulnerability in any third-party service could be exploited or inadvertently triggered through agentic interactions across different platforms.\u201d<\/p>\n<p>Many early-stage agents rely on brittle or undocumented APIs or browser automation, Mayham says. \u201cWe\u2019ve seen cases where agents leak tokens via poorly scoped integrations, or exfiltrate data through unexpected plugin chains. The more fragmented the vendor stack, the bigger the surface area for something like this to happen,\u201d he says. \u201cThe AI coding tools are notorious for this.\u201d<\/p>\n<p>\u201cEach integration point expands the attack surface and may introduce supply-chain vulnerabilities,\u201d Joyce says. \u201cFor example, an AI agent integrates with a third-party HR platform to automate onboarding. 
The vendor\u2019s API has a known vulnerability, which an attacker exploits to gain lateral access to internal HR systems.\u201d<\/p>\n<p>Many agentic tools rely on open-source libraries and orchestration frameworks, which might <a href=\"https:\/\/www.csoonline.com\/article\/4015077\/ai-supply-chain-threats-are-looming-as-security-practices-lag.html\">harbor vulnerabilities unknown to security teams<\/a>, Joyce adds.<\/p>\n<h2 class=\"wp-block-heading\">Multi-stage attacks: Blurring the line between error and exploitation<\/h2>\n<p>Agentic systems also have the potential to conduct multi-stage attacks, finding new ways into restricted data systems while evading detection by security tools.<\/p>\n<p>\u201cAs agentic systems become more sophisticated, they may inadvertently develop or learn multi-step behaviors that mimic multi-stage attacks,\u201d Riboldi says. \u201cWorse, they might unintentionally discover ways to bypass traditional detection methods \u2014 not because they are malicious, but because their goal-oriented behavior rewards evasion.\u201d<\/p>\n<p>This blurs the line between error and exploitation, Riboldi says, and makes it harder for security teams to tell whether an incident was a malicious attack, emergent behavior, or both.<\/p>\n<p>This type of risk \u201cis less theoretical than it sounds,\u201d Mayham says. \u201cIn lab tests, we\u2019ve seen agents chain tools together in unexpected ways, not maliciously but creatively. Now imagine that same reasoning ability being exploited to probe systems, test endpoints, and avoid pattern-based detection tools.\u201d<\/p>\n<p>Because agentic AI can learn from feedback, it might alter its behavior to avoid triggering detection systems \u2014 intentionally or unintentionally, Joyce says. \u201cThis presents a serious challenge for traditional rule-based detection and response tools,\u201d he says. 
An agent could determine that certain actions trigger alerts from an endpoint detection platform, and adjust its method to stay under detection thresholds, similar to how malware adapts to antivirus scans.<\/p>\n<h2 class=\"wp-block-heading\">A new paradigm requires new defense models<\/h2>\n<p>Agentic AI represents a powerful new model, Joyce says, but also a radically different cybersecurity challenge. \u201cIts autonomy, adaptability, and interconnectivity make it both a productivity multiplier and a potential attack vector,\u201d he says. \u201cFor CISOs, traditional security models are no longer sufficient.\u201d<\/p>\n<p>According to Joyce, a robust agentic AI defense strategy must include the following fundamentals:<\/p>\n<ul class=\"wp-block-list\">\n<li>Real-time observability and telemetry<\/li>\n<li>Tightly scoped governance policies<\/li>\n<li>Secure-by-design development practices<\/li>\n<li>Cross-functional coordination between security, IT, data management, and compliance teams<\/li>\n<\/ul>\n<p>\u201cBy adopting a proactive, layered security approach and embedding governance from the start, organizations can safely harness the promise of agentic AI while minimizing the risks it brings,\u201d he says.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Enterprises will no doubt be using agentic AI for a growing number of workflows and processes, including software development, customer support automation, robotic process automation (RPA), and employee support. 
Among the key questions for CISOs and their staffs: What are the cybersecurity risks of agentic AI, and how much more work will it take for [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":4625,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-4632","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/4632"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4632"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/4632\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/4625"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4632"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4632"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4632"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}