{"id":6324,"date":"2025-12-23T07:00:00","date_gmt":"2025-12-23T07:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=6324"},"modified":"2025-12-23T07:00:00","modified_gmt":"2025-12-23T07:00:00","slug":"agentic-ai-already-hinting-at-cybersecuritys-pending-identity-crisis","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=6324","title":{"rendered":"Agentic AI already hinting at cybersecurity\u2019s pending identity crisis"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>Making the most of agentic AI is a top agenda item for many enterprises in the coming year, as business executives are keen to deploy autonomous AI agents to revamp a range of business operations and workflows.<\/p>\n<p>The technology is nascent and, as with generative AI rollouts, CIOs are under pressure to move quickly with agentic AI strategies \u2014 a <a href=\"https:\/\/www.csoonline.com\/article\/4047974\/agentic-ai-a-cisos-security-nightmare-in-the-making.html\">potential nightmare in the making for CISOs<\/a> charged with ensuring organizational security in the <a href=\"https:\/\/www.csoonline.com\/article\/3529615\/companies-skip-security-hardening-in-rush-to-adopt-ai.html\">face of widespread agentic experimentation<\/a> and deployment.<\/p>\n<p>A key area of concern is identity and authentication. 
Some security experts estimate that more than 95% of enterprises deploying or experimenting with autonomous agents are doing so without leveraging existing cybersecurity mechanisms \u2014 such as <a href=\"https:\/\/www.csoonline.com\/article\/567339\/what-is-pki-and-how-it-secures-just-about-everything-online.html\">public key infrastructure (PKI)<\/a> \u2014 to track, identify, and control their agents.<\/p>\n<p>This issue becomes even more dangerous due to the prevalence of <a href=\"https:\/\/www.cio.com\/article\/3991302\/ai-protocols-set-standards-for-scalable-results.html\">agent-to-agent communications<\/a> common to agentic AI rollouts.<\/p>\n<p>For agentic AI to work, AI agents must communicate autonomously with other agents to pass tasks, data, and context. Without sufficient <a href=\"https:\/\/www.csoonline.com\/article\/518296\/what-is-iam-identity-and-access-management-explained.html\">identity management<\/a>, authentication, and related cybersecurity measures in place, not only could an agent be controlled by a cybercriminal or state actor, but rogue agents could engage in a variety of prompt injection attacks with an unlimited number of legitimate agents.<\/p>\n<p>Should a hijacked agent communicate with an enterprise\u2019s legitimate agents, detecting it and pulling its credentials will not be enough to halt the damage from legitimate agents following the rogue agent\u2019s previous instructions.\u00a0<\/p>\n<p>And the likelihood of this knock-on effect isn\u2019t trivial. 
Most robust authentication mechanisms today revoke and\/or shut down credentials when bad behavior is detected. But behavioral analytics systems often need to witness acts of bad behavior before they can flag the problem to terminate the ID. Any actions previously initiated by the compromised agent will already be in motion across the agentic chain.<\/p>\n<p>Having a trail of every interaction and an automated system for contacting all legitimate agents that interacted with the rogue agent to tell them to disregard instructions from that agent \u2014 and alert IT security of any actions already taken on the rogue\u2019s instructions \u2014 is the goal, but vendors have yet to address this need. Moreover, many security experts argue it\u2019s too complex a problem to easily solve.<\/p>\n<p>\u201cBecause autonomous agents are increasingly able to execute real actions within an organization, if a malicious actor can affect the decision-making layer of an autonomous agent, the resulting damage could be exponentially greater than in a traditional breach scenario,\u201d says <a href=\"https:\/\/www.linkedin.com\/in\/nikkale\">Nik Kale<\/a>, principal engineer at Cisco as well as a member of the Coalition for Secure AI (CoSAI) and ACM\u2019s AI Security (AISec) program committee.<\/p>\n<h2 class=\"wp-block-heading\">The ever-expanding attack surface of autonomous agents<\/h2>\n<p>\u201cBecause agents are programmed to follow instructions, they will likely follow a questionable instruction absent some mechanism to force the agent to slow its process to validate the safety of the request,\u201d Kale says. \u201cHumans have intuition and therefore often sense when something does not feel right. 
Agents do not possess this instinctual sense and thus will follow any request unless the system specifically prevents them from doing so.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/gwlongsine\/\">Gary Longsine<\/a>, CEO at IllumineX, agrees that the cybersecurity risks from uncontrolled agentic deployment are unlike anything CISOs have faced.<\/p>\n<p>\u201cThe attack surface of the AI agent could be thought of as essentially infinite, due to the natural language interface and the ability of the agent to summon a potentially vast array of other agentic systems,\u201d Longsine says.<\/p>\n<p>DigiCert CTO <a href=\"https:\/\/www.digicert.com\/blog\/author\/jason-sabin\">Jason Sabin<\/a> suggests the situation may be even worse because of how relatively easy it is to perform an agent hijacking.<\/p>\n<p>\u201cWithout robust agentic authentication, organizations risk deploying autonomous systems that can be hijacked with a single fake instruction,\u201d Sabin claims.\u00a0<\/p>\n<h2 class=\"wp-block-heading\">Agentic AI\u2019s identity crisis<\/h2>\n<p>Authentication and agentic experts interviewed \u2014 three of whom estimate that less than 5% of enterprises experimenting with autonomous agents have deployed agentic identity systems \u2014 say the reasons for this lack of security hardening are varied.<\/p>\n<p>First, many of these efforts are <a href=\"https:\/\/www.csoonline.com\/article\/3964282\/cisos-no-closer-to-containing-shadow-ais-skyrocketing-data-risks.html\">effectively shadow IT<\/a>, where a line of business (LOB) executive has authorized the proof of concept to see what these agents can do. 
In these cases, IT or cyber teams likely haven\u2019t been involved, and so security hasn\u2019t been a top priority for the POC.<\/p>\n<p>Second, many executives \u2014 including third-party business partners handling supply chain, distribution, or manufacturing \u2014 have historically cut corners for POCs because they are traditionally confined to sandboxes isolated from the enterprise\u2019s live environments.\u00a0<\/p>\n<p>But agentic systems don\u2019t work that way. To test their capabilities, they typically need to be released into the general environment.\u00a0<\/p>\n<p>The proper way to proceed is for every agent in your environment \u2014 whether IT authorized, LOB launched, or that of a third party \u2014 to be tracked and controlled by PKI identities from agentic authentication vendors. Extreme defense would include instructing all authorized agents to refuse communication from any agent without full identification. Unfortunately, autonomous agents \u2014 like their gen AI cousins \u2014 <a href=\"https:\/\/www.computerworld.com\/article\/4104814\/the-biggest-ai-mistake-pretending-guardrails-will-ever-protect-you.html\">often ignore instructions (aka guardrails)<\/a>.\u00a0<\/p>\n<p>\u201cAgentic-friendly encounters conflict with essential security principles. Enterprises cannot risk scenarios where agents autonomously discover each other, establish communication channels, and form transactional relationships,\u201d says <a href=\"https:\/\/www.tcs.com\/insights\/authors\/dr-kanwar-preet-singh-sandhu\">Kanwar Preet Singh Sandhu<\/a>, who tracks cybersecurity strategies for Tata Consultancy Services.<\/p>\n<p>\u201cWhen IT designs a system, its tasks and objectives should be clearly defined and restricted to those duties,\u201d he adds. 
\u201cWhile agent-to-agent encounters are technically possible, they pose serious risks to principles like least privilege and segregation of duties. For structured and planned collaboration or integration, organizations must follow stringent protocols such as MCP [Model Context Protocol] and A2A [Agent to Agent], which were created precisely for this purpose.\u201d<\/p>\n<p>DigiCert\u2019s Sabin says his interactions with enterprises revealed \u201clittle to none\u201d are creating identities for their autonomous agents. \u201cDefinitely less than 10%, probably less than 5%. There is a huge gap in identity.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Agentic IDs: Putting the genie back in the bottle<\/h2>\n<p>Once agentic experiments begin without proper identities established, it\u2019s far more difficult to add identity authentication later, Sabin notes.<\/p>\n<p>\u201cHow do we start adding in identity after the fact? They don\u2019t have these processes established. The agent can and will be hijacked, compromised. You have to have a kill switch,\u201d he says. \u201cAI agents\u2019 ability to verify who is issuing a command and whether that human\/system has authority is one of the defining security issues of agentic AI.\u201d<\/p>\n<p>To address that issue, CISOs will likely need to <a href=\"https:\/\/www.csoonline.com\/article\/4089732\/rethinking-identity-for-the-ai-era-cisos-must-build-trust-at-machine-speed.html\">rethink identity, authentication, and privilege<\/a>.\u00a0<\/p>\n<p>\u201cWhat is truly challenging about this is that we are no longer determining how a human authenticates to a system. We are now asked to determine how an autonomous agent determines that the individual providing instructions is legitimate and that the instructions are within the expected pattern of action,\u201d Cisco\u2019s Kale says. 
\u201cThe shift to determining legitimacy based on the autonomous agent\u2019s assessment of the human\u2019s intent, rather than simply identifying the human, introduces a whole new range of risk factors that were never anticipated by traditional authentication methods.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/ishraqkhann\/\">Ishraq Khan<\/a>, CEO of coding productivity tool vendor Kodezi, also believes CISOs are likely underestimating the <a href=\"https:\/\/www.csoonline.com\/article\/3574697\/beyond-chatgpt-the-rise-of-agentic-ai-and-its-implications-for-security.html\">security threats that exist within agentic AI systems<\/a>.<\/p>\n<p>\u201cTraditional authentication frameworks assume static identities and predictable request patterns. Autonomous agents create a new category of risk because they initiate actions independently, escalate behavior based on memory, and form new communication pathways on their own. The threat surface becomes dynamic, not static,\u201d Khan says. \u201cWhen agents update their own internal state, learn from prior interactions, or modify their role within a workflow, their identity from a security perspective changes over time. Most organizations are not prepared for agents whose capabilities and behavior evolve after authentication.\u201d<\/p>\n<p>Khan adds: \u201cA compromised agent can impersonate collaboration patterns, fabricate system state or manipulate other agents into cascading failures. This is not simply malware. It is a behavioral attack on decision-making.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/harishperi\">Harish Peri<\/a>, SVP and general manager of AI Security at Okta, puts it more directly: \u201cThis is not just an <a href=\"https:\/\/www.csoonline.com\/article\/3476130\/nhis-may-be-your-biggest-and-most-neglected-security-hole.html\">NHI problem<\/a>. This is a recipe for disaster. 
It is a new kind of identity, a new kind of relentless user.\u201d<\/p>\n<p>Regarding the problem of being unable to undo the damage when a hijacked agent gives malicious instructions to legitimate agents, Peri says it can be a challenging problem that no one seems to have solved yet.<\/p>\n<p>\u201cIf the risk signal is strong enough, we do have the capability to revoke not just the privilege but the access token,\u201d Peri says. But \u201cthe real-time kind of chaining requires more thought.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Unwinding agent interactions will be a tall order<\/h2>\n<p>One issue is that tracking interactions for backward chaining will require a massive amount of data to be captured from every agent in the enterprise environment. And given that autonomous agents act at non-human speed, a data warehouse for that activity will likely fill up quickly.<\/p>\n<p>\u201cBy the time the agent does something and identity gets revoked, all of the downstream agents have already interacted with that compromised agent. They have already accepted assignments and have already queued up their next-step actions,\u201d Cisco\u2019s Kale explains. \u201cThere is no mechanism to propagate that revocation backwards. Kill switches are necessary but they are incomplete.\u201d<\/p>\n<p>The process to go backwards to all contacted agents \u201csounds like a straightforward script. It looks easy until you try and do it properly,\u201d he says. \u201cYou need to know every instruction an agent has issued and the hard part is deciding what to undo\u201d \u2014 a scenario Kale likens to alert fatigue. \u201cThis could absolutely collapse from its own weight. 
This could all become noise and not security at that point.\u201d<\/p>\n<p><a href=\"https:\/\/www.sectigo.com\/contributors\/jason-soroko\">Jason Soroko<\/a>, a senior fellow at Sectigo, agrees that backward alerting of impacted agents \u201cis nowhere near to being fully solved at this time.\u201d\u00a0<\/p>\n<p>But he argues that agentic cybersecurity has inadvertently painted itself into a corner.\u00a0<\/p>\n<p>\u201cA lot of autonomous AI agent authentication will rely on a simple API token to verify itself. We have inadvertently built a weapon waiting for a stolen shared secret,\u201d Soroko says. \u201cTo fix this, we must move beyond shared secrets to cryptographic proof of possession, ensuring the agent verifies the \u2018who\u2019 behind the command, not just the \u2018concert wristband\u2019 authenticator.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Making the most of agentic AI is a top agenda item for many enterprises in the coming year, as business executives are keen to deploy autonomous AI agents to revamp a range of business operations and workflows. 
The technology is nascent and, as with generative AI rollouts, CIOs are under pressure to move quickly with agentic [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":6310,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-6324","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/6324"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6324"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/6324\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/6310"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6324"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6324"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6324"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}