Making the most of agentic AI is a top agenda item for many enterprises in the coming year, as business executives are keen to deploy autonomous AI agents to revamp a range of business operations and workflows.
The technology is nascent and, as with generative AI rollouts, CIOs are under pressure to move quickly with agentic AI strategies — a potential nightmare in the making for CISOs charged with ensuring organizational security in the face of widespread agentic experimentation and deployment.
A key area of concern is identity and authentication. Some security experts estimate that more than 95% of enterprises deploying or experimenting with autonomous agents are doing so without leveraging existing cybersecurity mechanisms — such as public key infrastructure (PKI) — to track, identify, and control their agents.
This issue becomes even more dangerous due to the prevalence of agent-to-agent communications common to agentic AI rollouts.
For agentic AI to work, AI agents must communicate autonomously with other agents to pass tasks, data, and context. Without sufficient identity management, authentication, and related cybersecurity measures in place, not only could an agent be controlled by a cybercriminal or state actor, but rogue agents could engage in a variety of prompt injection attacks with an unlimited number of legitimate agents.
Should a hijacked agent communicate with an enterprise’s legitimate agents before it is detected and its credentials are pulled, the damage from legitimate agents following the rogue agent’s instructions isn’t halted.
And the likelihood of this knock-on effect isn’t trivial. Most robust authentication mechanisms today revoke and/or shut down credentials when bad behavior is detected. But behavioral analytics systems often need to witness acts of bad behavior before they can flag the problem to terminate the ID. Any actions previously initiated by the compromised agent will already be in motion across the agentic chain.
Having a trail of every interaction and an automated system for contacting all legitimate agents that interacted with the rogue agent to tell them to disregard instructions from that agent — and alert IT security of any actions already taken on the rogue’s instructions — is the goal, but vendors have yet to address this need. Moreover, many security experts argue it’s too complex a problem to easily solve.
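In practice, the first building block for such a trail is an append-only record of who instructed whom, tied to each agent’s verified identity. The Python sketch below is a rough illustration only; every class, method, and field name is invented for the example rather than drawn from any vendor’s product.

```python
# Minimal sketch of an agent-to-agent interaction trail. All names
# (Interaction, InteractionLedger, record, sent_by) are hypothetical
# illustrations, not a vendor API.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class Interaction:
    sender_id: str     # verified identity of the instructing agent
    receiver_id: str   # verified identity of the agent that accepted the task
    instruction: str   # what the receiver was asked to do
    timestamp: datetime


@dataclass
class InteractionLedger:
    entries: list[Interaction] = field(default_factory=list)

    def record(self, sender_id: str, receiver_id: str, instruction: str) -> None:
        """Append one agent-to-agent exchange; entries are never rewritten."""
        self.entries.append(
            Interaction(sender_id, receiver_id, instruction,
                        datetime.now(timezone.utc))
        )

    def sent_by(self, agent_id: str) -> list[Interaction]:
        """Every instruction a given agent has issued, for later review."""
        return [e for e in self.entries if e.sender_id == agent_id]


# Usage: whatever broker or gateway delivers agent-to-agent messages would
# call record() as a side effect, so the trail exists before any compromise
# is detected.
ledger = InteractionLedger()
ledger.record("agent-procurement", "agent-payments", "pay invoice 4411")
```

Recording the trail is the easier half of the problem; as the experts quoted below argue, acting on it after an agent is flagged is where the real difficulty lies.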
“Because autonomous agents are increasingly able to execute real actions within an organization, if a malicious actor can affect the decision-making layer of an autonomous agent, the resulting damage could be exponentially greater than in a traditional breach scenario,” says Nik Kale, principal engineer at Cisco as well as a member of the Coalition for Secure AI (CoSAI) and ACM’s AI Security (AISec) program committee.
The ever-expanding attack surface of autonomous agents
“Because agents are programmed to follow instructions, they will likely follow a questionable instruction absent some mechanism to force the agent to slow its process to validate the safety of the request,” Kale says. “Humans have intuition and therefore often sense when something does not feel right. Agents do not possess this instinctual sense and thus will follow any request unless the system specifically prevents them from doing so.”
Gary Longsine, CEO at IllumineX, agrees that the cybersecurity risks from uncontrolled agentic deployment are unlike anything CISOs have faced.
“The attack surface of the AI agent could be thought of as essentially infinite, due to the natural language interface and the ability of the agent to summon a potentially vast array of other agentic systems,” Longsine says.
DigiCert CTO Jason Sabin suggests the situation may be even worse because of how relatively easy it is to perform an agent hijacking.
“Without robust agentic authentication, organizations risk deploying autonomous systems that can be hijacked with a single fake instruction,” Sabin claims.
Agentic AI’s identity crisis
Authentication and agentic experts interviewed — three of whom estimate that less than 5% of enterprises experimenting with autonomous agents have deployed agentic identity systems — say the reasons for this lack of security hardening are varied.
First, many of these efforts are effectively shadow IT, where a line of business (LOB) executive has authorized the proof of concept to see what these agents can do. In these cases, IT or cyber teams haven’t likely been involved, and so security hasn’t been a top priority for the POC.
Second, many executives — including third-party business partners handling supply chain, distribution, or manufacturing — have historically cut corners for POCs because they are traditionally confined to sandboxes isolated from the enterprise’s live environments.
But agentic systems don’t work that way. To test their capabilities, they typically need to be released into the general environment.
The proper way to proceed is for every agent in your environment — whether authorized by IT, launched by an LOB, or operated by a third party — to be tracked and controlled by PKI identities from agentic authentication vendors. Extreme defense would include instructing all authorized agents to refuse communication from any agent without full identification. Unfortunately, autonomous agents — like their gen AI cousins — often ignore instructions (aka guardrails).
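In practice, refusing unidentified agents means a deterministic gatekeeper in front of each agent that rejects any message whose sender cannot present a registered, currently valid identity. The sketch below is purely illustrative; the registry, the certificate-validity callback, and every name in it are assumptions for the example, not any vendor’s API.

```python
# Illustrative gatekeeper: refuse any agent-to-agent message whose sender
# cannot be tied to a registered, currently valid identity. The fingerprint
# registry and the validity callback stand in for a PKI-backed agent
# identity service; both are hypothetical.
from typing import Callable


class UnidentifiedAgentError(Exception):
    """Raised when a message arrives without a verifiable sender identity."""


def make_gatekeeper(
    registered_fingerprints: dict[str, str],
    is_certificate_valid: Callable[[str], bool],
) -> Callable[[dict], dict]:
    def accept(message: dict) -> dict:
        sender = message.get("sender_id")
        fingerprint = message.get("cert_fingerprint")
        if sender is None or fingerprint is None:
            raise UnidentifiedAgentError("message carries no identity")
        if registered_fingerprints.get(sender) != fingerprint:
            raise UnidentifiedAgentError(f"unknown or mismatched identity: {sender}")
        if not is_certificate_valid(fingerprint):
            raise UnidentifiedAgentError(f"certificate revoked or expired: {sender}")
        return message  # only now is the payload handed to the agent itself

    return accept
```

The design point is that the refusal happens in ordinary code outside the model, so unlike a prompt-level guardrail, the agent cannot be talked out of enforcing it.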
“Agentic-friendly encounters conflict with essential security principles. Enterprises cannot risk scenarios where agents autonomously discover each other, establish communication channels, and form transactional relationships,” says Kanwar Preet Singh Sandhu, who tracks cybersecurity strategies for Tata Consultancy Services.
“When IT designs a system, its tasks and objectives should be clearly defined and restricted to those duties,” he adds. “While agent-to-agent encounters are technically possible, they pose serious risks to principles like least privilege and segregation of duties. For structured and planned collaboration or integration, organizations must follow stringent protocols such as MCP [Model Context Protocol] and A2A [Agent to Agent], which were created precisely for this purpose.”
DigiCert’s Sabin says his interactions with enterprises revealed “little to none” creating identities for their autonomous agents. “Definitely less than 10%, probably less than 5%. There is a huge gap in identity.”
Agentic IDs: Putting the genie back in the bottle
Once agentic experiments begin without proper identities established, it’s far more difficult to add identity authentication later, Sabin notes.
“How do we start adding in identity after the fact? They don’t have these processes established. The agent can and will be hijacked, compromised. You have to have a kill switch,” he says. “AI agents’ ability to verify who is issuing a command and whether that human/system has authority is one of the defining security issues of agentic AI.”
To address that issue, CISOs will likely need to rethink identity, authentication, and privilege.
“What is truly challenging about this is that we are no longer determining how a human authenticates to a system. We are now asked to determine how an autonomous agent determines that the individual providing instructions is legitimate and that the instructions are within the expected pattern of action,” Cisco’s Kale says. “The shift to determining legitimacy based on the autonomous agent’s assessment of the human’s intent, rather than simply identifying the human, introduces a whole new range of risk factors that were never anticipated by traditional authentication methods.”
Ishraq Khan, CEO of coding productivity tool vendor Kodezi, also believes CISOs are likely underestimating the security threats that exist within agentic AI systems.
“Traditional authentication frameworks assume static identities and predictable request patterns. Autonomous agents create a new category of risk because they initiate actions independently, escalate behavior based on memory, and form new communication pathways on their own. The threat surface becomes dynamic, not static,” Khan says. “When agents update their own internal state, learn from prior interactions, or modify their role within a workflow, their identity from a security perspective changes over time. Most organizations are not prepared for agents whose capabilities and behavior evolve after authentication.”
Khan adds: “A compromised agent can impersonate collaboration patterns, fabricate system state or manipulate other agents into cascading failures. This is not simply malware. It is a behavioral attack on decision-making.”
Harish Peri, SVP and general manager of AI Security at Okta, puts it more directly: “This is not just an NHI [non-human identity] problem. This is a recipe for disaster. It is a new kind of identity, a new kind of relentless user.”
Regarding the inability to undo the damage once a hijacked agent has given malicious instructions to legitimate agents, Peri says it is a challenge no one seems to have solved yet.
“If the risk signal is strong enough, we do have the capability to revoke not just the privilege but the access token,” Peri says. But “the real-time kind of chaining requires more thought.”
Unwinding agent interactions will be a tall order
One issue is that tracking interactions for backward chaining will require a massive amount of data to be captured from every agent in the enterprise environment. And given that autonomous agents act at non-human speed, a data warehouse for that activity will likely fill up quickly.
“By the time the agent does something and identity gets revoked, all of the downstream agents have already interacted with that compromised agent. They have already accepted assignments and have already queued up its next step actions,” Cisco’s Kale explains. “There is no mechanism to propagate that revocation backwards. Kill switches are necessary but they are incomplete.”
The process to go backwards to all contacted agents “sounds like a straightforward script. It looks easy until you try and do it properly,” he says. “You need to know every instruction an agent has issued and the hard part is deciding what to undo” — a scenario Kale likens to alert fatigue. “This could absolutely collapse from its own weight. This could all become noise and not security at that point.”
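The “straightforward script” would essentially walk the interaction trail forward from the flagged agent. A minimal sketch, reusing the hypothetical ledger from earlier, shows why it looks easy and where it stops helping: the walk can enumerate everything that might need review, but it cannot decide which of those actions to actually reverse.

```python
# Sketch of the backward-chaining step: starting from a compromised agent,
# enumerate every downstream agent and instruction that may need review.
# Builds on the hypothetical InteractionLedger above; names are illustrative.
from collections import deque


def downstream_exposure(ledger, compromised_id: str):
    """Breadth-first walk of the interaction trail from a compromised agent.

    Returns (affected_agents, suspect_instructions). Everything returned is
    a candidate for review, not an undo list; deciding which actions are
    actually harmful is the part no script settles, and the set fans out
    with every hop.
    """
    affected, suspect = set(), []
    visited = {compromised_id}
    queue = deque([compromised_id])
    while queue:
        agent = queue.popleft()
        for entry in ledger.sent_by(agent):
            suspect.append(entry)
            affected.add(entry.receiver_id)
            if entry.receiver_id not in visited:
                visited.add(entry.receiver_id)
                queue.append(entry.receiver_id)  # it may have passed tasks on
    return affected, suspect
```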
Jason Soroko, a senior fellow at Sectigo, agrees that backward alerting of impacted agents “is nowhere near to being fully solved at this time.”
But he argues that agentic cybersecurity has inadvertently painted itself into a corner.
“A lot of autonomous AI agent authentication will rely on a simple API token to verify itself. We have inadvertently built a weapon waiting for a stolen shared secret,” Soroko says. “To fix this, we must move beyond shared secrets to cryptographic proof of possession, ensuring the agent verifies the ‘who’ behind the command, not just the ‘concert wristband’ authenticator.”
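Proof of possession in this context generally means the agent holds a private key it never transmits and signs a fresh challenge for each request, so a stolen token or replayed message is useless to an attacker. The exchange might look something like the following minimal sketch, which assumes an Ed25519 key pair and the third-party Python cryptography package; it illustrates the idea rather than any particular agent framework.

```python
# Minimal proof-of-possession sketch: the verifier issues a fresh nonce and
# the agent signs it with a private key it never transmits. A leaked bearer
# token or captured past exchange cannot answer a new challenge.
# Requires the third-party "cryptography" package.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Enrollment: the agent generates a key pair and registers only the public key.
agent_private_key = ed25519.Ed25519PrivateKey.generate()
registered_public_key = agent_private_key.public_key()

# Each request: the verifier sends a random, single-use challenge...
challenge = os.urandom(32)

# ...and the agent proves possession of its key by signing that challenge.
signature = agent_private_key.sign(challenge)

# The verifier checks the signature against the enrolled public key.
try:
    registered_public_key.verify(signature, challenge)
    print("agent proved possession of its key")
except InvalidSignature:
    print("reject: the sender does not hold the agent's private key")
```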