Agentic AI: A CISO’s security nightmare in the making?

Enterprises will no doubt be using agentic AI for a growing number of workflows and processes, including software development, customer support automation, robotic process automation (RPA), and employee support. Among the key questions for CISOs and their staffs: What are the cybersecurity risks of agentic AI, and how much more work will it take for them to support their organizations’ agentic AI dreams?

In a 2024 report noting how threat actors could leverage AI in 2025, Cisco Talos said AI systems and models that can act autonomously to achieve goals without the need for constant human guidance could imperil organizations that are neither prepared nor equipped to handle agentic systems and their potential for compromise.

“As agentic systems increasingly integrate with disparate services and vendors, the opportunity for exploitation or vulnerability is ripe,” the report said. “Agentic systems may also have the potential to conduct multi-stage attacks, find creative ways to access restricted data systems, chain seemingly benign actions into harmful sequences, or learn to evade detection by network and system defenders.”

It’s clear that agentic AI will be a significant challenge for cybersecurity teams and that CISOs need to be part of the conversation as their organizations adopt the technology. This is especially true given that many companies have jumped into all aspects of AI without giving much thought to hardening their systems.

With enterprises embracing agentic AI to enhance efficiency, decision-making, and automation, “they must also confront a new class of cyber risks,” says Sean Joyce, global cybersecurity and privacy leader at consulting firm PwC. “Unlike traditional AI models that respond to direct prompts, agentic AI systems can act autonomously toward high-level goals, make decisions, interact with other systems, and adapt their behavior over time.”

For CISOs, “understanding and mitigating these emerging risks is critical to fostering safe and responsible AI adoption,” Joyce says.

Here are some of the key issues and risks involved.

Lack of visibility and the rise of shadow AI

CISOs don’t like operating in the dark, and this is one of the risks agentic AI brings. It can be deployed autonomously by teams or even individual users through a variety of applications without proper oversight from security and IT departments.

This creates “shadow AI agents” that can operate without controls such as authentication, which makes it difficult to track their actions and behavior. This in turn can pose significant security risks, because unseen agents can introduce vulnerabilities.

“A lack of visibility creates several risks for organizations, including security risks, governance/compliance issues, operational risks, and a lack of transparency [that] can lead to a loss of trust by employees, vendors, etc., and hinder AI adoption,” says Reena Richtermeyer, partner at CM Law who represents AI developers and large corporations and government entities implementing AI.

“The biggest issue we see truly is a lack of visibility,” says Wyatt Mayham, lead AI consultant at consulting firm Northwest AI. Agentic AI often starts on the edge where individuals are setting up ChatGPT and other tools to automate tasks, he says.

“And these agents aren’t sanctioned or reviewed by IT, which means they’re not logged, versioned, or governed,” Mayham says. CISOs are accustomed to shadow software-as-a-service (SaaS), he says, and now they need to contend with shadow AI behavior.

“Organizations frequently lack awareness of where these systems are implemented, who is using them, and the extent of their autonomy,” says Pablo Riboldi, CISO of BairesDev, a nearshore software development company. “This results in a significant shadow-IT issue, as security teams might remain oblivious to agents that are making real-time decisions or accessing sensitive systems without centralized control.”
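One way security teams start to close that visibility gap is by mining proxy or egress logs for traffic to well-known hosted AI endpoints that no sanctioned project owns. The sketch below is a minimal illustration of that idea; the endpoint list, allow list, and log format are assumptions for the example, not a definitive inventory method.

```python
# Minimal sketch: flag potential shadow-AI usage from parsed proxy/egress logs.
# The endpoint list, allow list, and log format are illustrative assumptions.

from collections import defaultdict

# Hosts commonly associated with hosted LLM/agent APIs (illustrative, not exhaustive)
AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Destinations already covered by a sanctioned, reviewed integration (hypothetical)
SANCTIONED_HOSTS = {"api.openai.com"}

def find_shadow_ai(log_records):
    """log_records: iterable of (user, destination_host) tuples from proxy logs."""
    hits = defaultdict(set)
    for user, host in log_records:
        if host in AI_API_HOSTS and host not in SANCTIONED_HOSTS:
            hits[user].add(host)
    return hits

if __name__ == "__main__":
    sample = [
        ("alice", "api.anthropic.com"),
        ("bob", "api.openai.com"),  # sanctioned, ignored
        ("carol", "generativelanguage.googleapis.com"),
    ]
    for user, hosts in find_shadow_ai(sample).items():
        print(f"Review: {user} is calling unsanctioned AI endpoints: {sorted(hosts)}")
```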

Free agents: Autonomy breeds increased risks

Agentic AI introduces the ability to make independent decisions and act without human oversight. That capability is itself a cybersecurity risk, because actions no human has reviewed can leave organizations exposed.

“Agentic AI systems are goal-driven and capable of making decisions without direct human approval,” Joyce says. “When objectives are poorly scoped or ambiguous, agents may act in ways that are misaligned with enterprise security or ethical standards.”

For example, if an agent is told to reduce “noise” in the security operations center, it might interpret this too literally and suppress valid alerts in its effort to streamline operations, leaving an organization blind to an active intrusion, Joyce says.
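One common mitigation is to put a policy check between the agent's proposed action and the tooling it controls, so that scope limits are enforced in code rather than left to the agent's interpretation of its goal. A minimal sketch, with hypothetical action fields and a hypothetical severity threshold:

```python
# Minimal sketch: a guardrail that vets an agent's proposed SOC actions before
# they execute. Field names and the severity threshold are illustrative assumptions.

HIGH_SEVERITY = 7  # alerts at or above this severity may never be auto-suppressed

def approve_action(action: dict) -> bool:
    """Return True if the agent's proposed action is within its allowed scope."""
    if action.get("type") == "suppress_alert":
        # "Reduce noise" must never silence likely-real intrusions.
        if action.get("severity", 0) >= HIGH_SEVERITY:
            return False
        # Even low-severity suppression should leave an audit trail.
        action["requires_human_review"] = True
    return True

proposed = {"type": "suppress_alert", "alert_id": "A-1432", "severity": 9}
if not approve_action(proposed):
    print(f"Blocked: agent tried to suppress high-severity alert {proposed['alert_id']}")
```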

Agentic AI systems are designed to act independently, but without strong governance, this autonomy can quickly become a liability, Riboldi says. “A seemingly harmless agent given vague or poorly scoped instructions might overstep its boundaries, initiating workflows, altering data, or interacting with critical systems in unintended ways,” he says.

In an agentic AI environment, “there is a lot of autonomous action without oversight,” Mayham says. “Unlike traditional automation, agents make choices that could mean clicking links, sending emails, triggering workflows. And this is all based on probabilistic reasoning. When those choices go wrong it’s hard to reconstruct why. We’ve seen [clients] of ours accidentally exposing sensitive internal URLs by misunderstanding what safe-to-share means.”

Multi-agent systems: Unwanted data-sharing consequences

Multi-agent systems hold great promise for the enterprise, but AI agents interacting and sharing data with one another introduce risks related to security, privacy, and the potential for unintended consequences, CM Law’s Richtermeyer says. “These risks stem from the AI’s ability to access vast amounts of data, their autonomous nature, and the complexity of managing multi-agent AI systems,” she says.

For example, AI agents can access and process sensitive information that might be governed contractually or heavily regulated, leading to unauthorized use or disclosure that creates potential liability for an organization, Richtermeyer says.

“As soon as you have a multi-agent setup, you introduce coordination risk,” Northwest AI’s Mayham says. “One agent might expand the scope of a task in a way another agent wasn’t trained to handle. Without sandboxing, this can lead to unpredictable system behavior, especially if the agents are ingesting fresh real-world data.”

Agents often collaborate with other agents to complete tasks, resulting in complex chains of communication and decision-making, PwC’s Joyce says. “These interactions can propagate sensitive data in unintended ways, creating compliance and security risks,” he says.

For example, a customer service agent summarizes account details for an internal agent handling retention analysis. That second agent stores the data in an unprotected location for later use, violating internal data handling policies.
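A simple control for that failure mode is to classify data before it moves between agents and to redact regulated fields when the destination is not approved for them. A minimal sketch; the field names, classification labels, and destination policy are hypothetical:

```python
# Minimal sketch: block inter-agent handoffs that would move sensitive fields
# into a destination not approved for them. Labels and policies are illustrative.

SENSITIVE_FIELDS = {"ssn", "account_number", "date_of_birth"}

# Which destinations are approved for sensitive data (hypothetical policy)
APPROVED_FOR_SENSITIVE = {"encrypted_crm_store"}

def handoff(payload: dict, destination: str) -> dict:
    """Return the payload that may be passed to the next agent, redacting as needed."""
    if destination in APPROVED_FOR_SENSITIVE:
        return payload
    # Destination is not approved: strip sensitive fields instead of propagating them.
    return {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}

summary = {"customer": "Jane Doe", "account_number": "9912-4431", "churn_risk": "high"}
print(handoff(summary, "retention_scratch_bucket"))
# -> {'customer': 'Jane Doe', 'churn_risk': 'high'}
```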

Third-party integration: Supercharging supply-chain risks

Agents can also potentially integrate and share data with third-party partners’ applications via APIs, presenting yet another challenge for CISOs, as integration with disparate services and vendors can create increased opportunity for exploitation or vulnerability.

Agentic AI relies heavily on APIs and external integrations, Riboldi says. “As an agent gains access to more systems, its behavior becomes increasingly complex and unpredictable,” he says. “This scenario introduces supply chain risks, as a vulnerability in any third-party service could be exploited or inadvertently triggered through agentic interactions across different platforms.”

Many early-stage agents rely on brittle or undocumented APIs or browser automation, Mayham says. “We’ve seen cases where agents leak tokens via poorly scoped integrations, or exfiltrate data through unexpected plugin chains. The more fragmented the vendor stack, the bigger the surface area for something like this to happen,” he says. “The AI coding tools are notorious for this.”
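The usual counter to poorly scoped integrations is to hand agents short-lived credentials scoped to a single tool and action rather than a long-lived master API key. A minimal sketch of that pattern; the issuer, scope names, and expiry are assumptions for illustration, not any particular vendor’s API:

```python
# Minimal sketch: issue short-lived, narrowly scoped credentials per agent tool call
# instead of a long-lived master API key. Issuer and scope names are hypothetical.

import secrets
import time

TOKEN_TTL_SECONDS = 300  # five minutes; an expired token is useless if leaked

def issue_token(agent_id: str, scope: str) -> dict:
    """Mint a token that works for exactly one scope and expires quickly."""
    return {
        "token": secrets.token_urlsafe(32),
        "agent_id": agent_id,
        "scope": scope,                      # e.g. "hr:read_onboarding_tasks"
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }

def validate(token: dict, required_scope: str) -> bool:
    return token["scope"] == required_scope and time.time() < token["expires_at"]

tok = issue_token("onboarding-agent", "hr:read_onboarding_tasks")
print(validate(tok, "hr:read_onboarding_tasks"))   # True
print(validate(tok, "hr:update_payroll"))          # False: scope was never granted
```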

“Each integration point expands the attack surface and may introduce supply-chain vulnerabilities,” Joyce says. “For example, an AI agent integrates with a third-party HR platform to automate onboarding. The vendor’s API has a known vulnerability, which an attacker exploits to gain lateral access to internal HR systems.”

Many agentic tools rely on open-source libraries and orchestration frameworks, which might harbor vulnerabilities unknown to security teams, Joyce adds.

Multi-stage attacks: Blurring the line between error and exploitation

There is a potential for agentic systems to conduct multi-stage attacks and find new ways to access restricted data systems while evading detection by security tools.

“As agentic systems become more sophisticated, they may inadvertently develop or learn multi-step behaviors that mimic multi-stage attacks,” Riboldi says. “Worse, they might unintentionally discover ways to bypass traditional detection methods — not because they are malicious, but because their goal-oriented behavior rewards evasion.”

This blurs the line between error and exploitation, Riboldi says, and makes it harder for security teams to tell whether an incident was malicious, emergent behavior, or both.

This type of risk “is less theoretical than it sounds,” Mayham says. “In lab tests, we’ve seen agents chain tools together in unexpected ways, and not really maliciously but rather creatively. Now imagine that same reasoning ability being exploited to probe systems, test endpoints, and avoid pattern-based detection tools.”

Because agentic AI can learn from feedback, it might alter its behavior to avoid triggering detection systems — intentionally or unintentionally, Joyce says. “This presents a serious challenge for traditional rule-based detection and response tools,” he says. An agent could determine that certain actions trigger alerts from an endpoint detection platform, and adjust its method to stay under detection thresholds, similar to how malware adapts to antivirus scans.
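Catching that kind of adaptation generally means profiling the agent’s own behavior over time rather than matching fixed signatures, for instance flagging an agent whose activity persistently sits just below a known alerting threshold. A minimal sketch under that assumption; the threshold, margin, and window are illustrative, not a recommended tuning:

```python
# Minimal sketch: flag an agent whose activity hugs just below a known alert
# threshold, a pattern fixed rule-based detection tends to miss.
# Threshold, margin, and window size are illustrative assumptions.

ALERT_THRESHOLD = 100        # actions per hour that would trigger the detection rule
HUGGING_MARGIN = 0.10        # within 10% below the threshold counts as "hugging"
MIN_HOURS_TO_FLAG = 6        # sustained behavior, not a one-off spike

def hugs_threshold(hourly_action_counts: list[int]) -> bool:
    """True if the agent persistently operates just under the alert threshold."""
    lower = ALERT_THRESHOLD * (1 - HUGGING_MARGIN)
    hugging_hours = sum(1 for c in hourly_action_counts if lower <= c < ALERT_THRESHOLD)
    return hugging_hours >= MIN_HOURS_TO_FLAG

history = [42, 91, 95, 97, 93, 96, 94, 98]   # actions per hour for one agent
if hugs_threshold(history):
    print("Investigate: agent activity is consistently just under the alert threshold")
```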

A new paradigm requires new defense models

Agentic AI represents a powerful new model, Joyce says, but also a radically different cybersecurity challenge. “Its autonomy, adaptability, and interconnectivity make it both a productivity multiplier and a potential attack vector,” he says. “For CISOs, traditional security models are no longer sufficient.”

According to Joyce, a robust agentic AI defense strategy must include the following fundamentals:

Real-time observability and telemetry

Tightly scoped governance policies

Secure-by-design development practices

Cross-functional coordination between security, IT, data management, and compliance teams

“By adopting a proactive, layered security approach and embedding governance from the start, organizations can safely harness the promise of agentic AI while minimizing the risks it brings,” he says.
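To make the first of those fundamentals, real-time observability and telemetry, concrete: one common pattern is to wrap every tool an agent can invoke so that each call is logged with the agent’s identity, arguments, and outcome. A minimal sketch; the decorator, log fields, and the send_email tool are hypothetical, not a specific product’s schema:

```python
# Minimal sketch: log every agent tool invocation (who, what, with which arguments,
# and the outcome) so autonomous actions leave an audit trail. Field names are
# illustrative assumptions.

import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent_telemetry")

def audited(agent_id: str):
    """Decorator that records each tool call made on behalf of an agent."""
    def wrap(tool_fn):
        @functools.wraps(tool_fn)
        def inner(*args, **kwargs):
            record = {"agent": agent_id, "tool": tool_fn.__name__,
                      "args": args, "kwargs": kwargs, "ts": time.time()}
            try:
                result = tool_fn(*args, **kwargs)
                record["status"] = "ok"
                return result
            except Exception as exc:
                record["status"] = f"error: {exc}"
                raise
            finally:
                log.info(json.dumps(record, default=str))
        return inner
    return wrap

@audited(agent_id="retention-agent")
def send_email(to: str, subject: str) -> str:   # hypothetical agent tool
    return f"queued email to {to}: {subject}"

send_email("customer@example.com", "We'd love your feedback")
```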
