Agentic AI is both boon and bane for security pros

Cybersecurity stands at a crossroads with agentic AI. Never have we had such a powerful tool, one that can create reams of code in the blink of an eye, find and defuse threats, and be deployed both decisively and defensively. This has proved to be a huge force multiplier and productivity boon.

But while powerful, agentic AI isn’t dependable, and that is the conundrum. The code it creates can contain subtle flaws, and the same technology can ultimately do more harm than good by sharpening phishing lures and building new forms of malware. Gartner predicts AI agents will cut the time it takes to exploit accounts in half by 2027. Another report found that 40% of enterprises surveyed experienced an AI-related security breach in the past year. The Activision breach of 2022, for example, began with a series of AI-enhanced SMS phishing messages. And Kela’s 2025 AI threat report documented a 200% increase in malicious AI tools over 2024.

Malware is getting more complex, and its creators are getting better at hiding their craft so that they can live longer inside corporate networks and do more targeted damage. Adversaries are moving away from “spray and pray,” blanketing the globe with malware, toward “target and stay,” where they are more selective and parsimonious with their attacks. This change in focus is aided and abetted by agentic AI. Agents can sift through a target to find a weak endpoint that can be compromised by malware, use that endpoint to steal data or launch a ransomware attack, or surface information that could seed a social engineering campaign against an executive. In the past these operations took time, skill, and manual effort, all of which agentic AI shortens.

These and other data points show the dark underbelly where the agentic boon has turned into a bane and created more work for security defenders. “For almost all situations, agentic AI technology requires high levels of permissions, rights, and privileges in order to operate. I recommend that security leaders should consider the privacy, security, ownership, and risk any agentic AI deployment may have on your infrastructure,” said Morey Haber, chief security advisor at BeyondTrust.

What is agentic AI?

Generative AI agents are described by analyst Jeremiah Owyang as “autonomous software systems that can perceive their environment, make decisions, and take actions to achieve a specific goal, often with the ability to learn and adapt over time.” Agentic AI takes this a step further by coordinating groups of agents autonomously through a series of customized integrations to databases, models, and other software. These connections let the agents adapt dynamically to their circumstances, gain more contextual awareness, and coordinate actions among multiple agents. Google’s threat intelligence team offers numerous specific examples of current AI-fed abuses in a recent report.
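To make the definition concrete, here is a minimal Python sketch of that perceive-decide-act loop. Every name in it (SecurityAgent, the canned decide() heuristic) is hypothetical; a real agent would route the decision step through an LLM and wire perception into live data sources.

```python
# A minimal, illustrative perceive-decide-act loop. All names here
# (SecurityAgent, the canned decide() logic) are hypothetical stand-ins
# for whatever model and data sources a real deployment would use.

from dataclasses import dataclass, field

@dataclass
class SecurityAgent:
    goal: str
    memory: list = field(default_factory=list)  # lets the agent adapt over time

    def perceive(self, environment: dict) -> dict:
        # Gather context: alerts, asset inventory, prior actions.
        return {"alerts": environment.get("alerts", []), "history": self.memory}

    def decide(self, observation: dict) -> str:
        # A real agent would call an LLM with the goal and observation;
        # here we pick a canned action purely for illustration.
        return "investigate" if observation["alerts"] else "idle"

    def act(self, action: str) -> None:
        self.memory.append(action)  # record the decision for later context
        print(f"agent action: {action}")

agent = SecurityAgent(goal="triage suspicious logins")
obs = agent.perceive({"alerts": ["failed-login-burst"]})
agent.act(agent.decide(obs))
```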

But dual-use security tools are nothing new. When network packet analyzers were first introduced, they revealed intrusions but were also used to find vulnerable servers. Firewalls and VPNs can segregate and isolate traffic but can also be leveraged to give hackers access and lateral network movement. Backdoors can be built for both good and evil purposes. Never before, though, have these tools been so superlatively good and bad at the same time. In the rush to develop agentic AI, the potential for future misery was created along with it.

Recent agentic security signposts

Recently, we have seen numerous examples of how quickly building your own autonomous AI agents has taken root. Last month Microsoft demonstrated six new AI agents that work with its Copilot software, talking directly to its various security tools to identify vulnerabilities and flag identity and asset compromises. This month Simbian is hosting an AI-based capture-the-flag contest in which the operating environment is an AI-fueled SOC where agents have already processed a series of alerts; the human participants must figure out which alerts are real. A similar contest was first held at the 2023 DEFCON conference. And in another sobering example, the company ZeroEyes has produced agentic tools that scan thousands of security CCTV images per second to find firearms in support of law enforcement.

Chris Betz, one of AWS’ CISOs, spoke to CSO about how the company has developed AI agents that have saved countless hours of manual labor, such as updating tens of thousands of legacy Java applications to the latest versions. “We found that 79% of the agent-produced code didn’t require any changes, and most of the remaining issues took less than a few hours to fix.” AWS has also used AI agents to port .NET code to Linux and convert mainframe and VMware apps. “We got a four-time performance improvement of our workloads too,” Betz said.

Tools and tips for defenders

There are several tools and strategies that security professionals can use to combat agentic threats and use them for good rather than evil purposes.

Earlier this year, OWASP posted its comprehensive report on agentic AI threats to provide a practical, actionable reference guide on how to identify and mitigate them. It describes a reference agentic architecture and delineates a variety of agentic patterns, such as agents that reflectively critique their own outputs or that carry specific tasks and objectives. The report also describes the threat modeling approach employed by the Cloud Security Alliance’s Maestro methodology and framework to bring more clarity and understanding to agentic operations.

The OWASP authors made a salient point: “Both white hat and black hat hackers typically learn by doing, and our app-centric world offers ample opportunities for them to hone their skills,” and the rise in agentic attacks only widens those opportunities. Already, “attack rates on apps have reached unprecedented levels, with 82.7% of apps monitored by Digital.ai experiencing attacks in January 2025.”

Another good starting point for understanding the differences among various agents is this “blueprint for AI agents” by Dylan Williams, a security analyst with Appian. He shows how agents can work at various points across the security spectrum, including alert handling and threat hunting, and reviews a variety of common agent construction frameworks.

Other guidelines come from Helen Oakley of the AI Integrity and Safe Use Foundation, including:

Strong data governance is vital, with robust access controls and high-quality, unbiased datasets.

Decision logs should be incorporated to ensure transparency and accountability (a sketch of one follows this list).

Encrypted communication protocols between agents are needed to prevent interception or manipulation.
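As a concrete illustration of the decision-log guideline, here is a minimal Python sketch. The schema and the log_decision() helper are hypothetical; a real deployment would write to tamper-evident, centrally managed storage rather than a local file.

```python
# Illustrative decision log for agent transparency and accountability.
# The schema and log_decision() helper are hypothetical, not from any vendor.

import json
import hashlib
from datetime import datetime, timezone

def log_decision(agent_id: str, inputs: dict, action: str, rationale: str) -> dict:
    """Append-only record of what the agent saw, what it did, and why."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "inputs": inputs,
        "action": action,
        "rationale": rationale,
    }
    # Hash the entry so later tampering is detectable during audits.
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open("agent_decisions.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

log_decision("triage-agent-01",
             {"alert": "failed-login-burst", "asset": "vpn-gw-2"},
             "escalate_to_human",
             "login velocity exceeded baseline by 40x")
```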

AWS’ Betz shared some lessons learned from the company’s agentic experience, including:

Use authentication and authorization to isolate and separate the foundational model operations from the agents.

Agents should treat model output as untrusted code and perform standard checks such as syntax validation and rule enforcement.

All AI-generated code should initially operate in a sandbox to make sure it is working properly (see the sketch after this list).

Understand how the agent generates its code: observability matters.

Test both with automated and manual methods, including doing red team exercises.
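To ground those points, here is a minimal Python sketch of a gate for AI-generated code. The checks shown (a syntax parse, a crude deny-list scan, and a timeout-limited subprocess standing in for a real sandbox) are assumptions about what such a pipeline might contain, not AWS’ actual implementation.

```python
# Illustrative gate for AI-generated code, following the "treat output as
# untrusted" advice above. The deny-list and subprocess sandbox are minimal
# assumptions; production systems would use containers or dedicated runners.

import ast
import subprocess
import sys
import tempfile

BANNED_CALLS = {"eval", "exec", "compile"}  # simplistic deny-list for illustration

def validate_generated_code(source: str) -> bool:
    try:
        tree = ast.parse(source)  # syntax check: reject code that won't parse
    except SyntaxError:
        return False
    # Rule validation: walk the AST for obviously dangerous calls.
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                return False
    return True

def run_sandboxed(source: str, timeout: int = 5) -> subprocess.CompletedProcess:
    # A timeout-limited, isolated-mode subprocess stands in for a real sandbox.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    return subprocess.run([sys.executable, "-I", path],
                          capture_output=True, text=True, timeout=timeout)

candidate = 'print("hello from agent-generated code")'
if validate_generated_code(candidate):
    print(run_sandboxed(candidate).stdout)
```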

One place to consider implementing agentic AI is in your SOC. Given that the average SOC receives hundreds if not thousands of alerts daily, agents can be useful for automating threat investigations, playbook creation, and remediation, and for filtering out unimportant threats, as the sketch below illustrates. Several security vendors offer these tools, including Dropzone, D3 Security, Radiant Security, Securiti, and Torq.
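Here is a minimal sketch of what agentic alert triage could look like, assuming a hypothetical llm_classify() call in place of a real model; the commercial tools named above expose far richer pipelines than this.

```python
# Minimal sketch of agentic alert triage in a SOC. llm_classify() is a
# hypothetical placeholder for an LLM call; here a crude severity heuristic.

def llm_classify(alert: dict) -> str:
    # A real agent would send the alert, with context, to an LLM for scoring.
    return "investigate" if alert["severity"] >= 7 else "suppress"

def triage(alerts: list[dict]) -> list[dict]:
    """Filter out low-value alerts so analysts see only what needs attention."""
    return [a for a in alerts if llm_classify(a) == "investigate"]

queue = [
    {"id": 1, "severity": 3, "summary": "port scan from known scanner"},
    {"id": 2, "severity": 9, "summary": "credential stuffing on admin portal"},
]
for alert in triage(queue):
    print(f"escalate #{alert['id']}: {alert['summary']}")
```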

Questions to ask when considering agentic AI

Here are a few questions prospective agentic AI buyers should consider in their evaluation of this new technology:

What are the agent’s underlying built-in reasoning capabilities, and how do they work?

Do you need agentic processing of non-textual inputs, such as images, video and sounds?

Does your agent make use of multiple LLMs or development frameworks, and how do these interact with each other?

What authentication is used to verify users, tools or services, and how solid is this?

Can the agent process sensitive information or personally identifiable information?

AI strategist and book author Kate O’Neill told CSO, “Security still comes down to end user behavior, how you articulate your policies, and how you understand both how AI agents function and the risks involved, and what productivity gains you can realize.”
