The hottest topic at this year’s Black Hat and DEF CON conferences was the meteoric emergence of artificial intelligence tools for both cyber adversaries and defenders, particularly the use of agentic AI to strengthen cybersecurity programs.
Although cyber defenders have relied on machine learning tools to automate tasks and find bugs for nearly 20 years, a new wave of AI systems and agentic AI tools powered by large language models (LLMs) has only recently burst onto the scene.
“It’s been described as a Cambrian explosion,” Jimmy Mesta, founder and CTO at RAD Security, tells CSO. “It’s not just an evolution. It’s a spawning of a new way we do work and even live in a lot of ways, beyond security. There’s never been anything like it.”
Experts say that while AI agents carry security risks, sometimes down to the semiconductor level, they also offer opportunities to automate tedious tasks, freeing scarce security professionals to tackle bigger problems in a force-multiplier effect. Even so, they warn that CISOs should proceed with caution and protect their organizations and data before allowing AI agents to roam autonomously through their networks.
What are AI agents?
Although artificial intelligence is now broadly familiar across society thanks to popular chatbots such as ChatGPT, agentic AI still lacks a commonly understood definition. IBM defines AI agents generically as “a system that autonomously performs tasks by designing workflows with available tools.”
But on a practical level, the definition of agentic AI is harder to pin down — and remains in flux. “What’s agentic AI?” Mesta asks. “Is it different than an LLM? Is it a chat interface? And I think the answer is, it’s not as definitive as maybe we would like because it does seem like everyone has a different definition.”
Most experts agree, however, that AI agents are self-contained code modules that can direct actions independently. Andres Riancho, cybersecurity researcher at Wiz, tells CSO, “The basic concept is that you are going to have an LLM that can decide to perform a task, that is then going to be executed through most likely an MCP,” or Model Context Protocol server, which acts as a bridge between AI models and various external tools and services.
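To make the pattern Riancho describes concrete, here is a minimal sketch of an MCP server exposing a single tool that an LLM-driven agent can choose to invoke. It assumes the official `mcp` Python SDK's FastMCP interface; the `lookup_cve` tool and its canned data are hypothetical stand-ins for whatever external service an agent might actually call.

```python
# Minimal sketch of an MCP server exposing one tool to an LLM-driven agent.
# Assumes the official `mcp` Python SDK (pip install mcp); the tool itself
# (lookup_cve) and its placeholder data are hypothetical illustrations.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("vuln-lookup")  # server name shown to connecting clients

@mcp.tool()
def lookup_cve(cve_id: str) -> str:
    """Return a short summary for a CVE identifier."""
    # A real server would query a vulnerability database; a canned table
    # keeps the sketch self-contained.
    known = {"CVE-2024-3094": "xz/liblzma backdoor (supply chain)"}
    return known.get(cve_id, f"No summary on file for {cve_id}")

if __name__ == "__main__":
    # The LLM never runs this code directly: a host application connects
    # over the MCP transport (stdio here), and the model decides when to
    # invoke the tool.
    mcp.run(transport="stdio")
```

The division of labor is the point: the model supplies the decision to act, while the MCP server supplies the bridge to the external tool, which is exactly why experts treat the server as a trust boundary.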
Ben Seri, co-founder and CTO of Zafran Security, draws a parallel between the rise of AI agents and the rise of generative AI itself. “These are the tools that would enable this LLM to act like an analyst, like a mediator, like something of that nature,” he tells CSO. “It’s not that different in a way from generative AI where it started, where it’s a machine, you can give it a question, and it can give you an answer, but the difference is now it’s a process. It’s when you are taking an AI and LLM and you’re giving it agency or ability to perform some actions on its own.”
Trust, transparency, and moving slowly are crucial
Like all technologies, and perhaps more dramatically than most, agentic AI carries both risks and benefits. One obvious risk is that AI agents, like the LLMs underneath them, will hallucinate or make errors that could cause problems.
“If you want to remove or give agency to a platform tool to make decisions on your behalf, you have to gain a lot of trust in the system to make sure that it is acting in your best interest,” Seri says. “It can hallucinate, and you have to be vigilant in maintaining a chain of evidence between a conclusion that the system gave you and where it came from.”
Alongside knowledge of the supply chain, transparency is crucial when using agentic AI technologies. “We emphasize that transparency is a big part of this,” Ian Riopel, CEO and co-founder of Root.io, tells CSO. “Everything that we publish or that gets shipped to our customers, they can go in and see the source code. They need to be able to see what’s changed and understand it. Security through obscurity is not a great approach.”
Another risk is that in the frenzied rush to incorporate AI agents, organizations might overlook fundamental security concerns.
“It’s new technology and people are moving fast to ship it and to innovate and to make new things,” Hillai Ben-Sasson, cloud security researcher at Wiz, says. “Everyone’s creating MCP servers for their services to have AI interact with them. But an MCP at the end of the day is the same thing as an API. [Don’t make] all the same mistakes that people made when they started creating APIs ten years ago. All these authentication problems and tokens, everything that’s just API security.”
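Ben-Sasson’s point maps directly onto familiar API hygiene. Below is a minimal sketch of one such control, using only the Python standard library: validating a bearer token with a constant-time comparison before an MCP-style endpoint does any work. The environment-variable token scheme and header handling are generic illustrations, not part of the MCP specification or any particular vendor’s implementation.

```python
# Sketch: treat an MCP endpoint like any other API and authenticate it.
# Standard library only; the env-var token scheme is an illustrative
# assumption, not part of the MCP specification.
import hmac
import os

EXPECTED_TOKEN = os.environ.get("MCP_SERVER_TOKEN", "")

def is_authorized(authorization_header: str | None) -> bool:
    """Check a 'Bearer <token>' header with a constant-time comparison."""
    if not EXPECTED_TOKEN or not authorization_header:
        return False
    scheme, _, token = authorization_header.partition(" ")
    if scheme.lower() != "bearer":
        return False
    # hmac.compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(token, EXPECTED_TOKEN)

# A request handler would call
# is_authorized(request.headers.get("Authorization"))
# and reject the request before any tool logic runs.
```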
Agentic AI can be a game-changer
Despite what many consider to be hype surrounding the advent of AI, experts say that AI agents, implemented deliberately and with due diligence, can be game-changing for cybersecurity.
AI agents are “the future,” Wiz’s Ben-Sasson says. “However, given that the current stage of AI development is still immature, AI agents might, like a junior engineer, make a lot of mistakes. That’s why we have different permission sets. That’s why we have guardrails and so on.”
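One common shape for the guardrails Ben-Sasson mentions is an explicit allowlist that an orchestrator checks before executing anything the agent proposes. The sketch below is a generic illustration of that default-deny pattern, not any vendor’s product; the action names and policy table are hypothetical.

```python
# Sketch: a least-privilege guardrail for agent-proposed actions.
# The policy table and action names are hypothetical illustrations.
ALLOWED_ACTIONS = {
    "read_logs",        # safe, read-only
    "open_ticket",      # creates work for a human reviewer
}
REQUIRES_HUMAN_APPROVAL = {
    "quarantine_host",  # disruptive: a person must sign off first
}

def execute(action: str, approved_by_human: bool = False) -> str:
    """Run an agent-proposed action only if policy allows it."""
    if action in ALLOWED_ACTIONS:
        return f"executing {action}"
    if action in REQUIRES_HUMAN_APPROVAL and approved_by_human:
        return f"executing {action} (human approved)"
    # Default-deny: anything unlisted is refused and logged for review.
    return f"denied {action}"

print(execute("read_logs"))         # executing read_logs
print(execute("quarantine_host"))   # denied quarantine_host
```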
The real benefit of AI agents is that they can take on the boring but necessary chores of cybersecurity, freeing talent for more complex work, accelerating security programs, and multiplying the effective workforce.
“We did a bake-off of gen three of some of our agents against one of our best security researchers to create a backported patch for a critical vulnerability on a very popular piece of open-source software,” Root.io’s Riopel says. “And that researcher took eight days to create a patch that otherwise wasn’t available. It required modifying 17 different snippets of code across three different software commits. The AI agents did it in under 15 minutes. When you think about that, that’s not 10x multiplier, it’s 1,000x.”
Force multiplication means skill set shifts, not job losses
Despite the potential for cutting out tasks that many security analysts perform today, agentic AI will likely not reduce the size of the current cybersecurity workforce. “No one’s getting fired in lieu of agents,” Riopel says.
“I think we are going through a skill set shift, and I wouldn’t call it an all-out replacement,” RAD Security’s Mesta says. “What AI is going to do is impact the kind of lower-level paper shuffling style jobs where I had a CSV report, I’m going to put it in Excel, and I’m going to create a ticket.”
But, he says, “it will unlock extreme productivity for security teams for those who know how to use it, which is, I think, the big asterisk. If you’re anti-AI and that’s not a skill you think should be in your toolbox, it’s going to be challenging going forward to maintain the same level of job seniority you have now.”
Zafran Security’s Seri thinks it’s wrong to say that the advent of AI agents means we will now need fewer cybersecurity experts. “We will need more of them,” he says. “There is an opportunity with these tools to automate and to make your life easier, but it’s not to replace the expertise that people accumulate over time.”
How CISOs should proceed in deploying AI agents
Experts agree that the deployment of AI agents inside organizations is a done deal and will arrive faster than any previous technology shift, including the adoption of cloud computing. “This train has not only left the station; it’s a bullet train,” Mesta says. “It’s like the fastest train ever made.”
CISOs need to immediately strap in and grapple with the implications of a technology that they do not always fully control, if for no other reason than their team members will likely turn to AI platforms to develop their security solutions. “Saying no doesn’t work. You have to say yes with guardrails,” says Mesta.
At this still nascent stage of agentic AI, CISOs should ask questions, Riopel says. But he stresses that the main “question you should be asking is: How can I force multiply the output or the effectiveness of my team in a very short period of time? And by a short period of time, it’s not months; it should be days. That is the type of return that our customers, even in enterprise-type environments, are seeing.”
Not everyone agrees that pursuing compressed timeframes is the right strategy. “In many cases, from the CISO perspective, the takeaway here is that the agentic AI services that they are using are still immature,” Wiz’s Riancho says. “It’s still an immature industry. We need years of security improvements to make sure that everything is more stable and secure for companies and end users.”
But Riancho also thinks CISOs should be asking a lot of questions now. “I would ask difficult questions. So, before actually connecting an agent to my endpoint devices, to my infrastructure, to my SOC, to anything, ask the difficult question: Which actions are going to be performed by these agents?”
One critical question that CISOs should be asking is what happens to their organizations’ information once it has been fed into any given vendor’s agentic AI product.
“I don’t want my data to go to other vendors like OpenAI or Anthropic or anybody else that is not the security vendor,” Zafran Security’s Seri says. “This is fundamental: Make sure that the data that you are sharing is not driving around the world and seeing the sights.”