{"id":5109,"date":"2025-09-29T07:00:00","date_gmt":"2025-09-29T07:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=5109"},"modified":"2025-09-29T07:00:00","modified_gmt":"2025-09-29T07:00:00","slug":"agentic-ai-in-it-security-where-expectations-meet-reality","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=5109","title":{"rendered":"Agentic AI in IT security: Where expectations meet reality"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p><a href=\"https:\/\/www.csoonline.com\/article\/3574697\/beyond-chatgpt-the-rise-of-agentic-ai-and-its-implications-for-security.html?\">Agentic AI<\/a> has quickly shifted from lab demos to real-world <a href=\"https:\/\/www.csoonline.com\/article\/3840447\/security-operations-centers-are-fundamental-to-cybersecurity-heres-how-to-build-one.html\">security operations centers (SOC)<\/a> deployments. Unlike traditional automation scripts, autonomous software agents are designed to act on signals and execute security workflows intelligently, correlating logs, enriching alerts, and even take first-line containment actions.<\/p>\n<p>For some security leaders, the value of agentic AI in the SOC is obvious: freeing analysts from endless triage and scaling response capacity in the face of overwhelming alert volume. 
For others, the risks of opaque decision-making, integration complexity, and spiraling costs loom large.<\/p>\n<h5 class=\"wp-block-heading\"><strong>[ Related: <\/strong><a href=\"https:\/\/www.computerworld.com\/article\/3843138\/agentic-ai-ongoing-coverage-of-its-impact-on-the-enterprise.html\"><strong>Agentic AI \u2013 Ongoing news and insights<\/strong><\/a><strong> ]<\/strong><\/h5>\n<p>To get a clear view of where the technology stands today, we spoke with security executives, product leaders, and researchers who are piloting, deploying, or advising on agentic AI for cybersecurity. Their perspectives highlight what agents do well \u2014 and where they stumble \u2014 as well as the organizational changes, pricing experiments, and governance models that will shape whether agentic AI becomes a staple of IT security or a short-lived trend.<\/p>\n<h2 class=\"wp-block-heading\">What agentic AI is (and isn\u2019t) good at<\/h2>\n<p><a href=\"https:\/\/www.computerworld.com\/article\/3617392\/what-are-ai-agents-and-why-are-they-now-so-pervasive.html\">Agentic AI<\/a> has carved out a niche performing tasks typically handled by tier-one <a href=\"https:\/\/www.csoonline.com\/article\/569239\/soc-analyst-job-description-salary-and-certification.html\">security analysts<\/a>. 
Instead of simply flagging behavior to be reviewed, agent-based systems \u201chandle first-level tasks, like triaging alerts, correlating signals across tools, and in some cases even taking steps to contain a threat, like isolating an endpoint, allowing analysts to focus on other strategic and more important tasks,\u201d says <a href=\"http:\/\/linkedin.com\/in\/garini\">Jonathan Garini<\/a>, CEO and enterprise AI strategist at fifthelement.ai.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/vinod-goje\/\">Vinod Goje<\/a>, a data-driven solutions and applied AI expert, notes that in a SOC environment, AI agents operate \u201cmuch like digital tier-one analysts, sifting through data, gathering contextual information, and even producing detailed reports on their activities.\u201d Goje points to practical uses of AI agents in malware examination, script deobfuscation, and tool coordination.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/itayglickcyber\/\">Itay Glick<\/a>, VP of products at OPSWAT, adds that agents \u201care good at the \u2018first 15 minutes\u2019: pulling context, checking threat intel, summarizing logs, and proposing actions for review.\u201d They also help with exposure management by prioritizing vulnerabilities and with hygiene tasks like spotting stale accounts.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/diptochakravarty\/\">Dipto Chakravarty<\/a>, chief product and technology officer at Black Duck, notes that AI agents reduce alert fatigue by clustering alert patterns and correlating them with threat intel feeds, while natural language processing (NLP)-driven tools summarize alerts at scale.<\/p>\n<p>A common theme among many who have used AI agents is that they can free human analysts from \u201cthe repeatable grind,\u201d as Garini says, so they can concentrate on higher-level exploration and threat hunting. 
Agent-enabled teams also see \u201cquicker response, more streamlined team structures, and greater resilience in handling the overwhelming number of alerts,\u201d according to Goje.<\/p>\n<p>Still, limits remain. Glick warns that without clean data or clear playbooks, agents can chase noise or invent steps. Chakravarty points to problems with false positives and overfitting, while <a href=\"https:\/\/www.linkedin.com\/in\/prashant-jagwani-2675959\/?\">Prashant Jagwani<\/a>, SVP and global head of cybersecurity services at Mphasis, notes that ambiguous signals or multilayered context can still confound even the best-trained agents. For now, most organizations deploy them to augment rather than replace human analysts.<\/p>\n<h2 class=\"wp-block-heading\">Integration approaches: Add-on vs. standalone<\/h2>\n<p>The first decision regarding AI agents is whether to layer them onto existing platforms or to implement standalone frameworks. The add-on model treats agents as extensions to <a href=\"https:\/\/www.csoonline.com\/article\/524286\/what-is-siem-security-information-and-event-management-explained.html\">security information and event management (SIEM)<\/a>, <a href=\"https:\/\/www.csoonline.com\/article\/571201\/soar-the-smart-response-to-rising-security-threats.html\">security orchestration, automation and response (SOAR)<\/a>, or other security tools, providing quick wins with minimal disruption. 
Standalone frameworks, by contrast, act as independent orchestration layers, offering more flexibility but also requiring heavier governance, integration, and change management.<\/p>\n<p>Fifthelement.ai\u2019s Garini says these systems \u201care only as good as their interfaces.\u201d Out-of-the-box add-ins tend to be most effective when built directly on top of SIEM or SOAR platforms, he says, while standalone frameworks \u201coften require a larger lift for orchestration and governance.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/amitweigman\/\">Amit Weigman<\/a>, cybersecurity and AI expert at Check Point, points to current industry practice, noting that \u201cMicrosoft\u2019s Security Copilot \u2026 is helping analysts auto-triage alerts and cut through the noise. CrowdStrike is <a href=\"https:\/\/www.csoonline.com\/article\/4057472\/crowdstrike-bets-big-on-agentic-ai-with-new-offerings-after-290m-onum-buy.html\">doing something similar<\/a>, and Google\u2019s got Gemini-powered agents that can actually investigate alerts end-to-end. That\u2019s where we\u2019re mostly at right now: bolt-ons and extensions.\u201d<\/p>\n<p>Weigman notes that one reason bolt-ons are popular is that replacing or deeply integrating a new SOC platform is a big undertaking: \u201cIt can take months of deployment, retraining, and process changes. And all the while, the team is still fighting off live threats.\u201d<\/p>\n<p><a href=\"https:\/\/mindgard.ai\/authors\/fergal-glynn\">Fergal Glynn<\/a>, chief marketing officer and AI security advocate at Mindgard, frames the choice as a tradeoff between speed and flexibility. \u201cAdd-ins may work well for quick adoption, but they are less dynamic,\u201d he says. 
\u201cStandalone systems give better control, but they may require more setup and maintenance.\u201d<\/p>\n<p>OPSWAT\u2019s Glick agrees, describing a \u201crule of thumb\u201d where add-ins fit if most of the data lives in existing SIEM\/SOAR pipelines, while a dedicated agent layer works better and \u201chelps cut swivel-chairing\u201d when you need to deal with data that\u2019s scattered across IT, OT, <a href=\"https:\/\/www.infoworld.com\/article\/2238873\/what-is-cloud-computing.html\">cloud<\/a>, and <a href=\"https:\/\/www.infoworld.com\/article\/2256637\/what-is-saas-software-as-a-service-defined.html\">software-as-a-service (SaaS)<\/a>.<\/p>\n<p>Mphasis\u2019s Jagwani notes that most enterprises start with add-ins because they can be layered onto existing investments and provide a controlled test environment. Standalone frameworks, he says, usually come later, when organizations are ready to centralize across hybrid or multicloud estates.<\/p>\n<p>\u201cOne lesson we have drawn from client engagements,\u201d Jagwani says, \u201cis that many SOCs underestimate integration complexity. It is not only about APIs connecting systems. It is also about aligning the agent\u2019s decision logic with existing playbooks and risk tolerances. Add-in approaches provide a gentler path to that alignment, while standalone orchestration is often a second-phase maturity step.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Governance and organizational change<\/h2>\n<p>Agentic AI adoption rarely happens overnight. 
As Check Point\u2019s Weigman puts it, \u201cMost security teams aren\u2019t swapping out their whole SOC for some shiny new AI system, and one can understand that: It\u2019s expensive, and it demands time and human effort, which at the end of the day could appear to be too disruptive and costly.\u201d<\/p>\n<p>Instead, leaders look for ways to incrementally layer new capabilities without jeopardizing ongoing operations, which makes pilots a common first step.<\/p>\n<p>\u201cMy first tip for organizations looking into agentic AI: Start with a smaller use case on a pilot basis, such as phishing response or credential abuse, before scaling up to broader detection and response,\u201d says fifthelement.ai\u2019s Garini. Targeting contained scenarios helps teams test value and reliability before making wider changes.<\/p>\n<p>Once agents are in place, governance must evolve. OPSWAT\u2019s Glick notes that teams don\u2019t throw out existing frameworks so much as adapt them: \u201cExisting change-control and segregation-of-duties rules get mapped into the agent flow \u2014 e.g., two-person sign-off for destructive actions, risk tiers that decide what\u2019s auto vs. ask vs. escalate, and sandboxes to test playbooks before rollout.\u201d He adds that agents are now <a href=\"https:\/\/www.csoonline.com\/article\/4029862\/how-ai-red-teams-find-hidden-flaws-before-attackers-do.html\">included in red team testing<\/a> through prompt injections and jailbreak attempts. \u201cThe ladder stays the same,\u201d he says. \u201cIt just gets made explicit in the agent\u2019s world.\u201d<\/p>\n<p>Mphasis\u2019s Jagwani sees a similar pattern. Governance and risk controls are extended through \u201chuman in the loop\u201d approvals rather than rewritten from scratch. 
Replacing regulatory frameworks, he argues, won\u2019t be practical until AI reaches a more advanced level of general intelligence.<\/p>\n<h2 class=\"wp-block-heading\">Trust, oversight, and human collaboration<\/h2>\n<p>The promise of agentic AI is autonomy, but that quality is also a real barrier to adoption: Many organizations are reluctant to let agents operate freely in production environments.<\/p>\n<p>\u201cAn agent designed to carry out a sequence of actions in response to a threat could inadvertently create new risks if misused or deployed inappropriately,\u201d says Goje. \u201cFor instance, there\u2019s potential for unregulated scripts or newly discovered vulnerabilities.\u201d As a result, most organizations are unwilling to permit fully autonomous operation without strong safeguards, he says.<\/p>\n<p>Check Point\u2019s Weigman frames the issue as one of transparency. \u201cAI still feels like a bit of a black box,\u201d he says. \u201cWith human analysts, mistakes aren\u2019t necessarily better, but managers know the error range and can put a dollar value on it. With AI, it\u2019s more like, \u2018We don\u2019t know what we don\u2019t know,\u2019 and that makes people understandably nervous.\u201d<\/p>\n<p>To overcome this challenge, experts emphasize building visibility and accountability into AI workflows. OPSWAT\u2019s Glick argues that \u201cyou need an audit trail for everything,\u201d from prompts and tool calls to outputs and approvals.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/kyle-kurdziolek-175923124\/\">Kyle Kurdziolek<\/a>, VP of security at BigID, adds that documentation is essential: \u201cAnything that\u2019s regulated needs to be documented, validated, and ultimately auditable. 
While it\u2019s important to see \u2018what\u2019 it did, we also have to construct the rationale as to \u2018why\u2019 it took specific actions.\u201d<\/p>\n<p>Mphasis\u2019s Jagwani emphasizes that regulators in financial services expect \u201cexplainability in a form that is auditable. This means outputs cannot simply be black-box recommendations. Teams are beginning to implement layered audit trails, where an agent\u2019s decision can be decomposed into inputs, confidence scores, and escalation logic.\u201d<\/p>\n<p>Even with more transparency, human collaboration remains critical. Goje suggests treating agents as \u201ccollaborative digital allies,\u201d and Weigman notes that most adopters are \u201ckeeping humans firmly in the loop for higher-risk actions, so the AI can recommend or triage, but the final call sits with an analyst.\u201d<\/p>\n<p>Weigman also says that deploying narrow, specialized agents can aid with visibility. \u201cInstead of one giant opaque AI brain, you\u2019ve got a collection of specialized agents, each with a narrow scope you can monitor and explain,\u201d he says.<\/p>\n<h3><strong>Training the next generation<\/strong><\/h3>\n<p>If AI agents take on the work of tier-one analysts, how do new SOC team members learn the ropes? Vinod Goje is optimistic.<br \/>\nTier-one analyst work has traditionally been the training ground of security careers. The paradox of agentic AI is that while it relieves humans of repetitive triage, it also risks eroding the very \u201cmuscle memory\u201d new analysts used to build by grinding through alerts.<br \/>\nBut much of the rote triage, such as filtering out obvious false positives, cutting through duplicate alerts, and escalating routine phishing cases, teaches analysts little more than patience. 
AI excels at handling these menial tasks, allowing human analysts to focus on more complex challenges.<br \/>\nThat transforms tier-one from grunt work into a guided training ground: Instead of drowning in noise, new analysts study curated, AI-documented cases and learn by interrogating the agent\u2019s rationale. So yes, if left unchecked, agentic AI could create a talent-pipeline gap. But used deliberately, it can actually accelerate skill development.<\/p>\n<h2 class=\"wp-block-heading\">Pricing, value, and program design<\/h2>\n<p>Agentic AI capabilities and governance are important, of course, but one of the biggest drivers of adoption in security comes down to economics. Security leaders want to know: How much money and time does this save us? The answer is not always straightforward.<\/p>\n<p>\u201cPricing remains a friction point,\u201d says fifthelement.ai\u2019s Garini. \u201cVendors are playing with usage-based models, but organizations are finding value when they tie spend to analyst hours saved rather than raw compute or API calls.\u201d<\/p>\n<p>Mindgard\u2019s Glynn notes the <a href=\"https:\/\/www.cio.com\/article\/4046457\/vendor-pricing-experiments-leave-cios-ai-costs-in-flux.html\">variability in AI pricing models<\/a> available today. \u201cA charge can be per subscription, per seat, or per alert. Other vendors may offer usage-based plans, too,\u201d he says. 
\u201cAdvanced agent systems are usually costly as they have wider impact and more opportunity for savings on analyst workloads.\u201d<\/p>\n<p>OPSWAT\u2019s Glick has seen teams experiment with \u201cper seat, per task, or hybrid\u201d models, but warns that \u201chidden expenses like storage, API fees, long prompts, and playbook upkeep add up fast.\u201d ROI, he argues, should ultimately show up in metrics like faster detection and response, more cases closed per analyst, and fewer junk alerts.<\/p>\n<p>Black Duck\u2019s Chakravarty says teams are \u201cspanning a full spectrum of experimentation,\u201d with usage-based and hybrid models still evolving. Organizations need to budget not only for the software, but also for the hybrid infrastructure costs of running large models both on-prem and in the cloud, he says.<\/p>\n<p>Mphasis\u2019s Jagwani cautions that simplistic pricing metrics often miss the point: \u201cHidden costs typically show up in areas like retraining models on domain-specific data or building pipelines for clean, structured telemetry.\u201d The best ROI, he says, comes when agents are seen as part of a long-term redesign of processes rather than just another plug-in.<\/p>\n<p>BigID\u2019s Kurdziolek believes there isn\u2019t one right way to measure ROI. \u201cEvery organization is different,\u201d he says. \u201cSome organizations look at it from an efficiency perspective: How many true positive or false positive events are we seeing from the agent? How many incidents have been raised by the agent? Some look at it from a resource perspective: How much time are we saving triaging alerts? 
How often are we double-checking the agent\u2019s output, and are we truly gaining time savings?\u201d<\/p>\n<p>As he sees it, the key question is simple: Are agents saving enough time in triage and investigation to let security teams build greater capacity to secure the enterprise?<\/p>\n<p>That question will likely shape the future of agentic AI in cybersecurity. The technology is maturing fast, but its staying power will depend on whether organizations see it as a sustainable way to reimagine how SOCs operate.<\/p>\n<h3><strong>Agentic AI brings risks along with promise<\/strong><\/h3>\n<p>While agentic AI offers new possibilities for automation, the technology also brings inherent risk factors, according to a July report from Gartner titled \u201cEmerging Tech: The Future of Agentic AI in Enterprise Applications.\u201d<\/p>\n<p><strong>Security and compliance: <\/strong> Adopting agentic AI without fully understanding it can lead to project failure and threaten security and compliance. Because AI agents can operate with autonomy, they must be \u201csecure by design\u201d \u2014 that is, they must be built with security in mind from the ground up. Without safeguards in place, agentic systems could take actions that violate legal and regulatory frameworks.<br \/>\n<strong>Integration complexity: <\/strong> Beyond connecting APIs, the challenge of integrating agentic AI into your workflow is aligning decision-making logic with a company\u2019s strategy and risk tolerances. This can be particularly complex due to a lack of standardization and interoperability protocols.<\/p>\n<p><strong>Trust and governance: <\/strong> The \u201cblack box\u201d nature of agentic AI creates major barriers. Without transparency and explainability, it is difficult to audit an agent\u2019s decisions. 
Gartner suggests implementing human-in-the-loop controls and audit trails to ensure that agents operate within safe and ethical boundaries and that humans remain accountable for high-stakes decisions.<br \/>\n<strong>Evolving skills: <\/strong> Enterprises risk creating a talent-pipeline gap for new employees who traditionally have learned by performing the low-value tasks now delegated to AI agents. Gartner suggests this can be mitigated if new employees are taught to govern and work with agents rather than performing repetitive tasks themselves.<\/p>\n<p>\u2014 Dan Muse<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Agentic AI has quickly shifted from lab demos to real-world security operations center (SOC) deployments. Unlike traditional automation scripts, autonomous software agents are designed to act on signals and execute security workflows intelligently, correlating logs, enriching alerts, and even taking first-line containment actions. 
For some security leaders, the value of agentic AI in the SOC [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":5090,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-5109","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5109"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5109"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5109\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/5090"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5109"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5109"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5109"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}