{"id":1525,"date":"2025-01-15T13:00:00","date_gmt":"2025-01-15T13:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=1525"},"modified":"2025-01-15T13:00:00","modified_gmt":"2025-01-15T13:00:00","slug":"ciscos-homegrown-ai-to-help-enterprises-navigate-ai-adoption","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=1525","title":{"rendered":"Cisco\u2019s homegrown AI to help enterprises navigate AI adoption"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>As the world rushes to integrate AI into all aspects of enterprise applications, there\u2019s a pressing need to secure data-absorbing AI systems from malicious interferences.<\/p>\n<p>To achieve that, Cisco has announced Cisco AI Defense, a solution designed to address the risks introduced by the development, deployment, and usage of AI.<\/p>\n<p>According to Tom Gillis, SVP and GM of Cisco Security, the rapid integration of AI into business workflows, which should warrant \u201cmulti-year refactoring\u201d of applications to include AI features, is progressing faster than security teams can keep up, creating numerous vulnerabilities for attackers to exploit.<\/p>\n<p>\u201cAs this transition unfolds, we observe a few key trends,\u201d Gillis said. \u201cThe adoption leads to the bifurcation of emerging toolsets, offering developers a vast array of rapidly evolving options. 
Consequently, development teams move swiftly, while security teams, tasked with establishing boundaries around the developers\u2019 work, struggle to keep up and often lose track of it.\u201d<\/p>\n<p>Among other things, Gillis pointed out, Cisco AI Defense will address this key issue of \u201cDiscovery\u201d by providing an inventory of all AI workloads, applications, models, data, and user access across distributed cloud environments.<\/p>\n<p>Cisco AI Defense will integrate with Cisco\u2019s existing network visibility infrastructure of firewalls, web proxies, and secure access gateways to scan network traffic and identify all existing AI workloads.<\/p>\n<h2 class=\"wp-block-heading\">Proprietary AI for model validation<\/h2>\n<p>The second problem the new offering aims to address is the need to shift security practices as AI-infused systems become mainstream.<\/p>\n<p>\u201cThe thing about AI is that it\u2019s just architecturally different,\u201d Gillis noted. \u201cIn a traditional application, you had three layers: the presentation layer (web layer), the application logic, and the data persistence layer. Data resided in the persistence layer, which, by definition, is persistent, while the middle layer did not retain any data.\u201d<\/p>\n<p>With AI, he added, a model is placed in the middle, and data is absorbed into this model. \u201cThe model retains and transforms the data, creating an entirely new layer in the stack that requires careful consideration and protection.\u201d<\/p>\n<p>To tackle this challenge, Cisco AI Defense will offer a new detection capability powered by its proprietary AI. It will perform \u201cmodel validation\u201d through exhaustive testing of model logic to identify any signs of compromise or poisoning.<\/p>\n<p>\u201cWe want to ensure that the data used for training is accurate and valid, with no malicious additions to the datasets,\u201d Gillis explained. 
\u201cAdditionally, we need to verify that the guardrails implemented in the model are functioning correctly and that the model is behaving as expected.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Protection at runtime<\/h2>\n<p>While Cisco AI Defense allows for defining guardrails for AI models, the proprietary technology will also enable security teams to implement these protections independently, without interfering with the developers\u2019 control over the models.<\/p>\n<p>\u201cWe dynamically calibrate and set guardrails for models before and during production,\u201d Gillis said. \u201cIn production, a monitoring system observes normal application behavior and detects abnormalities, such as prompt injection attacks, by flagging actions outside expected patterns.\u201d<\/p>\n<p>This runtime protection, Gillis emphasized, is independent of and transparent to the AI model, and lives entirely in the \u201cnetwork.\u201d<\/p>\n<p>Gillis noted that most competing AI safety tools primarily focus on monitoring data exchange and performing <a href=\"https:\/\/www.csoonline.com\/article\/569559\/what-is-dlp-how-data-loss-prevention-software-works-and-why-you-need-it.html\">data loss prevention<\/a> (DLP), with their discovery phase generally limited to the straightforward identification of existing AI elements.<\/p>\n<p>\u201cThe key difference with Cisco AI Defense lies in understanding the application,\u201d he said. \u201cUnlike other AI safety tools, we conduct model validation and have the capability to enforce protections, such as preventing prompt injection attacks at runtime. Our proprietary models uniquely track application behavior and monitor for any drift.\u201d<\/p>\n<p>Beyond prompt injection, the solution also protects against data and model poisoning attacks. 
It will be generally available by the end of February through the Cisco Security Cloud.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>As the world rushes to integrate AI into all aspects of enterprise applications, there\u2019s a pressing need to secure data-absorbing AI systems from malicious interferences. To achieve that, Cisco has announced Cisco AI Defense, a solution designed to address the risks introduced by the development, deployment, and usage of AI. According to Tom Gillis, SVP [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":1526,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-1525","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/1525"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1525"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/1525\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/1526"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1525"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1525"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1525"}],"curies":[{
"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}