{"id":3222,"date":"2025-05-19T08:01:00","date_gmt":"2025-05-19T08:01:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=3222"},"modified":"2025-05-19T08:01:00","modified_gmt":"2025-05-19T08:01:00","slug":"8-security-risks-overlooked-in-the-rush-to-implement-ai","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=3222","title":{"rendered":"8 security risks overlooked in the rush to implement AI"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>In their race to achieve productivity gains from generative AI, most organizations overlook the security implications of doing so, instead favoring hopes of game-changing innovations over sound security practices.<\/p>\n<p>According to a <a href=\"https:\/\/reports.weforum.org\/docs\/WEF_Global_Cybersecurity_Outlook_2025.pdf\">study from the World Economic Forum<\/a> conducted in collaboration with Accenture, 63% of enterprises fail to assess the security of AI tools before deployment, introducing a range of risks to their enterprise.<\/p>\n<p>That includes both off-the-shelf AI solutions and in-house implementations created in collaboration with software development teams that, according to <a href=\"https:\/\/www.tricentis.com\/blog\/quality-transformation-report-key-findings\">Tricentis\u2019 2025 Quality Transformation Report<\/a>, are overwhelmingly focused on improving delivery speed (45%) over enhancing software quality (13%) \u2014 even as a third (32%) of respondents to Tricentis\u2019 survey admit that poor quality software will likely result in more frequent security breaches or compliance failures.<\/p>\n<p>And more frequent those breaches and failures are. <a href=\"https:\/\/newsroom.cisco.com\/c\/r\/newsroom\/en\/us\/a\/y2025\/m05\/cybersecurity-readiness-index-2025.html\">Cisco\u2019s latest Cybersecurity Readiness Index<\/a>, published on May 7, found that 86% of organizations had experienced an AI-related security incident in the past year. Less than half (45%) believe their organization has the internal resources and expertise to conduct comprehensive AI security assessments.<\/p>\n<h2 class=\"wp-block-heading\">Most common overlooked AI security risks<\/h2>\n<p>The failure to adequately test AI systems before deployment exposes organizations to a range of vulnerabilities that differ significantly from traditional software risks, according to experts quizzed by CSO. Here are some of the most prevelant.<\/p>\n<h3 class=\"wp-block-heading\">Data exposure<\/h3>\n<p>AI systems often process large volumes of sensitive information, and without robust testing, organizations may overlook how easily this data can be leaked, either through unsecured storage, overly generous API responses, or poor access controls.<\/p>\n<p>\u201cMany AI systems ingest user data during inference or store context for session persistence,\u201d says <a href=\"https:\/\/mindgard.ai\/authors\/peter-garraghan\">Dr. Peter Garraghan<\/a>, chief executive and co-founder of AI security testing vendor Mindgard. \u201cIf data handling is not audited, there is a high risk of data leakage through model output, log exposure, or misuse of fine-tuned datasets. These risks are exacerbated by LLM [large language model] memory features or streaming output modes.\u201d<\/p>\n<h3 class=\"wp-block-heading\">Model-level vulnerabilities<\/h3>\n<p>These include <a href=\"https:\/\/www.csoonline.com\/article\/575497\/owasp-lists-10-most-critical-large-language-model-vulnerabilities.html\">prompt injection, jailbreaks, and adversarial prompt chaining<\/a>. Without rigorous testing, models can be manipulated to bypass output constraints, leak sensitive data, or perform unintended tasks.<\/p>\n<p>\u201cThese attacks often exploit flaws in the model\u2019s alignment mechanisms or its reliance on token-level reasoning,\u201d Garraghan, a lecturer at the UK\u2019s Lancaster University, explained.<\/p>\n<h3 class=\"wp-block-heading\">Model integrity and adversarial attacks<\/h3>\n<p>Without testing for adversarial manipulation or poisoned training data, it\u2019s easy for attackers to <a href=\"https:\/\/www.csoonline.com\/article\/2139630\/ai-system-poisoning-is-a-growing-threat-is-your-security-regime-ready.html\">influence how an AI model behaves<\/a>, especially if it\u2019s being used to support business decisions or automate sensitive tasks.<\/p>\n<p><a href=\"https:\/\/uk.linkedin.com\/in\/jano-bermudes-cyxcel\">Jano Bermudes<\/a>, COO of the global cyber consultancy CyXcel, says: \u201cAttackers can manipulate input data to deceive AI models, causing them to make incorrect decisions. This includes evasion attacks and data poisoning.\u201d<\/p>\n<h3 class=\"wp-block-heading\">Systemic integration risks<\/h3>\n<p>AI models are frequently deployed as part of larger application pipelines, such as through APIs, plugins, or <strong><a href=\"https:\/\/www.infoworld.com\/article\/2335814\/what-is-retrieval-augmented-generation-more-accurate-and-reliable-llms.html\">retrieval-augmented generation (RAG)<\/a><\/strong> architectures.<\/p>\n<p>\u201cInsufficient testing at this level can lead to insecure handling of model inputs and outputs, injection pathways through serialized data formats, and privilege escalation within the hosting environment,\u201d Mindgard\u2019s Garraghan says. \u201cThese integration points are frequently overlooked in conventional AppSec [application security] workflows.\u201d<\/p>\n<h3 class=\"wp-block-heading\">Access control failures<\/h3>\n<p>AI tools often plug into wider systems and, if misconfigured, can give users or attackers more access than intended. This might include exposed API keys, poor authentication, or insufficient logging that makes it hard to spot abuse.<\/p>\n<h3 class=\"wp-block-heading\">Runtime security failures<\/h3>\n<p>AI systems may exhibit emergent behaviors only during deployment, especially when operating under dynamic input conditions or interacting with other services.<\/p>\n<p>\u201cVulnerabilities such as logic corruption, context overflow, or output reflection often appear only during runtime and require operational red-teaming or live traffic simulation to detect,\u201d according to Garraghan.<\/p>\n<h3 class=\"wp-block-heading\">Compliance violations<\/h3>\n<p>Failing to ensure AI tools meet regulatory standards can lead to legal repercussions.<\/p>\n<p>For example, regulatory violations might occur due to unauthorized data processing by AI tools or outages from untested model behaviours under scale.<\/p>\n<h3 class=\"wp-block-heading\">Broader operational impacts<\/h3>\n<p>\u201cThese technical vulnerabilities, if left untested, do not exist in isolation,\u201d Mindgard\u2019s Garraghan says. \u201cThey manifest as broader organizational risks that span beyond the engineering domain. When viewed through the lens of operational impact, the consequences of insufficient AI security testing map directly to failures in safety, security, and business assurance.\u201d<\/p>\n<p><a href=\"https:\/\/www.isms.online\/author\/sam-peters\/\">Sam Peters<\/a>, chief product officer at compliance experts ISMS.online, sees widespread operational impacts from organziations\u2019 tendency to overlook proper AI security vetting.<\/p>\n<p>\u201cWhen AI systems are rushed into production, we see recurring vulnerabilities across three key areas: model integrity (including poisoning and evasion attacks), data privacy (such as training data leakage or mishandled sensitive data), and governance gaps (from lack of transparency to poor access control),\u201d he says.<\/p>\n<p>Peters adds: \u201cThese issues aren\u2019t hypothetical; they\u2019re already being exploited in the wild.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Test against delivery<\/h2>\n<p>The <a href=\"https:\/\/www.csoonline.com\/article\/3529615\/companies-skip-security-hardening-in-rush-to-adopt-ai.html\">rush to implement AI<\/a> puts <a href=\"https:\/\/www.csoonline.com\/article\/3801012\/gen-ai-strategies-put-cisos-in-a-stressful-bind.html\">CISOs in a stressful bind<\/a>, but James Lei, chief operating officer at application security testing firm Sparrow, advises CISOs to push back on the unchecked enthusiasm to introduce fundamental security practices into the deployment process.<\/p>\n<p>\u201cTo reduce these risks, organizations should be testing AI tools in the same way they would any high-risk software, running simulated attacks, checking for misuse scenarios, validating input and output flows, and ensuring that any data processed is appropriately protected,\u201d he says.<\/p>\n<p>To mitigate these risks, organizations should implement comprehensive testing strategies, such as:<\/p>\n<p><strong>Penetration testing:<\/strong> Simulating attacks to identify vulnerabilities<\/p>\n<p><strong>Bias and fairness audits:<\/strong> Ensuring AI decisions are equitable and non-discriminatory<\/p>\n<p><strong>Compliance checks:<\/strong> Verifying adherence to relevant regulations and standards<\/p>\n<p>By integrating security testing into the AI development lifecycle, organizations can harness the benefits of AI while safeguarding against potential threats.<\/p>\n<p>\u201cBefore deploying AI tools, organizations should be conducting threat modelling specific to AI systems, red-teaming for adversarial inputs, and robust testing for model drift and data leakage,\u201d ISMS.online\u2019s Peters says. \u201cAt the same time, they should integrate AI-specific controls into their risk management and compliance programs.\u201d<\/p>\n<p>Peters adds: \u201cThis is where the new <a href=\"https:\/\/www.iso.org\/standard\/81230.html\">ISO\/IEC 42001 standard<\/a> can really help. It provides a framework for governing AI responsibly, including guidance on risk assessment, data handling, security controls, and continuous monitoring.\u201d<\/p>\n<p>Other experts, while validating the need for security testing, argued that a different approach needs to be applied in testing the security of AI-based systems.<\/p>\n<p>\u201cUnlike regular software testing, you can\u2019t just look at the code of a neural network to see if it\u2019s secure,\u201d <a href=\"https:\/\/inti.io\/about\">Inti De Ceukelaire<\/a>, chief hacker officer at crowdsourced security provider Intigriti, tells CSO. \u201cEven if it\u2019s trained on clean, high-quality data, it can still behave in strange ways. That makes it hard to know when you\u2019ve tested enough.\u201d<\/p>\n<p>AI tools often offer a complex solution to a simple problem. Testers might focus only on what the tool is supposed to do and miss other things it can do. \u201cFor example, a translation tool could be tricked into opening a PDF with malicious code or accessing internal files and translating them for someone outside the company,\u201d De Ceukelaire explains.<\/p>\n<p>Organizations should consider implementing adversarial testing frameworks designed specifically for AI.<\/p>\n<p>\u201cThis includes static model analysis, dynamic prompt fuzzing, integration-layer attack simulation, and runtime behavioural monitoring,\u201d Mindgard\u2019s Garraghan says. \u201cThese practices should be embedded into the AI deployment lifecycle in the same way that DevSecOps practices are integrated into software CI\/CD pipelines.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>In their race to achieve productivity gains from generative AI, most organizations overlook the security implications of doing so, instead favoring hopes of game-changing innovations over sound security practices. According to a study from the World Economic Forum conducted in collaboration with Accenture, 63% of enterprises fail to assess the security of AI tools before [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":3223,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-3222","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/3222"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3222"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/3222\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/3223"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3222"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3222"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3222"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}