{"id":6841,"date":"2026-02-05T02:42:45","date_gmt":"2026-02-05T02:42:45","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=6841"},"modified":"2026-02-05T02:42:45","modified_gmt":"2026-02-05T02:42:45","slug":"1-5-million-ai-agents-are-at-risk-of-going-rogue","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=6841","title":{"rendered":"1.5 million AI agents are at risk of going rogue"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>A study released Wednesday by API management platform vendor Gravitee indicates that upwards of half of the three million agents currently in use by organizations in the US and UK \u201care ungoverned and at the risk of going rogue.\u201d<\/p>\n<p>The December 2025 <a href=\"https:\/\/www.einpresswire.com\/article\/889263114\/gravitee-warns-of-invisible-risk-nearly-half-of-ai-agents-run-without-oversight\" target=\"_blank\" rel=\"noopener\">survey of 750 IT executives and practitioners<\/a>, conducted by Opinion Matters, revealed that AI agents are being deployed faster than security teams can keep up. There are now over three million AI agents operating within corporations, said <a href=\"https:\/\/www.linkedin.com\/in\/rory-blundell-a7545832\/\" target=\"_blank\" rel=\"noopener\">Rory Blundell<\/a>, CEO of Gravitee, a workforce he described as larger than the entire global employee count at Walmart.<\/p>\n<p>The three million figure is an extrapolation of the survey results, using government estimates of 8,250 UK businesses and 77,000 US businesses with 250 or more employees. 
The mean number of AI agents deployed per business is 36.9, and when respondents were asked if their organization \u201cexperienced or suspected an AI agent-related security or data privacy incident in the past 12 months,\u201d 88% said that they had.<\/p>\n<p>The mean percentage of agents that are not actively monitored and secured, according to the findings, was 53%.<\/p>\n<p>Asked what prompted the study, Blundell wrote in an email, \u201cwe\u2019re all familiar with stories of AI agents going rogue: deleting codebases, leaking confidential information, inventing fake data. The working hypothesis that prompted this research was that, while agentic deployment is reaching an exciting stage, businesses have not yet caught up with agent governance. The research validates that.\u201d<\/p>\n<h2 class=\"wp-block-heading\">A global problem<\/h2>\n<p>Agents, he said, \u201ccan offer businesses a huge productivity gain, but we have to be realistic about the risks: without governance and oversight, they can easily start becoming liabilities, and a danger to consumers and businesses alike.\u201d<\/p>\n<p>In addition, said Blundell, despite respondents being only from the UK and US, \u201cthis is absolutely a global problem. Companies around the world are using AI agents, and across the board there is a gap between the level of deployment and the level of governance. We have a strong customer base in the EU, where we see the same problems.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/dbshipley\/\" target=\"_blank\" rel=\"noopener\">David Shipley<\/a>, head of Canadian-based security awareness training firm Beauceron Security, said, \u201cthe only thing that shocks me is that people think it\u2019s only 53% of agents that aren\u2019t monitored. 
It\u2019s higher.\u201d<\/p>\n<p>He likened the results from the Gravitee study to a \u201clesson about the Titanic that everyone in technology keeps ignoring.\u00a0The Titanic disaster didn\u2019t happen because they didn\u2019t know there would be icebergs on the trip. They knew it was peak iceberg season, they knew they were going too fast.\u201d\u00a0<\/p>\n<p>Shipley said that the ship\u2019s captain and his crew \u201cthought they\u2019d detect [an iceberg]; if they didn\u2019t, and hit one, that their technology controls would protect them to help them recover.\u201d They put their faith in the so-called watertight compartments that, it turned out, weren\u2019t watertight at the top, but, most importantly, they trusted the new wireless communications technology that they could use to call for help if they got in trouble. The equivalent today: \u201cWell, IT and security can fix it if we get in trouble with our agents.\u201d<\/p>\n<p>\u201cWrong then, super wrong now,\u201d he said.<\/p>\n<p>He said, \u201cwe know AI agents are inherently dangerous and unreliable. There\u2019s literally math proofs out there that show it. So, we know there are icebergs. Let me repeat this for those at the back of the room: 100% of AI agents have the potential to go rogue. If a vendor assures you it isn\u2019t possible and their core technology is an LLM, they\u2019re lying.\u00a0We know we\u2019re going too fast in adoption for the risks we know exist.\u201d<\/p>\n<p>Shipley added, \u201cnow, the funny part: imagine if the Titanic still made the choices it did, knowing the watertight compartments didn\u2019t work (aka monitoring is missing for 53% of AI agents), we know by the time IT and security roll on an AI agent risk, the damage is done (the ship\u2019s sinking too fast and radio isn\u2019t going to help because help will be too late). 
And we still made the choices we\u2019re making.\u201d<\/p>\n<h2 class=\"wp-block-heading\">The real issue is invisible AI, not rogue AI<\/h2>\n<p><a href=\"https:\/\/www.infotech.com\/profiles\/manish-jain\" target=\"_blank\" rel=\"noopener\">Manish Jain<\/a>, principal research director at Info-Tech Research Group, said that as the \u201cexponential\u201d speed of AI development continues, his firm, based on experiences with CIOs and CDOs, predicts that by 2028 there will be more AI agents globally than human employees. \u201cIt would be one of the biggest challenges for business and IT executives to govern them without curtailing the innovation that these AI agents bring with them,\u201d he said.<\/p>\n<p>Even today, he noted, \u201cwe see that most enterprise AI agents are running without oversight. Many organizations don\u2019t even know how many agents they have, where they\u2019re running, or what they can touch. If you don\u2019t know how many mules are in the barn, don\u2019t act surprised when one kicks the door down.\u201d<\/p>\n<p>Jain pointed out that AI agents are no different. \u201cUnaccounted agents often emerge through sanctioned, low-code tools and informal experimentation, bypassing traditional IT scrutiny until something breaks. You cannot govern what you can\u2019t see. So, we need to understand that the real issue isn\u2019t \u2018rogue AI\u2019, it\u2019s invisible AI.\u201d<\/p>\n<p>Info-Tech, he added, \u201cstrongly believes that governing AI models or pre-approving agents is no longer enough, because invisible, rogue agents will do <a href=\"https:\/\/en.wikipedia.org\/wiki\/Tandava\" target=\"_blank\" rel=\"noopener\">tandava<\/a> (the dance of destruction) at runtime. This is because, when it comes to governing these AI agents, the number is so huge that approval gates will not be sustainable without halting the innovation. 
Continuous oversight should be the priority for AI governance after setting initial guardrails as part of the AI strategy.\u201d<\/p>\n<p>Perspective, he said, also needs to change: \u201cAI agents are no longer helpful bots. They often operate with delegated yet broad credentials, persistent access, and undefined accountability. This can become a costly mistake as overprivileged agents are the new insider threat. We need to define tiered access for AI agents. While we can\u2019t avoid giving a few people keys to our house to speed up things, if you trust every stranger with your house keys, we wouldn\u2019t be able to blame the locksmith when things go missing.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>A study released Wednesday by API management platform vendor Gravitee indicates that upwards of half of the three million agents currently in use by organizations in the US and UK \u201care ungoverned and at the risk of going rogue.\u201d Based on a December 2025 survey of 750 IT executives and practitioners conducted by Opinion Matters, 
[&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":6842,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-6841","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/6841"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6841"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/6841\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/6842"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6841"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6841"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6841"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}