{"id":7969,"date":"2026-04-30T09:00:00","date_gmt":"2026-04-30T09:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=7969"},"modified":"2026-04-30T09:00:00","modified_gmt":"2026-04-30T09:00:00","slug":"stopping-the-quiet-drift-toward-excessive-agency-with-re-permissioning","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=7969","title":{"rendered":"Stopping the quiet drift toward excessive agency with re-permissioning"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>In their infancy, LLMs were not difficult to contain. You gave a prompt; they responded, and if something was wrong it was usually \u201cjust text.\u201d This could take the form of a summary that missed the best bits, a tone-deaf line or a wordy sentence.<\/p>\n<p>But then, LLMs were co-opted as the core reasoning layer inside AI agents, and the game changed overnight. Agents connect databases and business applications, interact with external systems and execute multi-step tasks.<\/p>\n<p>So, the question isn\u2019t only, \u201cHow capable is the model?\u201d The more important question, I believe, is, \u201cHow are AI agents being treated and permissioned inside your environment?\u201d<\/p>\n<p>The failures that sting aren\u2019t limited to moments when an agent spouts inaccuracies or conjures hallucinations; they also occur when the agent takes actions it shouldn\u2019t, simply because it has the capability, the permissions and the autonomy to do so.<\/p>\n<h2 class=\"wp-block-heading\">The shift from answering to execution<\/h2>\n<p>I\u2019m seeing interoperability accelerate agent adoption. 
Standards like the Model Context Protocol (MCP) are making it easier for models to connect with tools and data sources, while agent-to-agent approaches allow agents to exchange context, goals and actions across workflows.<\/p>\n<p>More connections mean more reach, and more reach means more room for things to go wrong.<\/p>\n<p>With AI spending forecast to hit <a href=\"https:\/\/www.gartner.com\/en\/newsroom\/press-releases\/2026-1-15-gartner-says-worldwide-ai-spending-will-total-2-point-5-trillion-dollars-in-2026\">$2.5 trillion<\/a> in 2026, and with <a href=\"https:\/\/www.gartner.com\/en\/newsroom\/press-releases\/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025\">40%<\/a> of enterprise apps expected to embed task-specific AI agents by the end of 2026, the real question is no longer about adoption; it\u2019s about visibility and control. With numbers like these, it is clear that AI integration is scaling quickly, but there is a security gap.<\/p>\n<p>While AI security checks are catching up quickly, rising from 37% in 2025 to <a href=\"https:\/\/www.weforum.org\/publications\/global-cybersecurity-outlook-2026\/digest\/\">64%<\/a> in 2026, that still leaves over a third without a formal assessment. 
This is why the right permissioning often lags behind.<\/p>\n<p>As I have observed, when agents operate across multiple tools and systems, organizations are no longer managing just \u201cAI output quality.\u201d They\u2019re managing action pathways, often in environments where it\u2019s difficult to pinpoint where a request went wrong, where an input was manipulated, or which step triggered the final action. Permissioning, in this context, becomes the difference between useful automation and unauthorized behavior at scale.<\/p>\n<h2 class=\"wp-block-heading\">Excessive agency is directly proportional to over-permissioning<\/h2>\n<p>Organizations are worried about the level of autonomy AI introduces into their operational framework. Nearly <a href=\"https:\/\/www.itpro.com\/technology\/artificial-intelligence\/workers-cant-identify-work-produced-by-ai-agents-business-risks\">three-quarters<\/a> of organizations say agents often receive more access than necessary. It\u2019s this excessive agency that needs to be reined in.<\/p>\n<p>In practice, unchecked autonomy within a particular workflow means the agent can access systems it doesn\u2019t need, execute actions outside its predetermined role and interact with external systems beyond predefined parameters. This means the biggest risk organizations face is no longer a \u2018wrong answer\u2019 but an \u2018unauthorized action,\u2019 which may involve unintended data exposure, unauthorized commands or integrity-impacting changes that are difficult to unwind.<\/p>\n<p>Over-permissioning is a sneaky beast. 
I\u2019ve seen it slowly creep into agentic AI workflows, usually driven by three common factors:<\/p>\n<ul>\n<li>The people in charge, in their \u2018wisdom,\u2019 enable a broad range of tools\/APIs to make the agent even more useful.<\/li>\n<li>Integration problems arise, and elevated access is granted to make things work smoothly, leaving extra permissions that exceed the safe-use threshold.<\/li>\n<li>Agents are allowed to decide with fewer human checkpoints, even for actions with tangible impact, often out of blind trust in AI and a focus on being an execution-first business.<\/li>\n<\/ul>\n<h2 class=\"wp-block-heading\">3 systemic risks in agentic AI workflows<\/h2>\n<p>Less than <a href=\"https:\/\/www.ajg.com\/uk\/news-and-insights\/features\/ai-adoption-and-risk-benchmarking-2026\/\">half<\/a> of businesses have adopted formal risk management frameworks for AI, and I believe that\u2019s where the real challenge with agentic AI begins. It\u2019s not about what it can do, but that its actions become harder to observe and govern once it operates across connected systems.<\/p>\n<p>First, many models are effectively black boxes. Opaque internal workings make it harder to verify outputs, explain decisions or confidently audit what happened after the fact.<\/p>\n<p>Second, capability invites overreliance. In conversations I\u2019ve had with CISOs, a consistent theme emerges: as agents appear to \u201chandle it,\u201d humans step back and critical reviews thin out. The result is mistakes and biases persisting longer because fewer people are watching closely, which is especially dangerous in high-stakes environments.<\/p>\n<p>Third, attackers don\u2019t need to compromise the model itself if they can compromise what the agent reads or the services feeding it. 
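<\/p>\n<p>A minimal sketch of that defense, assuming nothing beyond the Python standard library (the patterns and the <code>screen_upstream_content<\/code> helper are invented for illustration, not taken from any real framework), is to screen content fetched from upstream sources for instruction-like phrasing before the agent reads it:<\/p>

```python
import re

# Illustrative patterns only; production systems would pair simple rules
# like these with trained classifiers or vendor-supplied injection filters.
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"disregard (the )?system prompt", re.IGNORECASE),
    re.compile(r"you are now (a|an|the) ", re.IGNORECASE),
]

def screen_upstream_content(text: str) -> dict:
    """Flag instruction-like phrasing in content pulled from upstream
    sources (web pages, tickets, RAG chunks) before it reaches the agent."""
    matched = [p.pattern for p in SUSPECT_PATTERNS if p.search(text)]
    return {"suspicious": bool(matched), "matched": matched}
```

<p>Pattern matching is only a first line of defense, but it captures the core stance: anything an agent reads from a connected system is untrusted input, never instructions. <\/p>\n<p>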
Connected workflows create supply-chain-style attack paths, where upstream manipulation becomes the lever.<\/p>\n<h2 class=\"wp-block-heading\">The road toward re-permissioning: Controlling agency<\/h2>\n<p>Re-permissioning is not about limiting the autonomy of AI agents; it is about controlling them appropriately. AI agents execute, and we need them to execute well, but we must run a continuous permission audit to identify agents slowly climbing the \u2018agency\u2019 ladder.<\/p>\n<p>Organizations must have complete visibility so they can evaluate agentic AI interactions, flag irregular behaviors, verify that permissions conform to policy and run tabletop and real-world exercises, such as prompt-injection tests, to guard against vulnerabilities. Also, adopt a human-in-the-loop workflow in which human oversight is mandatory when sensitive data, financial decisions, access changes or major operational updates are involved.<\/p>\n<p>It\u2019s also necessary to avoid giving agents tools \u2018just in case they need them.\u2019 Instead, implement least-privilege context sharing, limiting the agent\u2019s view and tool access to only what the task truly requires.<\/p>\n<p>Finally, let me emphasize that you shouldn\u2019t forget the agentic AI supply chain, which includes integrations, libraries, APIs and third parties. These need to be vetted, patched and secured with tight network controls to build a trusted ecosystem and reduce the risk of upstream manipulation.<\/p>\n<p>If AI agents are treated like harmless helpers, they\u2019ll be permissioned like harmless helpers, and excessive agency becomes normalized.<\/p>\n<p>We must pump the brakes on the inevitability of unchecked autonomy. Take control of broader functionality and permissions; focus on instilling oversight where it matters. 
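<\/p>\n<p>The least-privilege and human-in-the-loop controls described above can be sketched in a few lines of Python (a hypothetical illustration; the task names, tool names and <code>authorize<\/code> helper are invented for this example rather than drawn from any real agent framework):<\/p>

```python
# Deny-by-default authorization for agent tool calls: each task gets an
# explicit allow-list, and sensitive tools also require human approval.

TASK_TOOL_ALLOWLIST = {
    "summarize_ticket": {"read_ticket"},
    "refund_customer": {"read_ticket", "issue_refund"},
}

SENSITIVE_TOOLS = {"issue_refund", "change_access", "export_data"}

def authorize(task: str, tool: str, human_approved: bool = False) -> bool:
    """Allow a tool call only if the tool is allow-listed for the task and,
    when the tool is sensitive, a human has explicitly approved the call."""
    if tool not in TASK_TOOL_ALLOWLIST.get(task, set()):
        return False  # least privilege: tool not needed for this task
    if tool in SENSITIVE_TOOLS and not human_approved:
        return False  # human-in-the-loop checkpoint for impactful actions
    return True
```

<p>The design choice that matters is the default: a tool absent from the task\u2019s allow-list is simply unreachable, so an agent cannot drift into extra capability unless someone deliberately widens the list. <\/p>\n<p>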
Agents can enhance operations, but only if they\u2019re governed as actors within guardrails and not trusted by default.<\/p>\n<p><strong>This article is published as part of the Foundry Expert Contributor Network.<\/strong><br \/><strong><a href=\"https:\/\/www.csoonline.com\/expert-contributor-network\/\">Want to join?<\/a><\/strong><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>In their infancy, LLM models were not difficult to contain. You gave a prompt; they responded, and if something was wrong it was usually \u201cjust text.\u201d This could take the form of a summary that missed the best bits, a tone-deaf line or a wordy sentence. But then, agents were co-opted as the core reasoning [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":7970,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-7969","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/7969"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7969"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/7969\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/7970"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7969"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/
index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7969"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7969"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}