{"id":7172,"date":"2026-02-19T21:30:36","date_gmt":"2026-02-19T21:30:36","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=7172"},"modified":"2026-02-19T21:30:36","modified_gmt":"2026-02-19T21:30:36","slug":"us-dominance-of-agentic-ai-at-the-heart-of-new-nist-initiative","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=7172","title":{"rendered":"US dominance of agentic AI at the heart of new NIST initiative"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>This week, the US National Institute of Standards and Technology (NIST) announced a new listening exercise, the <a href=\"https:\/\/www.nist.gov\/caisi\/ai-agent-standards-initiative\" target=\"_blank\" rel=\"noopener\">AI Agent Standards Initiative<\/a>, which it hopes will provide a roadmap for addressing agentic AI hurdles and, it said, ensure that the technology \u201cis widely adopted with confidence.\u201d<\/p>\n<p>AI agents, which have now ascended to the status of enterprise tools, are designed to be autonomous and powerful \u2013 qualities whose boundaries and limits are not always easy to define or understand. 
The risk this poses in terms of misuse, error, and unintended consequences is striking.<\/p>\n<p>However, the AI Agent Standards Initiative will work under the direction of the Center for AI Standards and Innovation (CAISI), <a href=\"https:\/\/www.csoonline.com\/article\/4027604\/white-house-ai-plan-heavy-on-cyber-light-on-implementation.html\" target=\"_blank\" rel=\"noopener\">set up within NIST<\/a> last June to replace the Biden administration\u2019s US AI Safety Institute, and its remit will be broader than security alone.<\/p>\n<p>Although CAISI may appear to be a re-naming of the body it replaced, its mandate is now wider, and more overtly political. Bluntly, \u201cCAISI aims to foster the emerging ecosystem of industry-led AI standards and protocols while cementing US dominance at the technological frontier,\u201d said NIST\u2019s press release.<\/p>\n<p>This will mean fostering US leadership in international standards bodies and open-source AI agent development, and advancing research into AI agent security and use cases. Interoperability \u2013 the ability of agents from different companies to work together \u2013 will also be a priority.<\/p>\n<p>\u201cAbsent confidence in the reliability of AI agents and interoperability among agents and digital resources, innovators may face a fragmented ecosystem and stunted adoption,\u201d NIST said. 
\u201cTo address this concern, NIST, including CAISI, aims to foster industry-led technical standards and protocols that build public trust in AI agents, catalyze an interoperable agent ecosystem, and diffuse their benefits to all Americans and across the world.\u201d<\/p>\n<h2 class=\"wp-block-heading\">More concerns<\/h2>\n<p>Stories of agentic AI missteps have been hard to miss recently, from the <a href=\"https:\/\/www.csoonline.com\/article\/4005965\/first-ever-zero-click-attack-targets-microsoft-365-copilot.html\" target=\"_blank\" rel=\"noopener\">2025 \u2018EchoLeak\u2019 vulnerability<\/a> in which Microsoft 365 Copilot was used to exfiltrate data, to the sudden <a href=\"https:\/\/www.csoonline.com\/article\/4129867\/what-cisos-need-to-know-about-clawdbot-i-mean-moltbot-i-mean-openclaw.html\" target=\"_blank\" rel=\"noopener\">popularity of OpenClaw<\/a> (formerly known as Moltbot and Clawdbot), a helpful agent which also opens a door for attackers to roam unseen around a user\u2019s applications and data.<\/p>\n<p>And in November, the Information Technology Industry Council, a global trade association, identified a wide range of <a href=\"https:\/\/www.executivegov.com\/articles\/iti-agentic-ai-risks-policy-recommendations\" target=\"_blank\" rel=\"noopener\">agentic security and accountability risks<\/a> including \u2018jagged intelligence,\u2019 the tendency of AI models to complete complex tasks while failing at much simpler ones. These errors could expose enterprises to unpredictable failures in automated environments, it said.<\/p>\n<h2 class=\"wp-block-heading\">Moving too slowly<\/h2>\n<p>According to <a href=\"https:\/\/www.linkedin.com\/in\/gary-w-phipps\/\" target=\"_blank\" rel=\"noopener\">Gary Phipps<\/a>, head of customer success at agentic AI security startup Helmet Security, a problem with NIST is that its initiatives are being outpaced by real-world developments. 
\u201cHistory says that anything NIST comes up with will likely not emerge fast enough to address agentic AI,\u201d said Phipps.<\/p>\n<p>\u201cFrom the time NIST announced it was working on the AI Risk Management Framework to the day it published the final version was roughly two years,\u201d he noted. \u201cIn that same window, the entire generative AI landscape was born, scaled, and began reshaping enterprise security. Now we\u2019re doing it again with agentic AI, and NIST\u2019s answer is more RFIs, more listening sessions, more convening.\u201d<\/p>\n<p>NIST has issued a <a href=\"https:\/\/www.nist.gov\/news-events\/news\/2026\/01\/caisi-issues-request-information-about-securing-ai-agent-systems\" target=\"_blank\" rel=\"noopener\">request for information<\/a> (RFI) on agentic AI threats, safeguards, and assessment methods; input is due by March 9. In addition, CAISI will hold \u201clistening sessions\u201d in April on sector-specific barriers to AI adoption, NIST said.<\/p>\n<p>NIST\u2019s statement about \u201ccementing US dominance at the technological frontier\u201d is, Phipps said, \u201ca bold thing to say about an initiative whose first concrete deliverable is a listening session in April.\u201d <\/p>\n<p>He pointed out, \u201cStandards don\u2019t create dominance: they follow it. <a href=\"https:\/\/www.csoonline.com\/article\/4123196\/nists-ai-guidance-pushes-cybersecurity-boundaries.html\" target=\"_blank\" rel=\"noopener\">The AI Risk Management Framework (RMF)<\/a> is proof. 
It took two years to produce, and by the time it was final, the industry had largely already formed its own views on AI risk.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>This week, the US National Institute of Standards and Technology (NIST) announced a new listening exercise, the AI Agent Standards Initiative, which it hopes will provide a roadmap for addressing agentic AI hurdles and, it said, ensure that the technology \u201cis widely adopted with confidence.\u201d AI agents, which have now ascended to the status of [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":7173,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-7172","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/7172"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7172"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/7172\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/7173"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7172"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7172"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fw
p%2Fv2%2Ftags&post=7172"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}