US dominance of agentic AI at the heart of new NIST initiative

This week, the US National Institute of Standards and Technology (NIST) announced a new listening exercise, the AI Agent Standards Initiative, which it hopes will provide a roadmap for addressing agentic AI hurdles and, it said, ensure that the technology “is widely adopted with confidence.”

AI agents, which have now ascended to the status of enterprise tools, are designed to be autonomous and powerful: two ambiguous but ominous qualities whose boundaries and limits are not always easy to define or understand. The risk this poses in terms of misuse, error, and unintended consequences is striking.

However, the AI Agent Standards Initiative will work under the direction of the Center for AI Standards and Innovation (CAISI), set up within NIST last June to replace the Biden administration’s US AI Safety Institute, and its remit will be broader than security alone.

Although CAISI may appear to be simply a renaming of its predecessor, its mandate is now wider, and more overtly political. Bluntly, “CAISI aims to foster the emerging ecosystem of industry-led AI standards and protocols while cementing US dominance at the technological frontier,” said NIST’s press release.

This will mean promoting US leadership in international standards bodies, supporting open-source AI agent development, and advancing research into AI agent security and use cases. Interoperability – the ability of agents from different companies to work together – will also be a priority.

“Absent confidence in the reliability of AI agents and interoperability among agents and digital resources, innovators may face a fragmented ecosystem and stunted adoption,” NIST said. “To address this concern, NIST, including CAISI, aims to foster industry-led technical standards and protocols that build public trust in AI agents, catalyze an interoperable agent ecosystem, and diffuse their benefits to all Americans and across the world.”

More concerns

Stories of agentic AI missteps have been hard to miss recently, from the 2025 ‘EchoLeak’ vulnerability, in which Microsoft 365 Copilot could be exploited to exfiltrate data, to the sudden popularity of OpenClaw (formerly known as Moltbot and Clawdbot), a helpful agent that also opens a door for attackers to roam unseen through a user’s applications and data.

And in November, the Information Technology Industry Council, a global trade association, identified a wide range of agentic security and accountability risks including ‘jagged intelligence,’ the tendency of AI models to complete complex tasks while failing at much simpler ones. These errors could expose enterprises to unpredictable failures in automated environments, it said.

Moving too slowly

According to Gary Phipps, head of customer success at agentic AI security startup Helmet Security, a problem with NIST is that its initiatives are being outpaced by real-world developments. “History says that anything NIST comes up with will likely not emerge fast enough to address agentic AI,” said Phipps.

“From the time NIST announced it was working on the AI Risk Management Framework to the day it published the final version was roughly two years,” he noted. “In that same window, the entire generative AI landscape was born, scaled, and began reshaping enterprise security. Now we’re doing it again with agentic AI, and NIST’s answer is more RFIs, more listening sessions, more convening.”

NIST has issued a request for information (RFI) on agentic AI threats, safeguards, and assessment methods; input is due by March 9. In addition, CAISI will hold “listening sessions” in April on sector-specific barriers to AI adoption, NIST said.

NIST’s statement about “cementing US dominance at the technological frontier” is, Phipps said, “a bold thing to say about an initiative whose first concrete deliverable is a listening session in April.”

He pointed out, “Standards don’t create dominance: they follow it. The AI Risk Management Framework (RMF) is proof. It took two years to produce, and by the time it was final, the industry had largely already formed its own views on AI risk.”
