NIST’s attempts to secure AI yield many questions, no answers

Tags:

When the US National Institute of Standards and Technology (NIST) late last week published a concept paper on how enterprises can protect themselves from AI systems, it focused on categorizing the problems without suggesting any specific mitigation tactics. 

For that, the organization turned to the industry and asked for suggestions.

“NIST is interested in feedback on the concept paper and proposed action plan, and invites all interested parties to join the NIST Overlays for Securing AI (#NIST-Overlays-Securing-AI) Slack channel,” the page describing the document said. “Through the Slack channel, stakeholders can contribute to the development of these overlays, get updates, engage in facilitated discussions with the NIST principal investigators and other subgroup members, and provide real-time feedback and comments.”

Analysts and security industry advocates see the challenges of AI security controls as extensive, but that’s mostly because enterprises are now using—or fighting—AI in so many different ways. 

From a technical NIST perspective, the group said that it wants to tweak its current rules to accommodate AI controls, as opposed to creating something new. Specifically, NIST said that it wants to build on top of NIST Special Publication (SP) 800-53 controls. This provides the core NIST cybersecurity protections dealing with traditional defense issues including access control, awareness, audit, incident response, contingency planning and risk assessment.

Building on existing rules makes sense

“The decision to anchor these overlays in SP 800-53 controls demonstrates sophisticated strategic thinking. Organizations already possess institutional knowledge around these frameworks,” said Aaron Perkins, founder at Market-Proven AI. “They understand implementation processes, have established assessment methodologies, and most importantly, their teams know how to work within these structures. This familiarity eliminates one of the most significant barriers to effective AI security by removing the learning curve that accompanies entirely new approaches.”

Forrester Senior Analyst Janet Worthington agreed that leveraging an existing NIST framework makes sense. 

“Overlays are a natural extension, as many organizations are already familiar with SP 800-53, offering flexibility to tailor security measures to specific AI technologies and use cases while integrating seamlessly with existing NIST, ” Worthington said. “These overlays are specifically crafted to safeguard the confidentiality, integrity, and availability of critical AI components.”

Challenges to consider

The NIST paper talked about various categories of AI integration that forced serious cybersecurity considerations, including: using genAI to create new content; fine-tuning predictive AI; using single AI agents as well multiple agents; and security controls for AI developers. 

The potentially most challenging element of securing AI in enterprises is visibility. But the visibility problem takes many forms, including visibility into what the model makers train on and how the models are coded to make recommendations, how enterprise data fed into the models is used, the copyright, patent and other legal protections attached to the data that was used for training, how much AI is being used in SaaS apps and cloud deployments, and how employees, contractors and third-parties are using genAI globally.

If CISOs don’t have meaningful visibility into all of those issues, the task of securing the information that flows into and out of those models is close to impossible.

Many cybersecurity specialists were not sure how the tidal wave of AI activity, much of it deployed in enterprises with seemingly insufficient due diligence beforehand, can now be properly secured. 

Assume a doomsday scenario

“AI was all the hype at RSA, Blackhat and Defcon. It was at the beginning and end of every vendor sentence,” said Jeff Mann, an industry veteran who today serves as the senior information security consultant at Online Business Systems. “It was amazing how AI was going to solve all of the problems [and] we were also discovering amazing vulnerabilities.”

Mann also stressed the visibility issues, especially in terms of how AI is deployed company-wide. “Have an inventory and know what you are dealing with. But I am not sure it’s even possible to take a complete inventory of what is out there. You have to assume a doomsday scenario.”

Another longtime cybersecurity observer, Brian Levine, managing director at Ernst & Young and the CEO of a directory of former government/military security experts called FormerGov, sees much of the AI security challenge coming from how extensively it is being used for almost every business function — and how little it was tested beforehand.

“We are seeing that AI is becoming ubiquitous, and executives rushed to use it before they fully understood it and could grapple with the security issues,” Levine said. “[AI] is a little bit of a black box and everyone was rushing to incorporate it into everything they were doing. Over time, the more you outsource technology, the more risk you are taking.”

Inventory visibility priority one

Zach Lewis, the CIO and CISO at the University of Health Sciences and Pharmacy in St. Louis, Missouri also put visibility at the top of his AI risk list. 

“You can’t patch what you don’t know is running. That applies to AI, too. NIST should make AI model inventories step one,” Lewis said. “If companies don’t even know which models employees are using, the rest of the controls don’t matter.”

It’s also often necessary to assume that all AI is already poisoned, given that it is the only safe assumption, noted Audian Paxson, principal technical strategist at Ironscales.

“Assume every AI model in your environment will be weaponized. That means implementing adversarial robustness at the model level, essentially teaching your defensive AI to expect lies. Think of it like training a boxer by having them spar with dirty fighters,” Paxson said. “Your models need to learn from poisoned data attempts, prompt injection attacks, and model inversion techniques before they hit production.”

Paxson suggested extending the assume-the-worst thinking to all AI security strategies. 

“When you’re thinking about best practices for securing ML pipelines and training data, start with the assumption your training data is already compromised because it probably is. Implement differential privacy and regular model health checks essentially asking your AI if it feels poisoned,” Paxson said. “Use federated learning where possible so sensitive data never centralizes. Most importantly, implement model retirement dates. An AI model trained six months ago is like milk left on the counter. It’s probably gone bad.”

Carefully review existing security tools

Forrester’s Worthington stressed that CISOs need to carefully review all current cybersecurity tools because they may not be especially effective at protecting the enterprise from relatively new AI threats.

“AI agents and agentic systems introduce new risks that traditional security models are ill-equipped to manage,” Worthington said. “We are seeing growing concerns around the lack of mature detection surfaces, the risk of cascading failures, and the challenge of securing intent rather than just outcomes.”

Vince Berk, partner at Apprentis Ventures, was even more skeptical that current standards efforts will be able to make a meaningful difference in protecting companies from AI threats. 

“It is fantastic that we have the Institute for Standards to provide us a guidepost to manage our AI risks by. However, standards are typically formed after a large body of experiences have been gathered, and a common approach or vision to a particular area of engineering starts to form. For AI cybersecurity problems, this is very far from the case,” Berk said. “Every day, new cases are discovered that were unanticipated and raise questions about the utility in a broad sense from AI at all.”

He added that the nature of NIST might not make it the best source for such guidelines. “For now, a better place for these sorts of controls would be CIS or OWASP,” Berk said.

What if AI floods the comments?

Erik Avakian, technical counselor at Info-Tech Research Group, said that he applauds NIST’s efforts to reach out for community feedback, but he also cautioned that it might backfire. For example, what if AI agents flood the comments with self-serving suggestions?

Such an AI comment flood could do various bad things, he said, including making the final recommendations “bad guy friendly” or simply “AI poisoning the actual feedback.” If that attack happened, Avakian said, the best response from NIST would be to conduct in-person interviews. “That would be the only way. Maybe human interviews or regional workshops where they bring people in,” he said.

Although Avakian said that the initial NIST report is “certainly a welcome start, there are potential risks that the overlays may not go far enough as they relate to emerging attack vectors unique to AI. The [report] addresses fundamentals such as model integrity and access control, but these alone might not dig deep enough into addressing cutting-edge attack vectors unique to AI,” Avakian said.

“Advanced threat scenarios could slip through the cracks,” he said. “In addition, they might not go far enough when it comes to people-related risks such as insider misuse, shadow AI adoption, or common human issues such as errors, omissions, and mistakes. Many AI systems and architectures also vary widely, and the overlays could benefit from more granularity to truly fit the diversity we’re seeing across real-world AI deployments.”

Categories

No Responses

Leave a Reply

Your email address will not be published. Required fields are marked *