{"id":6752,"date":"2026-01-29T07:00:00","date_gmt":"2026-01-29T07:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=6752"},"modified":"2026-01-29T07:00:00","modified_gmt":"2026-01-29T07:00:00","slug":"nists-ai-guidance-pushes-cybersecurity-boundaries","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=6752","title":{"rendered":"NIST\u2019s AI guidance pushes cybersecurity boundaries"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>For years, US cybersecurity guidance rested on a reassuring premise: New technologies introduce new wrinkles, but not fundamentally new problems. Artificial intelligence, according to that view, is still software, just faster, more complex, and more powerful.<\/p>\n<p>The controls that protect traditional systems, the thinking went, can largely be adapted to protect AI, too. That assumption surfaced at a recent National Institute of Standards and Technology (NIST) <a href=\"https:\/\/www.nccoe.nist.gov\/get-involved\/attend-events\/cyber-ai-workshop-2\">workshop on AI and cybersecurity<\/a>.<\/p>\n<p>\u201cAI systems in many ways are just smart software, fancy software with a little bit extra,\u201d <a href=\"https:\/\/www.linkedin.com\/in\/victoriayanpillitteri\/\">Victoria Pillitteri<\/a>, supervisory computer scientist in the Computer Security Division at NIST, told attendees as she summarized that long-standing view. \u201cThat means we can leverage the robust body of [cybersecurity] knowledge that already exists with some modifications, with some considerations, but we do not and should not start from scratch,\u201d she added.<\/p>\n<p>But as discussions during the event turned to AI agents and adversarial manipulation, that concept began to fray. Experts described ways in which AI strains the fundamental assumptions those frameworks rely on, namely that systems behave deterministically, that boundaries between components are stable, and that humans remain firmly in control.<\/p>\n<p>Those concerns are now moving beyond internal discussion and into public standards development. On Jan. 8, NIST\u2019s Center for AI Standards and Innovation (CAISI) issued a <a href=\"https:\/\/public-inspection.federalregister.gov\/2026-00206.pdf\">formal Request for Information (RFI)<\/a> on the secure practices and methodologies of AI agent systems, one of the most challenging aspects of AI <a href=\"https:\/\/www.csoonline.com\/article\/4089732\/rethinking-identity-for-the-ai-era-cisos-must-build-trust-at-machine-speed.html\">when it comes to identity management and cybersecurity<\/a>.<\/p>\n<p>The RFI focuses on AI systems capable of taking autonomous actions that affect real-world environments and explicitly asks for input on novel risks, security practices, assessment methods, and deployment constraints.<\/p>\n<p>For CISOs, what should matter is that NIST is shifting from a broad, <a href=\"https:\/\/nvlpubs.nist.gov\/nistpubs\/ai\/NIST.AI.600-1.pdf\">principle-based AI risk management<\/a> framework toward more operationally grounded expectations, especially for systems that act without constant human oversight. What is emerging across NIST\u2019s AI-related cybersecurity work is a recognition that AI is no longer a distant or abstract governance issue, but a near-term security problem that the nation\u2019s standards-setting body is trying to tackle in a multifaceted way.<\/p>\n<h2 class=\"wp-block-heading\">NIST\u2019s wide-ranging cybersecurity and AI portfolio<\/h2>\n<p>Although the purpose of the workshop was to solicit feedback specifically on NIST\u2019s preliminary Cybersecurity Framework Profile for Artificial Intelligence (<a href=\"https:\/\/nvlpubs.nist.gov\/nistpubs\/ir\/2025\/NIST.IR.8596.iprd.pdf\">Cyber AI Profile<\/a>), which is an adaptation of the community profiles emerging from <a href=\"https:\/\/www.nist.gov\/cyberframework\">NIST\u2019s Cybersecurity Framework<\/a>, experts addressed many other NIST practices and methodology initiatives that deal with AI-related threats and security opportunities.<\/p>\n<p>These efforts show how NIST is attacking AI security from multiple angles \u2014 development, deployment, identity, privacy, and adversarial abuse \u2014 and include:<\/p>\n<p><strong>AI Risk Management Framework. <\/strong>Released on Jan. 26, 2023, <a href=\"https:\/\/www.nist.gov\/itl\/ai-risk-management-framework\">NIST\u2019s AI RMF<\/a> was developed to better manage risks to individuals, organizations, and society associated with AI. \u201cWhat we\u2019re trying to do with the AI Risk Management Framework is understand how we trust AI, which operates in many ways differently in some of these tasks that we know very well,\u201d particularly regarding how high-impact applications affect cybersecurity, <a href=\"https:\/\/www.linkedin.com\/in\/mcs729\/\">Martin Stanley<\/a>, principal researcher for AI and cybersecurity at NIST, said at the workshop.<\/p>\n<p><strong>Center for AI Standards and Innovation (CAISI). <\/strong><a href=\"https:\/\/www.nist.gov\/caisi\">NIST\u2019s CAISI<\/a> serves as the \u201cindustry\u2019s primary point of contact within the US government to facilitate testing and collaborative research related to harnessing and securing the potential of commercial AI systems,\u201d said <a href=\"https:\/\/www.linkedin.com\/in\/maiahamin\/\">Maia Hamin<\/a>, a technical staff member of CAISI, the center that develops best practices and standards for improving AI security and collaboration. It also \u201cleads evaluations and assessments of US and adversary AI systems, including adoption of foreign models, potential security vulnerabilities, or potential for foreign influence,\u201d she told workshop attendees.<\/p>\n<p><strong>NIST AI 100-2 E2025, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. <\/strong>This <a href=\"https:\/\/csrc.nist.gov\/pubs\/ai\/100\/2\/e2025\/final\">NIST report<\/a>, published in March 2025,provides a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). \u201cAdversarial machine learning or adversarial AI is the field that studies attacks on AI systems that exploit the statistical and data-driven nature of this technology,\u201d NIST research team supervisor <a href=\"https:\/\/www.linkedin.com\/in\/avassilev\/\">Apostol Vassilev<\/a> said at the workshop. \u201cHijacking, prompt injection, indirect prompt injection, data poisoning, all these things are part of the field of study of adversarial AI,\u201d he clarified.<\/p>\n<p><strong>Dioptra. <\/strong><a href=\"https:\/\/pages.nist.gov\/dioptra\/\">Dioptra<\/a> is a NIST software test platform for assessing the trustworthy characteristics of AI. \u201cYou have multiple dimensions along which you want to analyze these as you want to identify how accurate they are for a particular task,\u201d <a href=\"https:\/\/www.linkedin.com\/in\/harold-booth-38ba4539\/\">Harold Booth<\/a>, NIST supervisory computer scientist, said at the event. \u201cYou want to be able to identify how robust they are to various kinds of attacks,\u201d Booth said. \u201cYou want to know how well they do against various kinds of data sets.\u201d<\/p>\n<p><strong>NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models: An SSDF Community Profile. <\/strong>The <a href=\"https:\/\/nvlpubs.nist.gov\/nistpubs\/SpecialPublications\/NIST.SP.800-218A.pdf\">AI SSDF community profile<\/a> adds \u201cpractices, tasks, recommendations, considerations, notes, and informative references that are specific to AI model development throughout the software development life cycle.\u201d NIST\u2019s Booth told the workshop attendees, \u201cThis particular profile is very focused on what is new with respect to doing development for AI systems. So all the concerns that exist for normal software development still pertain. But what we were really focused on was what\u2019s new.\u201d<\/p>\n<p><strong>PETs Testbed. <\/strong>NIST\u2019s <a href=\"https:\/\/www.nist.gov\/itl\/applied-cybersecurity\/privacy-engineering\/pets-testbed\">PETs Testbed<\/a> provides the capability to investigate privacy-enhancing technologies (PETs) and their respective suitability for specific use cases, helping organizations evaluate and manage privacy risks. <a href=\"https:\/\/www.linkedin.com\/in\/gshowarth\/\">Gary Howarth<\/a>, who leads the privacy engineering program at NIST, said that within a few weeks, NIST will release a new version of its privacy framework that is complementary to AI risk management and cybersecurity threat modeling.<\/p>\n<p><strong>NIST Special Publication 800-63 Digital Identity Guidelines. <\/strong>NIST recently updated its 2017 <a href=\"https:\/\/www.nist.gov\/identity-access-management\/projects\/nist-special-publication-800-63-digital-identity-guidelines\">guidelines<\/a> on digital identity to better embrace the process and technical requirements for meeting digital identity assurance levels, given the rapid pace of digital technical change. <a href=\"https:\/\/www.linkedin.com\/in\/ryan-galluzzo-a100563b\/\">Ryan Galluzzo<\/a>, identity program lead for NIST Applied Cybersecurity Division, stressed at the workshop that \u201cAI agents are starting to change the kind of context and conversation around traditional cybersecurity controls. Within the context of this project, our intent is really to focus on those issues of access, those issues of how to identify agents that are operating within my enterprise.<\/p>\n<h2 class=\"wp-block-heading\">The limits of \u2018AI is just software\u2019<\/h2>\n<p>NIST\u2019s instinct to frame AI as an extension of traditional software allows organizations to reuse familiar concepts \u2014 risk assessment, access control, logging, defense in depth \u2014 rather than starting from zero. Workshop participants repeatedly emphasized that many controls do transfer, at least in principle.<\/p>\n<p>But some experts argue that the analogy breaks down quickly in practice. AI systems behave probabilistically, not deterministically, they say. Their outputs depend on data that may change continuously after deployment. And in the case of agents, they may take actions that were not explicitly scripted in advance.<\/p>\n<p>For CISOs, the risk is not that AI is unrecognizable, but that it appears recognizable enough to lull organizations into applying controls mechanically. Treating AI as \u201cjust another application\u201d can obscure new failure modes, particularly those involving <a href=\"https:\/\/www.csoonline.com\/article\/4110008\/top-cyber-threats-to-your-ai-systems-and-infrastructure.html\">indirect manipulation through data or prompts<\/a> rather than direct exploitation of code.<\/p>\n<p>\u201cAI agent systems really face a range of security threats and risks,\u201d CAISI\u2019s Hamin said at the workshop. \u201cSome of these overlap with traditional software, but others kind of arise from the <a href=\"https:\/\/www.csoonline.com\/article\/4109999\/agentic-ai-already-hinting-at-cybersecuritys-pending-identity-crisis.html\">unique challenge of combining AI model outputs<\/a>, which are non-deterministic, with the affordances and abilities of software tools.\u201d<\/p>\n<h2 class=\"wp-block-heading\">CISOs should watch out for framework fatigue<\/h2>\n<p>In kicking off the workshop, NIST senior policy advisor <a href=\"https:\/\/www.linkedin.com\/in\/kmegas\/\">Katerina Megas<\/a> explained that NIST reached out to the CISO community to ask them what they need in terms of AI security guidance.<\/p>\n<p>\u201cBefore we started down any path, we spoke to the CISO community, and we asked them, \u2018So how are you all dealing with artificial intelligence? How is this affecting your day-to-day? Is this something that keeps you up at night?\u2019 And overwhelmingly, the answer was yes, this is absolutely something that is top of mind for us. Our leadership is asking us, what are we doing?\u201d she said at the event.<\/p>\n<p>But the CISOs also told NIST that they were overwhelmed with AI documentation. A lot of these publications had some overlap, but were not identical, Megas said. \u201cIf you were a consumer of all of these documents, it was very difficult for you to look at them and understand how they relate to what you are doing and also understand how to identify where two documents may be talking about the same thing and where they overlap.\u201d<\/p>\n<p>\u201cIf the guidance is super long, then people may not actually use it,\u201d one workshop attendee, <a href=\"https:\/\/www.linkedin.com\/in\/naveenkm94\/\">Naveen Konrajankuppam Mahavishnu<\/a>, co-founder and CTO at Aira Security, tells CSO, suggesting that much of the material can be reduced to more digestible components.<\/p>\n<p>\u201cWe can have a very detailed version, maybe a hundred pages long, but also have some sort of checklist that kind of summarizes the entire 100-page paper or something into a few pages where people can easily consume it, and then they can start implementing it,\u201d Mahavishnu says.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>For years, US cybersecurity guidance rested on a reassuring premise: New technologies introduce new wrinkles, but not fundamentally new problems. Artificial intelligence, according to that view, is still software, just faster, more complex, and more powerful. The controls that protect traditional systems, the thinking went, can largely be adapted to protect AI, too. That assumption [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":6753,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-6752","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/6752"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6752"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/6752\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/6753"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6752"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6752"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6752"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}