The Trump administration’s decision to ban AI company Anthropic from Pentagon assets and other government systems as a “supply chain risk” could force CISOs into a position few have faced before: preparing to identify, isolate, and potentially remove a specific AI technology from across their organizations without a clear understanding of where it resides or how deeply it is embedded.
While the administration is defending the designation in federal court as a legitimate national security and supply chain measure, the practical burden is already shifting to enterprises, particularly government contractors that may soon be expected to prove they are no longer using the company’s technology in any form.
“It’s basically impossible … to say with a high degree of confidence they removed Anthropic from everything in their environment,” Tom Pace, CEO of NetRise, tells CSO, capturing the problem in operational terms.
That difficulty stems from a longstanding gap that is now becoming unavoidable. Most enterprises do not maintain a complete or current inventory of how AI systems are used across their environments, nor do they fully understand how those systems are embedded across their networks.
AI models may be accessed directly through APIs, embedded in internally developed applications, incorporated into developer workflows, or introduced indirectly through third-party software and services. In many cases, those dependencies are invisible to central security teams, particularly in organizations where experimentation with generative AI has outpaced governance.
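The discovery problem described above can be approached incrementally, starting with the organization's own source trees. Below is a minimal sketch of such a scan; the indicator patterns (SDK import names, the public API hostname, model-identifier prefixes) are illustrative starting points, not an exhaustive or authoritative list.

```python
"""Sketch: scan a source tree for likely Anthropic touchpoints.

The patterns below are illustrative of the kinds of indicators a
security team might search for; a real inventory would extend them.
"""
import os
import re

# Illustrative indicators, not an exhaustive list
INDICATORS = [
    re.compile(r"\bimport\s+anthropic\b"),    # Python SDK import
    re.compile(r"@anthropic-ai/sdk"),         # Node SDK package name
    re.compile(r"api\.anthropic\.com"),       # direct API endpoint
    re.compile(r"\bclaude-[\w.-]+", re.I),    # model-identifier prefix
]

SCAN_EXTENSIONS = (".py", ".js", ".ts", ".java", ".go",
                   ".yaml", ".yml", ".json", ".env")

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Walk a directory and report (path, line_no, indicator) hits."""
    hits = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(SCAN_EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as fh:
                for line_no, line in enumerate(fh, 1):
                    for pattern in INDICATORS:
                        if pattern.search(line):
                            hits.append((path, line_no, pattern.pattern))
                            break  # one hit per line is enough
    return hits
```

A scan like this only catches direct, textual references; dependencies pulled in through third-party software remain invisible to it, which is precisely the gap the experts below describe.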
Even so, the Pentagon is moving aggressively to strip Anthropic technology from both its internal networks and those of contractors. A March 6 Pentagon memo directs military components to remove Anthropic products from systems and networks within 180 days, prioritizing mission-critical environments such as nuclear, missile defense, and cyber operations.
The directive also requires contracting officers to notify vendors and obligates contractors to certify compliance within the same timeframe, effectively extending the requirement across the defense industrial base.
The administration’s actions mark a shift in how AI technologies are treated in the national security context. Models are no longer just tools; they are being treated as regulated components of the supply chain. For CISOs, that shift introduces a new class of risk — one that combines policy uncertainty, technical opacity, and potentially aggressive compliance timelines.
A mandate that assumes visibility CISOs don’t yet have
On paper, the Pentagon’s directive follows a familiar pattern. It establishes a deadline, prioritizes critical systems, cascades requirements to contractors, and allows only limited exemptions under controlled conditions. Similar frameworks have been used in past efforts to remove specific vendors from federal systems, particularly in telecommunications.
What distinguishes the Anthropic case is the nature of the technology involved. Unlike hardware or traditional software components, AI systems are not easily enumerated. A single model can be accessed through multiple interfaces, embedded in different applications, or wrapped in layers of tooling that obscure its origin. Dependencies can also be transitive, appearing through libraries, plugins, or services integrated into broader systems.
That complexity makes the first step — identifying where Anthropic is used — far more difficult than the directive implies. Pace likens the challenge to the industry’s experience with Log4j, where organizations struggled to locate a widely used component buried across sprawling software ecosystems. In the case of AI, the problem is compounded by the fact that not all dependencies behave like traditional software artifacts — or are even visible as such.
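One concrete starting point for that enumeration problem is the dependency manifests developers already commit. A minimal sketch, assuming `requirements.txt` and `package.json` files (lock files and other ecosystems would need their own parsers), might look like:

```python
"""Sketch: flag declared dependencies matching a vendor's package names.

The marker strings are common published package names; this covers only
two manifest formats and none of the transitive dependency graph.
"""
import json
from pathlib import Path

# Illustrative package-name markers
VENDOR_MARKERS = ("anthropic", "@anthropic-ai/")

def flag_manifest(path: Path) -> list[str]:
    """Return dependency names in a manifest that match a vendor marker."""
    flagged = []
    if path.name == "requirements.txt":
        for line in path.read_text().splitlines():
            # Strip version pins to isolate the package name
            name = line.split("==")[0].split(">=")[0].strip().lower()
            if name and any(m in name for m in VENDOR_MARKERS):
                flagged.append(name)
    elif path.name == "package.json":
        data = json.loads(path.read_text())
        for section in ("dependencies", "devDependencies"):
            for name in data.get(section, {}):
                if any(m in name.lower() for m in VENDOR_MARKERS):
                    flagged.append(name)
    return flagged
```

As with Log4j, the declared manifests are only the visible layer; transitive dependencies surfaced through libraries, plugins, and bundled services require resolving the full dependency graph.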
The lack of visibility is reflected in broader industry readiness. According to Cisco’s 2025 AI Readiness Index, only 31% of organizations say they are fully equipped to secure agentic AI systems, while just 27% report having granular access controls over AI systems and datasets. Those figures suggest that even basic governance over AI usage remains incomplete across much of the enterprise landscape, leaving organizations poorly positioned to respond to a directive that assumes a level of insight many do not yet have.
Compliance pressure before policy clarity
For organizations that do business with the federal government, the implications extend beyond technical challenges into legal and contractual risk. Alex Major, co-chair of the government contracts and global trade practice at law firm McCarter & English, tells CSO that supply chain designations like the Anthropic ban tend to move quickly from policy statements into enforceable requirements, even when formal acquisition rules lag.
“You can’t manage what you haven’t found,” Major says, emphasizing that the immediate task for CISOs is to determine where Anthropic dependencies exist across their systems and supplier networks.
That process, he says, must be approached as both a technical and a compliance exercise. Organizations may need to document how they identified affected systems, what steps they took to remove or replace components, and how they validated that those steps were effective. In a certification environment, the ability to demonstrate due diligence can be as important as the technical outcome.
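That due-diligence trail can be as simple as a structured record per affected system, capturing the three elements Major describes: identification, remediation, and validation. A minimal sketch; the field names are illustrative and not tied to any certification scheme, and the record values are hypothetical:

```python
"""Sketch: a minimal, auditable remediation-evidence record.

All field names and example values are illustrative, not drawn from
any actual certification or acquisition requirement.
"""
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class RemediationRecord:
    system: str       # affected system or application
    finding: str      # how the dependency was identified
    action: str       # removal or replacement step taken
    validation: str   # how effectiveness was verified
    completed: date   # when the work was finished

# Hypothetical example entry
record = RemediationRecord(
    system="internal-support-portal",
    finding="vendor SDK found via dependency manifest scan",
    action="replaced vendor SDK with in-house summarization service",
    validation="re-scan clean; regression tests passed",
    completed=date(2025, 9, 1),
)
```

In a certification environment, a trail like this is what lets an organization demonstrate due diligence even if a stray dependency later surfaces.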
At the same time, Major cautions against acting too quickly in regulated environments without appropriate controls.
“Slow down,” he advises. “Get your supply chain analysis in shape and don’t do anything until those things have happened.” He adds, “If you’re moving quickly, the compliance risk of a hasty removal in a sensitive environment can exceed the compliance risk of a deliberate, documented transition plan.”
No agreement on when to act
That tension is reflected in the lack of consensus among experts about how CISOs should respond in the near term. The Pentagon’s directive provides a clear signal for defense-related systems, but the broader policy landscape remains unsettled, leaving organizations to interpret how aggressively to act.
Daniel Bardenstein, CEO and co-founder of Manifest, argues that the current policy framework does not yet provide the specificity needed to justify sweeping changes across enterprise environments. “It is not an executive order,” he tells CSO. “It’s not an OMB memo.”
He describes the guidance as incomplete and insufficiently detailed to translate into operational requirements, particularly given the complexity of AI systems and the existing gaps in software supply chain security.
Pace takes a more pragmatic view for organizations already operating within federal environments. “If you are part of the federal government, you have to remove all evidence and use of Anthropic, period,” he says.
At the same time, Pace acknowledges that many organizations are likely to delay action until requirements are formalized across procurement and regulatory frameworks. That hesitation reflects a broader uncertainty about how to respond to a policy that is still evolving, even as early enforcement signals emerge.
The visibility problem predates AI
The difficulty of identifying AI dependencies is not entirely new. It builds on longstanding challenges in software supply chain visibility, where organizations have struggled to maintain accurate inventories of the components in their systems.
Chris Wysopal, founder and chief security evangelist of Veracode, tells CSO that the Anthropic situation highlights how those challenges are now extending into AI. “It’s a huge change for people selling software to the federal government,” he says, noting that companies are being asked to account for the models inside their products in ways they have not previously had to do.
Wysopal says that some form of bill of materials can help organizations determine whether a specific technology appears in their software, particularly when responding to customer or regulatory requirements. At the same time, he cautions that replacing models may not be trivial if applications have been built around specific capabilities, requiring adjustments to code, workflows, and testing processes.
AI-BOM or SBOM?
The question of how to achieve that visibility has sparked an active debate about whether existing software bill of materials (SBOM) frameworks are sufficient for AI, or whether organizations need a new approach.
Amy Chang, leader of AI threat intelligence and security researcher at Cisco Systems, argues that traditional SBOMs do not capture the full scope of AI systems. “AI systems include models, agents, prompts, and data,” she says. “If you only track packages, you’re missing how the system actually functions.”
Her view is that organizations need a more dynamic representation of how AI systems operate, including how models interact with data and other components, to understand risk and manage change effectively.
Allan Friedman, the “father” of SBOM and now technologist in residence at TPO group, offers a more measured perspective. He agrees that transparency is essential but cautions against assuming that visibility alone will solve the problem.
“Transparency will not solve all your problems,” he tells CSO, noting that organizations must integrate that information into broader risk management processes. “We still need a red team, and some of these basic security techniques to remind people that SBOM has never picked up my dry cleaning, not once,” he adds. “So thinking about how you take that transparency data and integrate it into your broader supply program is going to be important.”
NetRise’s Pace rejects the premise that AI requires its own new bill of materials category, arguing that a properly implemented SBOM should already capture AI-related components. In his view, the problem is not the absence of a new framework, but the incomplete adoption of existing ones. “AI-BOMs are stupid,” he says. “There’s no such thing as an AI-BOM. You have an SBOM, which identifies AI components. AI is software, last time I checked.”
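Whichever framing wins, the mechanical check is similar: walk a bill of materials and flag components tied to a vendor. A minimal sketch against a CycloneDX-style `components` array (a production check would also walk nested components and the dependency graph; the example BOM content is fabricated for illustration):

```python
"""Sketch: flag vendor components in a CycloneDX-style SBOM.

Assumes a top-level `components` array with `name`/`purl` fields, as in
CycloneDX documents; the example BOM below is fabricated.
"""
import json

def flag_sbom_components(sbom_json: str, vendor: str = "anthropic") -> list[dict]:
    """Return components whose name, purl, or description mentions the vendor."""
    bom = json.loads(sbom_json)
    flagged = []
    for comp in bom.get("components", []):
        haystack = " ".join(
            str(comp.get(k, "")) for k in ("name", "purl", "description")
        ).lower()
        if vendor in haystack:
            flagged.append(comp)
    return flagged

# Fabricated example document
example = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "requests",
         "purl": "pkg:pypi/requests@2.31.0"},
        {"type": "library", "name": "anthropic",
         "purl": "pkg:pypi/anthropic@0.34.0"},
        {"type": "machine-learning-model", "name": "anthropic-claude-3"},
    ],
})
```

Note that the model itself appears here as a `machine-learning-model` component alongside ordinary libraries — which is essentially Pace's point that an SBOM done properly already has a place for AI components.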
The disagreement reflects a deeper uncertainty about how to model AI supply chain risk at a time when organizations are being asked to act on it.
Removal is not the same as replacement
Even if organizations can identify where Anthropic technology is used, removing it is only part of the challenge. Replacement introduces its own set of complexities, particularly when applications have been designed around specific model behaviors.
Dependencies may be embedded deep within applications or introduced through third-party software, requiring coordination across vendors and development teams. In some cases, replacing a model may require reworking prompts, retraining systems, or revalidating outputs to ensure that functionality and performance are maintained.
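One way to contain that replacement cost going forward is to keep application code behind a thin provider interface, so that removing a vendor means swapping an adapter rather than rewriting every call site. A minimal sketch; the class and method names are illustrative, and the stand-in backend is hypothetical:

```python
"""Sketch: a thin provider interface so a model backend can be swapped.

Application code depends on the interface, not on any one vendor's SDK.
All names here are illustrative.
"""
from typing import Protocol

class ChatModel(Protocol):
    """Structural interface any model backend must satisfy."""
    def complete(self, prompt: str) -> str: ...

class InHouseModel:
    """Hypothetical stand-in replacement backend."""
    def complete(self, prompt: str) -> str:
        return f"[in-house model] response to: {prompt}"

def summarize_ticket(model: ChatModel, ticket_text: str) -> str:
    # Application logic is written against the interface, so swapping
    # the backend does not touch this function -- only its caller.
    return model.complete(f"Summarize this support ticket: {ticket_text}")
```

Even with such an abstraction, the article's caveat stands: prompts tuned to one model's behavior may still need reworking and revalidation after a swap.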
Anand Oswal, EVP at Palo Alto Networks, emphasizes that visibility is only one component of a broader security strategy. Organizations also need continuous discovery, testing, and runtime controls to manage AI risk as systems evolve.
“You need a full AI security solution,” he tells CSO, arguing that AI systems are dynamic, with models, data, and behaviors that change over time, making static inventories insufficient without ongoing monitoring and governance. “You want complete visibility into your AI applications, your AI agents, your AI tools, your plugins, the data they’re accessing, everything around that whole infrastructure of AI that is being used to build your applications or agents. Once you do that, that’s discovery. It’s a good thing. It’s a start.”
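The runtime-control side of that argument can start with something as simple as an egress check against banned API hosts. A minimal sketch; in practice this logic would live in a proxy or egress firewall rather than in application code, and the blocked hostname is the vendor's public API endpoint:

```python
"""Sketch: a runtime egress check that flags calls to a banned API host.

In a real deployment this belongs in a proxy or egress firewall;
the host list here is illustrative.
"""
from urllib.parse import urlparse

BLOCKED_HOSTS = {"api.anthropic.com"}

def egress_allowed(url: str) -> bool:
    """Return False if the request targets a blocked provider host
    (or any subdomain of one)."""
    host = (urlparse(url).hostname or "").lower()
    return not any(
        host == blocked or host.endswith("." + blocked)
        for blocked in BLOCKED_HOSTS
    )
```

A static allowlist only covers direct API traffic, which is why Oswal's argument pairs discovery with continuous monitoring: dependencies reached through third-party services never hit this check.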
A new category of supply chain risk
The Anthropic case represents a shift in how governments approach AI technologies, treating models and their associated ecosystems as supply chain components that can be restricted or removed.
For CISOs, the challenge is not simply responding to a single directive, but preparing for a future in which similar actions could be applied to other AI providers not only by the US government, but also by regulators and customers. That requires visibility into AI dependencies, clarity about how those dependencies are used, and a strategy for replacing them without disrupting critical systems.
As those expectations take shape, organizations are being asked to operate at a level of insight and control that many have not yet achieved. “Everyone is moving quickly to build on these systems without really understanding what’s inside them,” Friedman cautions.
Greater collaboration across the software and AI supply chain may eventually make that problem more manageable, he says, but for now the gap between what organizations are expected to know and what they can actually see remains wide.