Two recent high-profile events concerning Anthropic’s Claude AI underscore a little-discussed risk at the heart of the enterprise’s rush to capitalize on leading AI capabilities.
The first incident involved a China-based extraction campaign against Anthropic’s intellectual property. The second was the Trump administration’s banning of Claude for federal use after the company resisted US demands to alter its guardrails.
To be sure, Claude isn’t the problem, and Anthropic isn’t the villain. The problem is that frontier AI models now attract two very different kinds of pressure simultaneously: illegal extraction by foreign actors who want to study and replicate their behavior, and lawful demands from domestic customers who want to reshape that behavior for their own missions.
Both forces operate within their own incentives. Both are real. And both create conditions that CISOs must factor into any decision to deploy these systems inside their enterprise.
Neutrality of frontier AI no longer exists
Frontier AI models no longer operate in a neutral space. They sit inside an environment where foreign actors are collecting information about and against them at scale, and where major domestic customers are attempting to steer their behavior for mission needs.
Neither dynamic makes Anthropic a villain, and neither makes Claude a compromised asset. What it does mean is that the geopolitical insulation these systems once enjoyed is gone. The environment around them has become part of the risk surface, and CISOs now have to account for pressures acting on the model long before it ever reaches their enterprise.
China’s extraction campaign: A targeting operation, not a curiosity
Anthropic’s disclosure that three China‑based AI companies (DeepSeek, Moonshot AI, and MiniMax) ran more than 16 million interactions through roughly 24,000 fraudulent accounts is not a story about model misuse. It is a story about targeting. These campaigns went straight at Claude’s most sensitive capabilities: agentic reasoning, tool use, and coding. That is not random sampling; that is structured collection.
I’ve spent enough time in the world of targeting to recognize this pattern immediately, and you don’t need my level of experience to see it. When an adversary can observe a system at scale, they can map its strengths, seams, and predictable behaviors. China now has that behavioral telemetry for Claude, and they will use it to tune their own systems and to shape offensive operations against environments where Claude‑like models are deployed.
And Claude is not the only system in China’s targeting sights. The same actors have used similar high‑volume extraction methods against other frontier models, including Google’s Gemini and OpenAI’s ChatGPT. They generate enough interaction data to understand how these systems think and where they can be pressured.
Anthropic’s callout does the entire community a service by raising the caution flag high enough for everyone to see. The implication is straightforward: Frontier models are now intelligence surfaces.
US government pressure: Direct, immediate, and operationally significant
The pressure on the other side of Claude came from the US government, and it was direct.
Senior defense officials made clear they wanted the ability to direct Claude toward mission uses that would require altering or removing the guardrails Anthropic had put in place around autonomous weapons and broad‑scale surveillance. Anthropic CEO Dario Amodei responded with two concerns that matter for anyone responsible for risk: AI systems do not have the human fail-safe of refusing an improper order, and using AI to process the full stream of public conversation raises constitutional and civil‑liberties questions that the company was not willing to ignore. Those points explain why Anthropic declined.
The government’s reaction was swift. It announced that Claude would be removed from all government systems with a six‑month phase‑out and labeled Anthropic a supply‑chain risk.
The company’s own statement highlighted the tension: Claude was simultaneously described as a potential security liability and as a system important enough to warrant extraordinary measures to reshape its behavior.
For CISOs, the takeaway is not about who is right. It is that a frontier model already embedded in classified networks, intelligence workflows, and operational planning can be subjected to external pressure that would materially alter its behavior for every downstream customer.
Two pressures, one structural exposure
China’s extraction campaign and the US government’s direct pressure on Anthropic came from opposite directions and for entirely different reasons, but the operational effect is the same: both forces act on the model from outside the enterprise. Neither pressure says anything about the quality of the model or the integrity of the vendor. What they show is that frontier AI has entered a phase where external actors are working hard to influence how these systems operate.
For CISOs, this is the point that matters. A model can be profiled, studied, or pressured long before it reaches your environment, and those upstream forces can shape how it performs once it is inside your ecosystem.
The risk is that any frontier model operating at this level of capability will draw the same attention and the same attempts to steer its behavior. The environment around these systems is now contested space, and that exposure travels with the model wherever it is deployed.
AI vendors’ response
Once the government announced its plan to remove Claude from federal systems, other vendors moved quickly to occupy the space. OpenAI was first out of the gate, publicizing a new arrangement to bring its model onto classified networks. Sam Altman later added a measured comment in a CNBC interview, noting his discomfort with heavy‑handed pressure on AI companies while still positioning OpenAI as a ready alternative. It was a clear signal: The opportunity was open, and OpenAI intended to take it.
xAI followed with its own approval for classified deployment, with Grok slated for initial rollout in early 2026. Elon Musk framed Anthropic in adversarial terms, but the rhetoric is secondary to the operational reality: The government wanted additional options, and the vendor ecosystem delivered them without hesitation.
For CISOs, the lesson is straightforward: When one supplier declines to adjust a model to meet a major customer’s expectations, another supplier will step forward immediately. The pressure doesn’t dissipate. The pressure shifts to the next model in line. That dynamic is now part of the operating environment for any enterprise relying on frontier AI.
The new operating reality
Frontier AI now sits inside an environment shaped by forces the enterprise does not control. Vendors are making decisions under those external pressures, and the effects travel downstream. None of this means the models are broken or untrustworthy. It means they are operating inside a landscape where external actors have leverage, intent, and visibility.
For CISOs, the adjustment is to treat these systems as high‑value dependencies exposed to upstream influence. The model you deploy is not just the artifact you receive; it is the product of the pressures acting on the vendor and the attention the model attracts once it demonstrates capability.
The task is to build enough visibility and monitoring to understand when those forces begin to show up in your own environment.