The Trump-Xi summit opening in Beijing this week carries an agenda item unlike any in the history of US-China diplomacy: what to do about artificial intelligence that can autonomously find and exploit vulnerabilities in the world’s most critical software — and what happens when both superpowers have it.
Anthropic’s Mythos Preview, released last month to a limited group of security partners, has demonstrated the ability to discover zero-day vulnerabilities in every major operating system and browser, sometimes finding bugs that had survived decades of human review and millions of automated tests.
Anthropic has framed Mythos as a watershed moment, launching Project Glasswing and committing $100 million in usage credits to help defenders secure critical infrastructure before similar capabilities become widely available.
Meanwhile, the strategic buffer that Washington has long assumed it held over Beijing is narrowing faster than most policymakers have been willing to admit.
Chinese entities have sought access to Mythos and have so far been denied, but the model is unlikely to stay contained. It has already seen unauthorized access: a small group of users reportedly gained entry on the same day Anthropic announced its limited release.
The gap that wasn’t
For years, the US-China AI competition was framed as an asymmetric contest — American labs in front, Chinese labs years behind, export controls and chip restrictions buying the United States time.
However, Stanford’s 2026 AI Index, published last month, found that US and Chinese models have traded the global performance lead multiple times since early 2025. As of March 2026, Anthropic’s Claude Opus 4.6 leads China’s best model by just 2.7 percentage points on a key benchmark — down from a gap of 17 to 31 percentage points in mid-2023.
China leads the world in AI patent filings, research citations, and industrial robot installations. While the Stanford index shows US private investment towering over Chinese private investment by a factor of 23, Chinese state guidance funds have deployed an estimated $912 billion across strategic industries over two decades, a figure that private-investment comparisons entirely miss.
The AI talent picture is, if anything, more alarming. The number of AI researchers moving to the United States has dropped 89% since 2017, with 80% of that decline occurring in just the past year, accelerated in part by the Trump administration’s H-1B restrictions.
The real threat isn’t who you think
The US-China framing — useful for budgets and political narratives — obscures the more consequential risk from systems like Mythos.
“The US and China may stand more to gain from controlling these systems together than simply fighting over them,” Gal Tal-Hochberg, co-founder and CEO of Beacon Security and former CTO of cybersecurity venture firm Team8, tells CSO. “When it becomes very cheap to do damage, people who are not anybody can suddenly do damage.”
The danger is not simply that Beijing develops models with advanced cyber capabilities. The larger concern is what happens when those capabilities — or something close to them — diffuse into criminal ecosystems, ransomware operations, or loosely affiliated proxy groups that no government controls.
“The cost and complexity of attacks are dropping,” Tal-Hochberg says. “Defenders who could previously get away with being less sophisticated may no longer be able to.”
A nuclear analogy that has limits but holds
Senior US officials, speaking ahead of the summit, said Washington is prepared to “explore channels of deconfliction” with Beijing on AI, using language that evokes Cold War nuclear logic.
What those channels might look like is taking shape. Both sides are weighing a recurring dialogue focused on establishing guardrails in three areas: AI models behaving unexpectedly, autonomous military systems, and nonstate actors using powerful open-source tools.
Both AI and nuclear weapons involve technologies of potentially catastrophic offensive capability. Both involve two rival powers who share an interest in preventing the worst outcomes even as they compete. And in both cases, the US has to decide whether dialogue with an adversary is a concession or a necessity.
But the nuclear analogy breaks down in one critical way: Nuclear weapons were economically irrelevant outside defense ecosystems. AI is the opposite. It is simultaneously the most significant general-purpose economic technology of this era and a potentially destabilizing offensive capability.
As Tal-Hochberg puts it, “AI is both nuclear power and nuclear weapons at the same time. Governments want the economic benefits while also trying to limit the offensive risks.”
That dual nature makes agreed limits enormously harder to negotiate, verify, or enforce. The Council on Foreign Relations has argued that Beijing's actual interest in AI safety dialogue is primarily instrumental: an opportunity to close the capability gap rather than to constrain capabilities.
The sole prior US-China AI safety dialogue, held in 2024, illustrated the asymmetry: The US sent technical experts to outline shared risks; China sent diplomats to complain about chip export controls.
What Washington still hasn’t decided
The more uncomfortable truth is that the United States has not yet resolved what Mythos-class systems actually are. The Trump administration spent much of the past year resisting broad AI regulation, arguing that oversight would blunt competitive advantage.
Now it faces growing internal pressure to develop testing requirements for frontier models with advanced cyber capabilities and is reportedly preparing executive action on AI safety, a significant pivot from its earlier posture.
Anthropic’s decision to release Mythos through Project Glasswing — giving defenders access before offensive capabilities become broadly available — represents one model for managing this problem.
And the access decisions themselves are already becoming geopolitical. The EU, notably, has still not been granted access to Mythos, even as OpenAI has moved to provide European cybersecurity teams access to its own cyber model. These are unilateral judgments by private companies, not a policy framework.
The window won’t stay open for long
More capable AI can accelerate the development of still more capable AI. The country — or company, or actor — with the strongest models today has structural advantages in building tomorrow’s models.
What Washington and Beijing discuss this week will not resolve the fundamental tension between competitive AI development and the risks that development creates. But establishing even narrow deconfliction channels, such as a hotline logic for AI crises, shared norms around the most dangerous applications, and transparency mechanisms that let each side verify the other isn't crossing agreed lines, would represent meaningful progress over the current state, which is no framework at all.