NSA Reportedly Uses Anthropic’s Mythos AI Despite ‘Supply Chain Risk’ Designation

The National Security Agency (NSA) is reportedly using Anthropic’s newest and most powerful AI model, Mythos Preview, even though the company is currently designated a “supply chain risk” by the NSA’s parent organization, the Department of Defense (DoD).

The leaked report indicates that military use of Anthropic’s models is actually expanding because of Mythos’ capabilities. The NSA is using the model to scan its own environments for potential security vulnerabilities.

As first reported by Axios, sources claim that the NSA was among 40 organizations granted restricted access to Mythos. This is despite Anthropic gearing up to fight the DoD in court over the supply chain designation, and the DoD claiming that use of Anthropic’s tools threatens national security.

It would not be the first report of the DoD blatantly flouting its own supply-chain designation. In preparation for the war in Iran, the department reportedly used Anthropic’s AI model to process large volumes of data, summarize documents, and organize logistical data.

The Trump administration may have changed its stance on Anthropic, at least internally, after an early briefing on Mythos’ capabilities. At a time when AI model sophistication appears to be accelerating across tasks such as software vulnerability detection, software engineering, and data analysis, having access to the most advanced models is critical.

The Mythos model and its potential impact

Around 40 organizations gained access to Mythos, forming the Project Glasswing partnership to use these AI models to identify and fix vulnerabilities before they can be exploited. Alongside several governments, major infrastructure companies such as Amazon, Microsoft, Apple, and Nvidia were granted access, along with a select group of financial institutions.

Early testing and reports suggest that the Mythos model is a step up from previous systems in terms of vulnerability detection and exploit identification, posing a high risk to legacy systems used by financial institutions.

In an announcement on Anthropic’s latest update to its Opus model, the company listed a set of benchmarks that included Mythos. In these, Mythos outperformed Opus and the latest OpenAI and Google Gemini models in agentic coding, multidisciplinary reasoning, and visual reasoning, indicating a broad leap in capabilities.

Whether this holds up remains to be seen, and it is unclear if Anthropic will ever launch Mythos in its current form. In the announcement of Opus 4.7, the company said it had toned down the model’s cyber capabilities and strengthened guardrails. OpenAI has also reportedly restricted GPT-5.4-Cyber to a small group of organizations, citing similar concerns around its cyber capabilities.

Limiting access to these AI models to a select group may shield companies like Anthropic from potential litigation, but it could also draw criticism for providing government departments with highly advanced systems. If independent experts in AI safety and security cannot access these models, their true effectiveness and the risks they pose to broader internet users remain unclear.

The UK AI Security Institute has reportedly gained access, but wider distribution to independent safety and security bodies may be needed to ensure ongoing scrutiny.

Also read: Anthropic’s latest Claude 4.7 release offers more context on how quickly the company’s most advanced AI systems are evolving.

The post NSA Reportedly Uses Anthropic’s Mythos AI Despite ‘Supply Chain Risk’ Designation appeared first on eWEEK.
