Unauthorized Group Gains Access to Anthropic’s Mythos AI

Anthropic is investigating reports that an unauthorized group gained access to its restricted Mythos AI cybersecurity tool just days after its limited release in April 2026. 

“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson said.

The tool was shared with a small group of enterprise partners, including Apple and Goldman Sachs, and was not intended for public use. Reports indicated that a handful of users accessed Mythos through a contractor-linked system and have been using it since launch, raising concerns given the tool’s ability to identify vulnerabilities and simulate cyberattacks.

Access linked to a third-party environment

According to several reports, Anthropic is investigating claims that a private online group accessed Mythos through a third-party vendor system. The company said it has found no evidence that the activity extended beyond that environment or affected its internal systems.

The Guardian noted that a handful of users gained access on the same day Mythos was introduced to select partners. 

TechCrunch reported that the group used a range of methods to gain entry, including relying on the “access” of a person, interviewed by Bloomberg, who works for a third-party contractor supporting Anthropic.

Members of the group are part of a private Discord community that seeks out unreleased AI models. Bloomberg reported that they have been using Mythos since gaining access and provided screenshots and a live demonstration to verify their claims.

A high-risk AI tool under scrutiny

Mythos is part of Anthropic’s Project Glasswing initiative, which restricts access to a limited group of enterprise partners. The company has emphasized the model’s dual-use nature. While it can help organizations identify vulnerabilities, it could also allow attackers to exploit them. 

Regulators have raised concerns about the model’s potential misuse, even as the UK’s AI Security Institute has vetted Mythos and described it as a step forward in cyber capability, according to The Guardian.

In testing, Mythos completed a 32-step simulated cyberattack over several attempts, a task that would typically take human professionals days.

UK AI minister Kanishka Narayan said businesses “should be worried” about the model’s ability to uncover vulnerabilities that attackers could exploit.

Vendor risk draws global attention

The reported access highlights ongoing challenges in managing third-party risk, especially as AI systems become more powerful and widely deployed. Even when core systems remain secret, vendor environments can introduce exposure points. 

Financial Review highlighted that Anthropic said it had no evidence that the incident extended beyond the vendor system. “AI labs commonly use third-party contractors for tasks such as model testing, although it was not clear which vendor was involved in the incident,” the publication added. 

Global regulators are also monitoring the situation. The Reserve Bank of Australia said it is engaging with regulators and government agencies to assess the implications for financial system resilience.

Anthropic’s investigation is ongoing, and the full scope of the access remains unclear. The outcome may influence how AI companies handle controlled releases of high-risk models.

Also read: NSA reportedly uses Anthropic’s Mythos AI despite a “supply chain risk” designation.

The post Unauthorized Group Gains Access to Anthropic’s Mythos AI appeared first on eWEEK.
