Lila Prescott · Apr 24, 2026 · 5 min read

Unauthorized Group Accessed Anthropic's Unreleased Claude Mythos AI


An unknown group has been accessing Claude Mythos, the AI model Anthropic says is too dangerous to release publicly, without authorization through a third-party contractor environment, the company confirmed Tuesday. Anthropic is now investigating the apparent security breach, which a Bloomberg report says has been ongoing since early April.

Anthropic Confirms the Investigation

Anthropic issued a statement to Bloomberg confirming the investigation: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The confirmation came after Bloomberg reporters viewed a live demonstration of the unauthorized access and examined screenshots provided by a member of the group responsible.


Claude Mythos has been described by Anthropic as its most capable and most dangerous AI system to date — a model the company decided not to publicly release due to safety concerns. That decision has drawn significant attention from governments and technology researchers around the world who have pressed Anthropic for more transparency about the system's capabilities.

The breach is a significant embarrassment for Anthropic, which has positioned itself as one of the most safety-focused AI developers in the industry. The company has publicly argued that responsible development requires restricting access to the most powerful AI systems, a policy now undermined by the apparent vulnerability in its contractor infrastructure.

How the Group Got In

According to the Bloomberg report, the breach traces back to a Discord group dedicated to tracking unreleased AI models. Members used bots to monitor GitHub repositories for clues about models not yet made public. A data security incident at Mercor, an AI training startup, proved to be a critical piece of the puzzle.
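Bloomberg's description suggests this kind of monitoring requires nothing more exotic than periodically polling public APIs for keywords. As a rough illustration only, here is a minimal Python sketch of how such a bot could work; the endpoint choice, search term, and polling interval are assumptions made for the example, not details from the report.

```python
# Illustrative sketch of a GitHub-monitoring bot of the kind Bloomberg
# describes. The search term "claude-mythos" and the 60-second polling
# cadence are assumptions, not details from the report.
import time
import requests

SEARCH_URL = "https://api.github.com/search/repositories"
QUERY = "claude-mythos"  # hypothetical keyword; the group's actual terms are unknown
seen = set()             # repositories already reported

def poll_once():
    """Run one search and print any repositories not seen before."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": QUERY, "sort": "updated", "order": "desc"},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    for repo in resp.json().get("items", []):
        name = repo["full_name"]
        if name not in seen:
            seen.add(name)
            print(f"New match: {name} ({repo['html_url']})")

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(60)  # stay within GitHub's unauthenticated rate limits
```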

An anonymous member of the group told Bloomberg they work for an Anthropic third-party contractor. By combining information from the Mercor breach with their own contractor-level access, and using what Bloomberg described as "commonly used internet sleuthing tools often employed by cybersecurity researchers," the group was able to determine the location of the Claude Mythos Preview environment.

The group gained access to the environment on April 7, the same day Anthropic announced Project Glasswing, its initiative to help organizations identify cybersecurity vulnerabilities. The timing underscored a stark irony: the company was publicly launching a security-focused program on the same day an unauthorized group was quietly accessing its most restricted model.

The Group's Claims and What They Did With the Access


The anonymous source who spoke to Bloomberg was careful to characterize the group's intentions as benign. They said the group is "interested in playing around with new models, not wreaking havoc with them." Members have reportedly used their access primarily to test Claude Mythos's capabilities and conduct what they described as informal experiments.

Whether to take that claim at face value remains an open question. The group has had unrestricted access to one of the world's most powerful AI systems for over two weeks, with no apparent oversight from Anthropic. Cybersecurity experts and AI researchers have long warned that self-reported good intentions from unauthorized actors are no substitute for real accountability.

Claude Mythos and Why the Stakes Are High

Anthropic's decision to withhold Claude Mythos from public release was unusual in an industry that typically races to ship new models. The company has said the system scored at levels that concerned its own safety researchers, though specific capabilities have not been disclosed. Multiple foreign governments and cybersecurity agencies have reportedly sought briefings on the model.

The White House has also been monitoring developments around Mythos. In recent weeks, the Trump administration signaled a warming relationship with Anthropic and suggested the model could have applications across federal agencies. That prospect raises additional concerns: if proper security controls cannot be maintained even in a restricted preview environment, what would a broader rollout of the technology look like?

The breach highlights a fundamental tension in how AI companies manage advanced models before public deployment. Contractor networks expand the attack surface for bad actors, and even well-intentioned sleuthing creates risks if the information gained is later misused or falls into other hands. Anthropic has not said whether it has revoked the group's access or identified all members involved.

What Comes Next


Anthropic's statement confirmed an active investigation but offered no details about remediation steps or timeline. The company has not disclosed whether law enforcement has been contacted or whether it plans to pursue legal action against those responsible for the unauthorized access.

The incident adds a new dimension to the ongoing debate over how much transparency AI developers owe the public when it comes to their most powerful systems. Among the broader disruptions reshaping the technology industry in 2026, the breach stands out for what it reveals about the gap between a company's stated commitment to AI safety and the practical realities of securing the most sensitive models from unauthorized access.

