Assured Reacts 16.04.2026

Assured Reacts: UK Government Writes Open Letter to UK Businesses on AI Risk in Wake of Anthropic’s Mythos Announcement

Mythos won’t just find vulnerabilities faster; it will change how quickly organisations need to respond

On 7 April, Anthropic unveiled its new frontier AI model, Claude Mythos Preview, alongside a cybersecurity initiative called Project Glasswing. It’s one of the clearest indicators yet that AI is materially reshaping cybersecurity, bringing both defensive advantages and significant new risks.

Governments and major institutions have taken the development seriously, prompting high-level discussions. In the UK, the Government has published an open letter to businesses on AI threats.

Mythos is an AI model with advanced coding, reasoning and cybersecurity capabilities that can autonomously identify and exploit previously unknown (zero-day) vulnerabilities. It has already uncovered thousands of high-severity vulnerabilities in every major operating system and browser.

Crucially, Mythos can also chain exploits and generate working attacks, not just detect flaws. Because of these capabilities, Anthropic has not released the model publicly, instead restricting access to a small group of partners.

‘Project Glasswing’ is a cross-industry coalition (including AWS, Apple, Google, Microsoft, Palo Alto Networks and others) designed to use Mythos defensively, finding and patching vulnerabilities before they can be exploited.

What does this announcement mean?

  • AI is now capable of automating vulnerability discovery and exploitation at scale, potentially outpacing human teams.
  • This could make cyber attacks faster, cheaper and more sophisticated, while also giving defenders new tools to respond.
  • Anthropic itself frames this as a race to secure critical software before similar models become widely available.

How has industry reacted?

Many cybersecurity experts have expressed concern about misuse, with some warning of systemic risk if such tools proliferate. Others argue that the claims may be overstated or partly marketing-driven.

Assured’s CISO, Nick Harris, reacts

“The open letter offers sensible advice, but it’s nothing new. For now, that’s fine. Models like Claude Mythos and ChatGPT 5.4 remain tightly controlled, limited to trusted partners, and not available for offensive use. The government guidance, while relevant for today, says very little about the future.

What’s worth paying attention to is the trajectory. The UK’s AI Safety Institute (AISI) notes that frontier model capabilities are now doubling every four months (down from every eight). That’s not incremental progress; it’s a notable acceleration. If Mythos is the benchmark today, it won’t be for long.

For now, these capabilities are held by a cohort of the world’s largest tech providers. The near-term reality is likely to be controlled deployment, with new features embedded into platforms like Microsoft Defender or CrowdStrike, wrapped in guardrails and positioned as defensive tooling.

But the economics are already shifting. Running these models at scale isn’t cheap. AISI testing suggests workloads of this kind can cost tens of thousands of dollars, with estimates of around $20,000 to uncover a single critical vulnerability like the OpenBSD bug. That keeps them out of reach for hobbyists and most bug bounty hunters, at least for now. It doesn’t, however, keep them out of reach for well-funded threat actors. State-backed groups and financially successful ransomware operations have both the capital and the incentive. And when you consider that these models can outperform humans in speed, scale, and persistence, with no fatigue and no downtime, it’s easy to see where this could go.

AI-driven vulnerability discovery becomes a service

The real risk emerges if and when access broadens, whether through leaks, replication, or cheaper alternatives. As we’ve already seen with models like DeepSeek, the cost barrier will fall. At that point, commoditisation follows: AI-driven vulnerability discovery becomes a near real-time service, exploits become products, and the cybercrime supply chain becomes more efficient.

So far, however, most of the focus has been on vulnerability discovery. AISI has already acknowledged that Mythos can autonomously attack small, weakly defended environments once it has achieved initial access. Today, that comes with caveats like controlled lab conditions and limited defensive friction. But those constraints won’t last. Models will learn to evade detection, just as attackers always have. Defenders won’t stand still either. EDR and detection tooling will evolve to identify AI-driven attack patterns. The arms race continues, just at a decidedly faster pace.

Traditional application security tooling and vulnerability management may struggle to keep pace. Even automated penetration testing could be disrupted. If AI can continuously discover and exploit weaknesses, the model itself becomes the testing function. That brings us to the real issue: response time. The window between vulnerability discovery and active exploitation is collapsing, so resilience depends on one thing: speed. Specifically, speed to patch, to deploy and to respond. This means near real-time patching, automated wherever possible, but controlled well enough to avoid self-inflicted outages.”
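One common way to make patching “automated wherever possible, but controlled well enough to avoid self-inflicted outages” concrete is a canary-style staged rollout: patch a small slice of the fleet first, verify health, and only then continue. The sketch below is illustrative, not a reference to any tooling mentioned above; the host names, `apply_patch` and `health_check` functions are hypothetical stand-ins for real deployment and verification steps.

```python
def apply_patch(host):
    # Stand-in for the real patch deployment step (e.g. a config
    # management or package manager call). Returns True on success.
    return True

def health_check(host):
    # Stand-in for post-patch verification: service responding,
    # no error-rate spike, no crash loops.
    return True

def staged_rollout(hosts, canary_fraction=0.05):
    """Patch a small canary group first; halt before fleet-wide
    impact if any canary host fails its health check."""
    n_canary = max(1, int(len(hosts) * canary_fraction))
    canary, rest = hosts[:n_canary], hosts[n_canary:]

    for host in canary:
        apply_patch(host)
        if not health_check(host):
            # A failed canary stops the rollout early, limiting the
            # blast radius of a bad patch to the canary group.
            return {"status": "rolled_back", "patched": canary}

    for host in rest:
        apply_patch(host)
    return {"status": "complete", "patched": hosts}

result = staged_rollout([f"host-{i:03d}" for i in range(100)])
print(result["status"])  # prints "complete"
```

The design choice here is the trade-off the quote describes: automation gives the speed, while the canary gate and early halt provide the control that prevents a bad patch becoming a self-inflicted outage.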
