
Project Glasswing: what company boards need to know about Claude Mythos

1. Introduction and Executive Summary

On 7 April 2026, Anthropic announced Claude Mythos Preview, a new AI model so capable at finding software security flaws that the company declined to release it publicly. Instead, Anthropic restricted access to a small group of partner organisations (including Microsoft, Google, Apple and Amazon) through a controlled programme called Project Glasswing. The announcement prompted the US Federal Reserve to brief major bank CEOs on the model's potential cyber risks, and drew the attention of the UK's AI Safety Institute (AISI), the Bank of England and EU regulators.

The core claim is striking: in seven weeks of testing, Mythos autonomously discovered more than 2,000 previously unknown ("zero-day") vulnerabilities across every major operating system and web browser, some undetected for decades. Other frontier models, such as OpenAI's GPT-5.4-Cyber and Google's Big Sleep, have comparable capabilities, but Mythos represents a step change in speed, scale and autonomy. For boards, the practical question is whether existing patch cycles, supplier risk frameworks and incident-response plans remain defensible against AI-compressed attack timelines.

This alert addresses, in summary: (i) what Mythos can and cannot do; (ii) why much of the capability lies in the surrounding workflow rather than the model; (iii) what this development signals for the threat landscape; (iv) the key UK, EU and international obligations engaged; and (v) some practical steps boards and in-house counsel should now consider.

2. Separating Myth from Reality

What Mythos actually does

Mythos is not a purpose-built hacking tool but a general-purpose AI model that happens to be exceptionally good at finding and exploiting software security flaws. Designed as an advanced software engineering assistant capable of understanding vast and complex codebases, it combines four attributes that set it apart: it can analyse an entire codebase at once; learn from its own mistakes and retry without human guidance; interact directly with debugging tools and live systems; and autonomously form and test theories about where flaws may exist.

In testing, Mythos found critical flaws in every major operating system and browser, including a 27-year-old bug in OpenBSD, widely regarded as one of the most security-hardened operating systems in existence. It also demonstrated the ability to chain several smaller flaws into a single, high-impact attack, to reverse-engineer software whose source code is not publicly available, and to escape software "sandboxes" (isolated environments designed to contain malicious activity).

What is overstated

It is important, however, to maintain perspective. The cybersecurity community is divided on the true severity of the threat. As Ciaran Martin, former chief executive of the UK's National Cyber Security Centre, observed: "It's a big deal, but it's unlikely to prove to be the end of the world". Jeff Williams, chief technology officer of cybersecurity firm Contrast Security, has been similarly sceptical about whether Anthropic’s restricted release will hold the line, observing that “Anthropic is holding back access to this particular model, but I don’t think they’re really going to be able to defend that line”. There is also an element of commercial incentive in the dramatic framing of the announcement: as Mr Martin has observed, it is rare for any organisation “to suffer commercial detriment by predicting calamity”. OpenAI’s Sam Altman, for instance, has characterised elements of Anthropic’s promotion of Mythos as “fear-based marketing”.

3. The System, not the Model

One of the most important, and least reported, findings since Mythos was announced is that much of what makes it powerful lies not in the raw intelligence of the model, but in the orchestration layer, scaffolding and automated workflows around it. That view is shared by independent voices including the AI cybersecurity firm AISLE, the security technologist Bruce Schneier, and the AISI itself.

AISLE tested the proposition directly: it took the vulnerabilities Anthropic had showcased, isolated the relevant code and fed it to a range of small, inexpensive, freely available models. Eight out of eight models correctly identified Mythos’s flagship FreeBSD vulnerability and assessed it as critical, and a small open-source model independently identified the core logic of the 27-year-old OpenBSD bug. Bruce Schneier reached a similar conclusion, noting that AISLE had replicated the vulnerabilities “using older, cheaper public models” and characterising the surrounding publicity as “very much a PR play by Anthropic”.

The implications for defenders are profound. The good news is that the capability that matters most (the automated workflow, targeting logic, verification and triage) is not locked behind a single proprietary model and can be built using a range of available AI tools. The bad news is that the same is true for attackers, who can assemble comparable pipelines using freely available, open-source models, potentially beyond the reach of any safety controls. As AISLE has put it, a thousand adequate automated detectives searching broadly will find more vulnerabilities than one brilliant model restricted to a narrow search space. Mr Schneier highlights an important caveat: at present, “finding for the purposes of fixing is easier for an AI than finding plus exploiting”, giving defenders a temporary advantage that is likely to narrow as more capable models reach the public. The AISI has noted a related limitation: smaller models can be highly sensitive at finding vulnerabilities but markedly less reliable at distinguishing those already patched, a triage gap that places a premium on the surrounding system, not just the model.
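
To make the point about orchestration concrete, the sketch below shows, in outline only, what a minimal scan-and-triage loop of the kind AISLE describes might look like. It assumes a locally hosted open-weight model exposed through an OpenAI-compatible HTTP endpoint and a directory of C source files to review; the endpoint address, model name and prompt are illustrative placeholders rather than a description of any tooling referred to in this alert.

```python
# Minimal sketch of an automated scan-and-triage workflow (illustrative only).
# Assumptions: a locally hosted open-weight model served through an
# OpenAI-compatible endpoint at http://localhost:8000, and a directory of
# C source files to review. Adjust both to your own environment.
from pathlib import Path

import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical local server
MODEL = "local-open-model"                               # placeholder model name

SYSTEM_PROMPT = (
    "You are a security reviewer. Identify memory-safety or logic flaws in the "
    "following code. Reply 'NONE' if nothing is found; otherwise give a one-line "
    "summary and a severity of LOW, MEDIUM, HIGH or CRITICAL."
)


def review_snippet(code: str) -> str:
    """Send one code snippet to the model and return its raw verdict."""
    response = requests.post(
        ENDPOINT,
        json={
            "model": MODEL,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": code},
            ],
            "temperature": 0,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


def triage(source_dir: str) -> list[tuple[str, str]]:
    """Run every .c file through the model and keep only non-trivial findings."""
    findings = []
    for path in sorted(Path(source_dir).glob("**/*.c")):
        verdict = review_snippet(path.read_text(errors="ignore")[:8000])
        if "NONE" not in verdict.upper():
            findings.append((str(path), verdict))
    return findings


if __name__ == "__main__":
    for filename, verdict in triage("./target-codebase"):
        print(f"{filename}: {verdict}")
```

The value of such a pipeline lies less in the model call itself than in the targeting, verification and triage logic wrapped around it, which is precisely the point made above.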

4. What Mythos signals for the future of Cybersecurity

The threat landscape is changing - rapidly

Anthropic’s frontier red team lead, the in-house team responsible for adversarial testing of the company’s most advanced models, has indicated that comparable capability could become broadly available within six to twenty-four months, including to hostile nation-states and criminal enterprises. Other frontier models already have some comparable capabilities, the cost and expertise required to launch sophisticated attacks will continue to fall, and 87 per cent of global organisations report having experienced an AI-powered cyberattack in the past year (SoSafe, Cybercrime Trends 2025).

In practical terms, the time between discovering a software flaw and turning it into a working attack, historically days or weeks of skilled human effort, can now be compressed to hours or minutes. AI-generated phishing emails (fraudulent messages designed to harvest credentials or trigger malicious clicks) now achieve click-through rates of around 54 per cent, against 12 per cent for traditional campaigns, a 4.5-fold uplift in attacker success. The barrier to entry is also falling: a person with limited technical background could potentially use an AI model to identify and exploit serious software vulnerabilities.

What this means for defenders

Conventional defensive tools are under strain. Signature-based detection struggles against novel exploits it has never seen; traditional vulnerability scoring and prioritisation processes were not designed for the volume of critical findings AI-assisted discovery can produce. And software patching cycles were already struggling to keep pace before AI accelerated the attacker's side of the equation.

This does not mean that defence is futile. As both the AISI and Bain & Company have emphasised, organisations with well-hardened defences and strong cybersecurity fundamentals remain significantly more resilient. The challenge is that most organisations have not yet built those foundations to the standard now required.

The acute risk to operational technology environments

Organisations with significant operational technology (OT) environments, the systems that control physical processes such as power generation, water treatment, manufacturing lines and transport networks, face a particularly acute challenge. Many OT systems are decades old, designed for reliability rather than security, and cannot easily be patched, either because patches do not exist or because applying them would risk disrupting critical operations. The progressive convergence of IT and OT networks, driven by efficiency and real-time data, has connected once-isolated systems to corporate networks and the internet, materially expanding the attack surface. The exposure is borne out by recent industry data: SANS reports that around 22 per cent of OT organisations experienced a cybersecurity incident in 2025, of which 40 per cent caused operational disruption, whilst Resilience, a US-headquartered cyber-insurance and risk-management firm, records a 61 per cent year-on-year rise in ransomware attacks on manufacturers, against 46 per cent across all sectors.

Given Mythos's demonstrated ability to find vulnerabilities autonomously in aged and complex codebases, OT environments are especially exposed. Where patching is not feasible, the operating model must shift towards compensating controls: strict network segmentation isolating OT from corporate IT; OT-specific anomaly detection; rigorous restrictions on internet-facing exposure; and tested response and recovery plans for severe but plausible cyber scenarios. Under the NIS2 Directive in the EU and the UK's NIS Regulations 2018, operators of essential services in energy, water, transport and healthcare are already subject to cybersecurity risk-management and incident-reporting obligations that extend to OT environments, and those obligations will be broadened in the UK by the Cyber Security and Resilience (Network and Information Systems) Bill (the “Cyber Security & Resilience Bill”), which was introduced to Parliament on 12 November 2025 with Royal Assent expected later in 2026.
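
To illustrate one of the compensating controls mentioned above, the sketch below shows a deliberately simple form of OT anomaly detection: a rolling baseline of a telemetry reading, with values that deviate sharply from recent behaviour flagged for investigation. The sensor values, window size and threshold are hypothetical and chosen for illustration; production OT monitoring relies on dedicated tooling and engineering judgement.

```python
# Illustrative sketch of baseline-deviation anomaly detection for an OT
# telemetry stream. Sensor readings, window size and threshold are hypothetical.
from collections import deque
from statistics import mean, stdev


class AnomalyDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.readings: deque[float] = deque(maxlen=window)  # rolling baseline window
        self.threshold = threshold                           # deviations tolerated

    def check(self, value: float) -> bool:
        """Return True if the new reading deviates materially from the baseline."""
        anomalous = False
        if len(self.readings) >= 10:  # require a minimal baseline before judging
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.readings.append(value)
        return anomalous


# Example: flag pressure readings that deviate sharply from recent behaviour.
detector = AnomalyDetector(window=60, threshold=3.0)
for reading in [4.1, 4.0, 4.2, 4.1, 4.0, 4.1, 4.2, 4.1, 4.0, 4.1, 9.7]:
    if detector.check(reading):
        print(f"Anomalous reading flagged for investigation: {reading}")
```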

5. Legal and Regulatory Implications

The following summarises the key obligations and decision points for in-house counsel.

UK obligations

Data breach notification (UK GDPR / DPA 2018)

An AI-enabled attack compromising personal data triggers the standard 72-hour ICO notification, and individual notification where there is a high risk to data subjects. The critical question is whether existing security measures, in particular patch cycles and incident response playbooks, still satisfy Article 32 UK GDPR’s requirement for "appropriate technical and organisational measures" given the accelerated threat. If a regulator later determines that faster patching was reasonably achievable, the adequacy defence weakens.

Computer Misuse Act 1990 (CMA)

The CMA creates four principal offences: unauthorised access; unauthorised access with intent to commit further offences; unauthorised acts impairing the operation of a computer; and unauthorised acts causing, or creating a significant risk of, serious damage. The most serious of these, acts causing damage to human welfare or national security, carry a maximum sentence of life imprisonment. Two practical points arise. First, the Act’s intent requirements create attribution difficulties where an AI model (rather than a human) is the proximate actor, making prosecution of AI-assisted attacks harder. Secondly, the CMA still offers no statutory defence for legitimate security research, although the UK government has committed to introducing one; until it does, organisations should ensure that explicit written authorisation covers all AI-assisted defensive testing.

NIS Regulations / Cyber Security and Resilience Bill

The NIS Regulations 2018 impose risk-management and incident-reporting duties on operators of essential services and digital service providers. The Cyber Security & Resilience Bill, currently before Parliament (introduced 12 November 2025; Royal Assent expected later in 2026), will broaden their scope, introduce two-stage incident reporting and strengthen supply chain oversight, with phased implementation through secondary legislation likely by 2028. In-house teams should confirm whether their organisation (or any key suppliers) falls within scope and prepare for tighter reporting deadlines.

EU and international obligations

NIS2 Directive

NIS2 covers essential and important entities across 18 sectors (including energy, transport, banking, healthcare and digital infrastructure) and imposes a three-stage reporting obligation: 24-hour early warning, 72-hour full notification and one-month final report. Fines reach €10 million or 2 per cent of global turnover for essential entities, and €7 million or 1.4 per cent for important entities. Members of the management body can face personal accountability for cybersecurity governance failures, including potential temporary suspension from managerial roles in essential entities. In-house counsel should ensure board-level awareness of this exposure and verify that incident response plans can meet the 24-hour deadline.

EU AI Act

The AI Act (Regulation (EU) 2024/1689) does not ban offensive cybersecurity models but engages two relevant tiers. Providers and deployers of high-risk AI systems (the Annex III categories, including biometrics, critical infrastructure, employment, essential services and law enforcement, and AI safety components in regulated products) face significant risk-management, documentation, human-oversight, conformity-assessment, registration and incident-reporting obligations. Mythos itself, as a frontier general-purpose AI (“GPAI”) model, will almost certainly fall within the GPAI-with-systemic-risk category, requiring Anthropic to evaluate and mitigate systemic risks, report serious incidents and protect model weights. The phased timetable now bites: GPAI rules have applied since August 2025 and the high-risk Annex III obligations are due to apply from 2 August 2026, although the Commission’s November 2025 Digital Omnibus proposes deferring that deadline by up to 16 months. Until that proposal is enacted, 2 August 2026 should be treated as the operative deadline. In-house counsel should classify in-scope AI systems against Annex III, verify GPAI compliance by AI security vendors and stand up deployer obligations now rather than wait for any extension.

US position

No single federal cybersecurity statute applies; obligations arise from sector-specific regimes (financial services, healthcare, critical infrastructure). The Federal Reserve's decision to brief bank CEOs on Mythos signals regulatory concern, but it has not yet translated into new binding rules.

Cross-border notification complexity

A single AI-enabled breach may simultaneously trigger UK GDPR (72 hours), NIS2 (24 hours), EU GDPR (72 hours) and US state requirements, each with different thresholds and content expectations. Incident response plans should therefore be pre-mapped to these parallel timelines so that the legal team is not scrambling during a fast-moving incident.
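
By way of illustration, pre-mapping can be as simple as recording each applicable regime's notification window and computing the resulting deadlines from the moment an incident is confirmed, so that the earliest obligation drives the response. The regimes and clock lengths in the sketch below are simplified assumptions for illustration only; actual triggers and scope differ by regime and must be assessed case by case.

```python
# Illustrative sketch of a pre-mapped breach-notification timeline.
# The regimes and clock lengths below are simplified assumptions.
from datetime import datetime, timedelta

# Regime -> notification window from detection/awareness (simplified).
NOTIFICATION_WINDOWS = {
    "NIS2 early warning": timedelta(hours=24),
    "UK GDPR (ICO)": timedelta(hours=72),
    "EU GDPR (lead supervisory authority)": timedelta(hours=72),
    "NIS2 full notification": timedelta(hours=72),
}


def notification_deadlines(detected_at: datetime) -> list[tuple[str, datetime]]:
    """Return applicable deadlines sorted so the earliest obligation comes first."""
    deadlines = [(regime, detected_at + window)
                 for regime, window in NOTIFICATION_WINDOWS.items()]
    return sorted(deadlines, key=lambda item: item[1])


if __name__ == "__main__":
    detected = datetime(2026, 4, 10, 9, 30)  # example detection time
    for regime, deadline in notification_deadlines(detected):
        print(f"{deadline:%Y-%m-%d %H:%M}  {regime}")
```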

Financial services - sectoral regulators

Financial-services clients are subject to a developed operational resilience regime that amplifies the implications of an AI-accelerated cyber threat. Under the FCA’s PS21/3 and SYSC 15A, the PRA’s SS1/21 and (in the EU) the Digital Operational Resilience Act (“DORA”), in-scope firms must identify their important business services, set impact tolerances and demonstrate through severe but plausible scenario testing that they can remain within those tolerances; DORA adds aligned ICT risk-management, incident-reporting and third-party oversight obligations, including the new EU oversight framework for critical ICT third-party providers. The FCA’s March 2026 review emphasises that resilience is a continuing obligation, with boards expected actively to challenge their firm’s exposure to severe cyber attacks and third-party concentration risk; firms should also monitor the FCA’s and PRA’s consultations on operational incident and material third-party reporting, which are likely to compress timelines further.

Commercial and IP action points

Supply chain contracts

NIS2 and the Cyber Security & Resilience Bill both require organisations to assess and manage supplier cybersecurity risk. In-house teams should audit existing supply agreements for: the adequacy of cybersecurity obligations and audit rights; breach notification timelines that align with the organisation's own regulatory deadlines; and liability allocation for losses caused by AI-enabled supply chain compromise.

Liability and insurance

Review limitation of liability clauses and cyber insurance policies against the specific scenario of AI-accelerated zero-day exploitation. Consider whether "reasonable security" standards in existing contracts remain defensible given publicly available information about the evolving threat, and whether existing force majeure or exclusion clauses contemplate this category of attack.

IP ownership of AI-generated code

Where AI tools generate exploit code (offensively or defensively), ownership is uncertain. Under section 9(3) of the Copyright, Designs and Patents Act 1988, copyright in a computer-generated work belongs to the person who made the arrangements for its creation, but the application of this provision to highly autonomous AI outputs is untested and is the subject of active UK Intellectual Property Office consultation. Ensure that contracts with AI security vendors expressly address ownership and permitted use of AI-generated outputs.

6. Key Takeaways and Actions

In light of the developments outlined in this alert, organisations should consider the following steps:

  • Assess cybersecurity foundations
    The most effective defence against AI-enabled attacks is not a new generation of AI-specific security tools but properly built cybersecurity fundamentals: robust access controls, network segmentation, zero-trust architecture, automated patching and anomaly detection. Independent testing by the AISI confirmed that Mythos cannot reliably execute autonomous attacks against well-hardened defences.
  • Review and shorten patch management cycles
    The compression of the time between vulnerability disclosure and exploit availability means that existing patch management timelines may no longer be adequate. Organisations should drive down the time-to-deploy for security updates, enable auto-updates wherever possible, and treat dependency updates that carry CVE (Common Vulnerabilities and Exposures) fixes as urgent rather than routine; an illustrative dependency-checking sketch appears after this list.
  • Audit supply chain cyber resilience
    Assess the cybersecurity and operational-resilience posture of key suppliers, vendors and third-party software providers, with particular attention to their capacity to detect, contain and recover from AI-enabled threats. Embed AI-specific requirements into supplier due diligence, contractual cyber and reporting obligations and ongoing monitoring, and map material third-party arrangements to the organisation’s own important business services and impact tolerances so that supplier disruption can be identified, escalated and recovered within tolerable timeframes.
  • Review contractual and insurance provisions
    Ensure that limitation of liability clauses, indemnification provisions and cyber insurance policies are adequate to address the risk of AI-enabled attacks, including zero-day exploitation and compressed attack timelines.
  • Prepare for multi-jurisdictional breach notification
    Map breach notification obligations across all applicable jurisdictions (UK GDPR, NIS Regulations, EU GDPR, NIS2 and sector-specific regimes) and ensure that incident response plans can meet the shortest applicable deadline, currently 24 hours under NIS2.
  • Deploy AI as part of the defensive stack
    This is perhaps the single most important tactical shift available to organisations now. Even frontier models far less powerful than Mythos are already effective at identifying vulnerabilities in an organisation’s own systems before attackers find them, and, as AISLE's research shows, much of that capability can be replicated using inexpensive, open-source models wrapped in well-designed automated workflows. Bain & Company recommends a dedicated "AI threat war room" in which organisations use the same AI tools attackers will use in order to probe their own systems first. The logic is straightforward: if AI compresses the attacker's timeline from weeks to hours, manual-only defence cannot keep pace.
  • Brief the board
    Cybersecurity must be treated as a critical board-level issue. Under NIS2, members of the management body can face personal accountability for cybersecurity failures, and UK directors’ general duties under section 172 of the Companies Act 2006 require active consideration of the long-term risks (including cyber risks) facing the company. Boards should understand the implications of AI-enabled threats, the adequacy of current defences and the organisation's regulatory exposure.
  • Monitor legislative developments
    The legal landscape is evolving rapidly. In the UK, the reform of the Computer Misuse Act and the passage of the Cyber Security & Resilience Bill, currently in the later stages of its parliamentary passage with Royal Assent expected later in 2026, should be tracked closely. At EU level, the phased application of the AI Act, the proposed amendments to NIS2 and the proposed amendments to the EU Cybersecurity Act will all shape the compliance obligations of organisations operating in or serving the European market.
  • Invest in AI capability within the legal function
    The same AI advances driving the threat landscape can be deployed defensively by in-house legal teams, for contract review, compliance monitoring and regulatory horizon scanning. Teams that treat AI as a force multiplier for human judgement, rather than a replacement for it, will be best placed to keep pace with the speed of change.
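
As flagged under "Review and shorten patch management cycles" above, the following minimal sketch shows one way to check a pinned software dependency against the public OSV.dev vulnerability database. The package names and versions are examples only, and most organisations will rely on established software-composition-analysis tooling rather than hand-rolled scripts.

```python
# Minimal sketch: query the public OSV.dev database for known vulnerabilities
# affecting a pinned dependency. Package names and versions are examples only.
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    """Return OSV advisories recorded against this exact package version."""
    response = requests.post(
        OSV_QUERY_URL,
        json={"package": {"name": name, "ecosystem": ecosystem}, "version": version},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("vulns", [])


if __name__ == "__main__":
    # Example dependency pins to check; in practice, read these from a lock file.
    pins = [("jinja2", "2.4.1"), ("requests", "2.31.0")]
    for name, version in pins:
        advisories = known_vulnerabilities(name, version)
        ids = ", ".join(v["id"] for v in advisories) or "none recorded"
        print(f"{name}=={version}: {ids}")
```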

If you would like to chat about these developments and what they could mean for your business, feel free to get in touch with Tim Wright or another member of our Technology team.
