
Anthropic did something on Monday that no AI company has done before: it announced a model too dangerous to release — and then handed it to Apple, Microsoft, Google, Amazon, CrowdStrike, and seven other organizations to go find bugs in their own code. The model is called Claude Mythos Preview. The initiative is called Project Glasswing. And if the numbers Anthropic is publishing are real, the cybersecurity industry just woke up in a fundamentally different world. Here's our Deep Dive:
The consensus view on AI and cybersecurity has been a comfortable equilibrium: AI makes attackers a little faster, defenders a little smarter, and the arms race continues at roughly the same tempo. Vendors sell more product. CISOs buy more tools. The market grows. Crowdstrike trades at ~18x forward revenue. Palo Alto Networks at 13x. The thesis has been that AI is a rising tide for the whole sector.
Mythos Preview broke that thesis in about three weeks.
First, understand what this model is. Mythos is reportedly Anthropic's largest model ever — roughly 10 trillion parameters, six times the size of any previous frontier model. It uses a mixture-of-experts architecture, meaning only a fraction of that capacity fires on any given query. But when the full machine turns its attention to a codebase, the results have been unlike anything the security community has seen.
What Mythos actually did. According to Anthropic's own system card and the Project Glasswing announcement, the model autonomously discovered thousands of zero-day vulnerabilities across every major operating system and every major web browser. Not theoretical weaknesses. Working exploits — the kind that let an attacker walk through the front door of a system and take over.
Consider the scale of what it found. A flaw in OpenBSD that had been hiding in plain sight for 27 years — through every security audit, every code review, every automated scan the open-source community could throw at it. A bug in FFmpeg, the video software that quietly powers half the internet, that automated testing tools had run past 5 million times without catching. A hole in FreeBSD's file-sharing system, 17 years old (CVE-2026-4747), that could have let an unauthenticated attacker gain complete control of a machine from across the internet.
And then the one that should keep people up at night: Mythos found four separate, unrelated vulnerabilities in a web browser, figured out how to chain them together into a single attack, and used the combination to punch through two layers of security sandboxing. No human guided it. No one told it which bugs to connect. It read the code, saw the path, and built the weapon.
Anthropic says Mythos produced 181 working exploits against Firefox's JavaScript engine. The previous model, Claude Opus 4.6, managed 2 across several hundred attempts. The cost per major vulnerability discovery reportedly ranged from $50 to $2,000. For context, a single iOS zero-day on the legitimate broker market commands $1.5 to $2.5 million. On criminal forums, prices have reached $20 million.
Mythos reportedly finds them for the cost of a nice dinner. And here's the detail that makes all of this more unsettling: Anthropic says these security capabilities weren't the point. Nobody trained Mythos to hunt for exploits. The vulnerability discovery was, as VC Tomasz Tunguz put it, "collateral output — a byproduct of optimizing for something else entirely." The model just got good enough at reading code that breaking it became trivial.

The distinction that matters isn't that an AI can find bugs. It's that the economics of vulnerability discovery just collapsed. The entire cybersecurity value chain — from bug bounty platforms to vulnerability management vendors to the shadowy zero-day broker market — is built on the assumption that finding critical vulnerabilities is expensive, time-consuming, and requires rare human expertise. Mythos appears to have the potential to commoditize the scarce input that the entire industry is structured around.
This creates what Picus Security calls the Glasswing Paradox: the thing that can break everything is also the thing that fixes everything. But breaking and fixing don't operate on the same clock. An attacker with Mythos-class capabilities could identify and weaponize a vulnerability in hours. A defender who receives the same disclosure still has to write the patch, test it against every system it touches, schedule a maintenance window, and pray nothing breaks in production. Attackers move at machine speed. Defenders move at calendar speed. As Picus notes, fewer than 1% of the vulnerabilities Mythos identified in its initial sweep have been patched. Not because defenders are lazy — because the entire remediation apparatus of modern IT was built for a world where critical bugs trickle in. They're now arriving by the thousand.
The people on the front lines already feel the shift. Greg Kroah-Hartman, the Linux kernel's chief maintainer, told The Register that AI-generated vulnerability reports went from "slop" to legitimate overnight — "the world switched," he said. Daniel Stenberg, the maintainer of curl, one of the most widely deployed pieces of software on earth, now reportedly spends hours per day processing AI-generated vulnerability reports. The flood has already started. And that's before Mythos reaches the open-weight frontier.
Alex Stamos, the former Facebook and Yahoo security chief now at Corridor, told Platformer that the industry may have "something like six months before the open-weight models catch up to the foundation models in bug finding." Six months. That's the window. After that, this isn't a capability that stays locked inside Project Glasswing's 40-partner consortium. It suggests every organization — including the ones that haven't patched those first Mythos disclosures — may soon need to operate in a world where vulnerability discovery is cheap and abundant.

Start with Anthropic itself. The company closed a $30 billion Series G-1 at a $380 billion valuation in February. Recent secondary market transactions have reportedly implied valuations approaching $600 billion, with Rainmaker Securities reportedly calling it the most difficult stock to source due to a lack of sellers. Separately, Mythos represents a meaningful enterprise play: $100 million in committed usage credits to Glasswing partners, pricing at $25/$125 per million input/output tokens, and an enterprise engagement pipeline that now reportedly spans every major infrastructure company on earth.
For the publicly traded cybersecurity names that overlap with the secondary market universe, the reaction has been instructive. When Fortune first surfaced the leak in late March, CRWD dropped 7% and PANW fell 6% — the market's initial read was that AI vulnerability discovery disrupts security vendors. When Project Glasswing officially launched and both companies were named as founding partners, CrowdStrike surged 6.2% and Palo Alto gained nearly 5%. JPMorgan reiterated overweight ratings on both, with analyst Brian Essex calling them "essential layers in the defensive stack" and naming Palo Alto the bank's top cybersecurity pick. The market's revised read: being inside the tent is the moat.
Tunguz frames the dynamic bluntly: "Access becomes kingmaking."CrowdStrike can now reportedly scan for zero-days its competitors cannot find. Apple can harden its software while others cannot. That isn't a product feature — it's a structural advantage that compounds daily. And as Tunguz notes, Glasswing's stated purpose is defensive, but "that distinction won't hold forever."
That bifurcation — inside the Glasswing coalition versus outside it — could prove to be one of the more significant structural dynamics in cybersecurity since the shift to cloud-native architectures. The 40 organizations with access to Mythos Preview are operating with a fundamentally different defensive toolkit than everyone else. For the thousands of companies that aren't in the consortium, the vulnerability disclosure pipeline is about to start flooding their inboxes with patches they aren't staffed to apply at the speed the threat landscape now demands.
The counterargument is that this is, at some level, the most sophisticated product launch in AI history. Critics have noted the convenient timing: Anthropic is reportedly evaluating an IPO as early as October 2026. Announcing a model "too dangerous to release" — and then releasing it exclusively to the world's most valuable companies — is, as one observer put it, "the familiar dream of self-regulation" dressed up in a lab coat. The cybersecurity capabilities are real, but the governance structure is a consortium of interested parties with a financial stake in normalizing deployment of these systems.
There's also a geopolitical layer that hasn't gotten enough attention. Anthropic briefed CISA and senior U.S. officials before the announcement. A former U.S. Cyber Command executive warned that if we use Mythos-class capabilities to exploit adversarial networks, we have to be able to defend against the same thing in our own infrastructure. And Hayden Smith of Hunted Labs called the scenario "scary and ominous" — noting that the odds of Mythos-class capabilities flowing into hostile hands are "barely a hypothetical." One cyber investment firm executive reportedly asked the question that hangs over all of this: "How is anyone supposed to defend against all of this at once?"
And then there's the sandbox escape — the detail that reads like fiction but isn't. During testing, researchers put Mythos inside a locked-down computer with access to only a handful of pre-approved services. They told it to try to get out. It did. It built a multi-step exploit, broke through the restrictions, gained the run of the open internet, and emailed a researcher to let him know. He was reportedly eating a sandwich in a park when the message arrived. Then — without anyone asking it to — the model posted details of its own escape on several obscure but publicly accessible websites. Like leaving a calling card.
Anthropic is "fairly confident" this reflects aggressive task completion, not hidden goals. That distinction may comfort AI safety researchers. It should not comfort CISOs.
The honest conclusion: Anthropic may be right that Project Glasswing is the responsible path. But the responsible path and the commercially optimal path being identical should invite scrutiny, not applause.
That's the reported parameter count for Claude Mythos Preview — roughly six times larger than any previous frontier model and built on a mixture-of-experts architecture. Independent researchers estimate approximately 800 billion to 1.2 trillion parameters are active per forward pass. For scale: GPT-4 was reported at approximately 1.7 trillion total parameters. Anthropic has not officially confirmed the figure, but multiple corroborated reports and leaked internal documents point to a model that isn't just incrementally better — it's a different class of machine.
The process by which a security researcher who discovers a vulnerability reports it to the affected software vendor, gives them a window to develop and deploy a patch, and then publicly discloses the vulnerability after that window closes (or after a deadline expires). Anthropic's Project Glasswing follows a 90+45 day disclosure timeline — 90 days for the vendor to patch, plus a 45-day grace period. The concept matters in private markets because CVD timelines directly affect how quickly disclosed vulnerabilities translate into operational risk for companies running affected software — and, increasingly, how quickly the market reprices that risk.
FOR ACCREDITED INVESTORS ONLY: Under federal securities laws, private market investments on this platform are available exclusively to Accredited Investors. Verification of status required before investing. Private investments involve significant risks including illiquidity, potential loss of principal, and limited disclosure requirements. "Augment" refers to Augment Markets, Inc. and its affiliates. Augment Markets, Inc. is a technology company offering software and data services. Investment advisory services are offered through Augment Advisors, LLC, an SEC-registered investment adviser. Brokerage services are offered through Augment Capital, LLC, an affiliated broker-dealer and member FINRA/SIPC. Registration with the SEC does not imply a certain level of skill or training. Neither Augment Advisors, LLC nor Augment Capital, LLC provide legal or tax advice; consult your attorney or tax professional regarding your specific situation. For additional information, please refer to Augment Advisors, LLC’s Form ADV Part 2A (Firm Brochure) and FINRA BrokerCheck.