Mozilla used access to Anthropic’s Claude Mythos Preview and other large language models to automatically generate, triage, and help fix an unprecedented wave of latent security bugs in Firefox, then published a curated sample of the findings. Improved model capabilities, combined with refined prompting and orchestration techniques, turned noisy LLM outputs into high-quality, actionable reports across many browser subsystems: JIT/WebAssembly issues, IPC race conditions enabling use-after-frees and sandbox escapes, serialization/deserialization hazards, DNS/HTTPS parsing edge cases, event-loop reentrancy problems, and long-standing DOM and XSLT flaws, including a 20-year-old XSLT bug and a 15-year-old <legend> element flaw.

Firefox’s defense-in-depth blocked many exploitation attempts, but the program scaled remediation dramatically: Mozilla’s monthly fixes rose from roughly 20–30 in 2025 to 423 in April 2026. Mozilla withheld full details initially to protect users, and it urges other projects to adopt similar defender-oriented, AI-assisted auditing practices while balancing coordinated disclosure and patching timelines.
Behind the Scenes: Hardening Firefox with Claude Mythos Preview