On March 31, 2026, Anthropic did something no AI company wants to do: it accidentally handed the source code of its most commercially important product to the public internet. The full source code of Claude Code was exposed through a 59.8 MB JavaScript source map file bundled in the public npm package @anthropic-ai/claude-code, version 2.1.88.
Within hours, the code was downloaded, mirrored, forked, and analysed by thousands of developers worldwide.
The leak was not a hack. Nobody broke in. Anthropic left the door open by accident, and the entire developer community walked through it.
How a Single Debug File Exposed Everything
The mechanism behind the leak is both technically specific and embarrassingly simple. The map file included in Claude Code’s npm package contained a reference to the unobfuscated TypeScript source. Map files exist to connect bundled code back to its original source, and this reference pointed to a zip archive hosted on Anthropic’s Cloudflare R2 storage bucket, which researchers were able to download and decompress.
Source map files are debugging tools. They exist to make minified, compressed production code readable during development, and they are never supposed to ship in a public release. Anthropic acquired the Bun JavaScript runtime at the end of 2025, and Claude Code is built on top of it. A known Bun bug, reported on March 11, 2026, describes source maps being included in production builds even though the documentation says they should not be. The bug had been open for 20 days when the leak happened. Anthropic’s own acquired toolchain contributed to exposing Anthropic’s own product.
Software engineer Gabriel Anhaia, in a deep dive into the exposed code, noted that the incident should remind even the best developers to check their build pipelines. “A single misconfigured .npmignore or files field in package.json can expose everything,” Anhaia wrote.
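What that check looks like in practice is mundane. The sketch below is not Anthropic’s pipeline; it assumes a conventional layout in which the bundle is built into dist/ and the files field in package.json is meant to whitelist only the bundled JavaScript. A short prepublish script can then refuse to ship a package that still contains map files or raw TypeScript:

```ts
// check-publish.ts: a hedged sketch, not Anthropic's actual tooling.
// Assumes published artifacts live in ./dist (an assumption, not from the leak).
import { readdirSync } from "node:fs";
import { join } from "node:path";

// Walk dist/ and collect anything that should never reach the npm registry:
// source maps and uncompiled TypeScript (declaration files are allowed).
function findDebugArtifacts(dir: string): string[] {
  const hits: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) {
      hits.push(...findDebugArtifacts(path));
    } else if (
      entry.name.endsWith(".map") ||
      (entry.name.endsWith(".ts") && !entry.name.endsWith(".d.ts"))
    ) {
      hits.push(path);
    }
  }
  return hits;
}

const offenders = findDebugArtifacts("./dist");
if (offenders.length > 0) {
  console.error("Refusing to publish, debug artifacts found:", offenders);
  process.exit(1);
}
console.log("dist/ contains no source maps or raw TypeScript.");
```

Wired into package.json as a prepublishOnly script, a check like this runs on every npm publish, which is exactly the kind of guard a misconfigured files field would otherwise slip past. Pairing it with an explicit sourcemap setting in the bundler configuration covers both halves of the failure.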
The leaked file contained approximately 513,000 lines of unobfuscated TypeScript across 1,906 files, revealing the complete client-side agent harness. Snapshots of Claude Code’s source code were quickly backed up in a GitHub repository that was forked more than 41,500 times, disseminating it to the masses. A post on X sharing a link to the leaked code had more than 29 million views early on Wednesday, and a rewritten version of the source code quickly became the fastest-downloaded repository in GitHub’s history.
What Anthropic Said
Anthropic’s response was swift, measured, and carefully worded. An Anthropic spokesperson said: “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”
Anthropic also issued copyright takedown requests to contain the spread, an effort that had limited practical effect given how rapidly the code had been mirrored, forked, and rewritten in other programming languages.
What Was Actually Inside
The source code exposed significantly more than the product’s architecture. Developers who dug through the 513,000 lines found a detailed picture of Anthropic’s internal product roadmap, unreleased features, internal model codenames, and performance metrics that were never meant to be public.
KAIROS: the always-on background agent. The leak reveals KAIROS, a feature flag mentioned over 150 times in the source. KAIROS represents a fundamental shift in user experience: an autonomous daemon mode. While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. In this mode, the agent performs “memory consolidation” while the user is idle, merging disparate observations, removing logical contradictions, and converting vague insights into absolute facts.
Internal model codenames and performance data. The source code confirms that Capybara is the internal codename for a Claude 4.6 variant, with Fennec mapping to Opus 4.6 and the unreleased Numbat still in testing. Internal comments reveal that Anthropic is already iterating on Capybara v8, yet the model still faces significant hurdles: a 29 to 30% false-claims rate in v8, an actual regression from the 16.7% rate seen in v4. For every competitor building its own coding agent, these benchmarks are invaluable intelligence.
Anti-distillation mechanisms. A flag called ANTI_DISTILLATION_CC, when enabled, sends anti_distillation: ['fake_tools'] in API requests. This tells the server to inject decoy tool definitions into the system prompt. The idea: if a competitor is recording Claude Code’s API traffic to train their own model, the fake tool definitions corrupt that training data. A speculative sketch of the client-side shape appears after this list of findings.
An undercover mode. The file undercover.ts implements a mode that strips all traces of Anthropic internals when Claude Code is used in non-internal repos. It instructs the model to never mention internal codenames like “Capybara” or “Tengu,” internal Slack channels, repo names, or the phrase “Claude Code” itself.
A Tamagotchi. Claude generates a custom name and personality description for a virtual companion on first hatch. There are sprite animations and a floating heart effect. The planned rollout window in the source code is April 1 to 7, 2026. Someone at Anthropic was clearly having fun.
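The leak covers only the client, so the server-side decoy-injection logic behind the anti-distillation flag is not visible. Purely as an illustration of how that flag might be wired on the client, here is a speculative reconstruction: the flag name and the anti_distillation field come from the leaked code as reported, while the request type, the model string, and the environment-variable toggle are invented for the sketch.

```ts
// Speculative sketch only: the request shape and helper are illustrative,
// not Anthropic's actual API client.
interface AgentRequest {
  model: string;
  messages: { role: "user" | "assistant"; content: string }[];
  // Per the leaked code, setting this asks the server to inject decoy
  // tool definitions into the system prompt.
  anti_distillation?: string[];
}

// Hypothetical toggle standing in for the ANTI_DISTILLATION_CC feature flag.
const ANTI_DISTILLATION_CC = process.env.ANTI_DISTILLATION_CC === "1";

function buildRequest(messages: AgentRequest["messages"]): AgentRequest {
  const request: AgentRequest = { model: "example-model", messages };
  if (ANTI_DISTILLATION_CC) {
    // Anyone logging this traffic to distill a competing model now trains
    // on tool definitions that do not exist.
    request.anti_distillation = ["fake_tools"];
  }
  return request;
}
```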
The Security Threat That Followed
The leak itself was damaging. What came after made it worse. The most immediate danger was a separate supply-chain attack on the axios npm package that occurred hours before the leak. Users who installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC may have inadvertently pulled in a malicious version of axios containing a Remote Access Trojan.
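Because the malicious axios build travelled as an ordinary transitive dependency, the practical first step for anyone in that window is to enumerate which axios versions their install actually resolved and compare them against whatever bad versions the advisories for the incident name. Below is a minimal sketch for an npm v2/v3 lockfile; Bun and pnpm lockfiles need their own parsing, and the file path is an assumption about project layout.

```ts
// list-axios.ts: enumerate every axios entry a project's lockfile resolved,
// so the versions can be checked against the supply-chain advisory.
// Assumes an npm lockfileVersion 2 or 3 ("packages" map keyed by path).
import { readFileSync } from "node:fs";

type LockEntry = { version?: string; resolved?: string };

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
const packages: Record<string, LockEntry> = lock.packages ?? {};

for (const [path, info] of Object.entries(packages)) {
  if (path === "node_modules/axios" || path.endsWith("/node_modules/axios")) {
    console.log(`${path} -> ${info.version ?? "?"} (${info.resolved ?? "no resolved URL"})`);
  }
}
```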
Threat actors moved quickly to weaponise the public attention around the leak. Zscaler ThreatLabz found that threat actors were seeding trojanised Claude Code versions with backdoors, data stealers, and cryptocurrency miners, including a Claude Code leak repository that tricks users into running a Rust-based dropper that deploys Vidar Stealer and GhostSocks, a tool used to proxy network traffic.
The exposed hook and permission logic makes silent device takeover more reliable, and pre-existing vulnerabilities are now far easier to weaponise. Threat actors with full source visibility can craft malicious repositories that trigger arbitrary shell execution or credential theft the moment a victim clones or opens an untrusted repo.
Zscaler’s guidance was direct: do not download, fork, build, or run code from any GitHub repository claiming to be the leaked Claude Code. Verify every source against Anthropic’s official channels only.
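Following that advice does not have to mean eyeballing URLs. One way to operationalise it, sketched below, is to ask the npm registry itself what the current release is and where its tarball lives, then treat anything that does not match as untrusted. The endpoint is the registry’s standard route for a package’s latest manifest; nothing here relies on Anthropic-specific infrastructure.

```ts
// verify-source.ts: query the official npm registry for the latest
// Claude Code release and print the fields worth comparing before install.
const res = await fetch(
  "https://registry.npmjs.org/@anthropic-ai/claude-code/latest"
);
if (!res.ok) throw new Error(`registry returned ${res.status}`);

const meta = await res.json();
console.log("version:  ", meta.version);
// The tarball should live on registry.npmjs.org, never on a random mirror.
console.log("tarball:  ", meta.dist.tarball);
// The integrity hash can be compared with the one recorded in your lockfile.
console.log("integrity:", meta.dist.integrity);
```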
This Was Not the First Time
This is the second time Anthropic has had a data leak in recent weeks. Fortune previously reported on a separate breach and noted that the company was storing thousands of internal files on publicly accessible systems, including a draft of a blog post that referred to an upcoming model known as “Mythos” and “Capybara.”
Outside developers have already reverse-engineered Claude Code, prompting a takedown notice from Anthropic. What is new is the roadmap: a clear picture of how Anthropic is building toward longer autonomous tasks, deeper memory, and multi-agent collaboration.
The timing compounds the embarrassment. Anthropic announced in early March 2026 that Claude Code was written almost entirely by AI. Twenty-four days later, the source code it generated was publicly accessible to anyone who knew where to look.
The Lesson That Goes Beyond Anthropic
The leak won’t sink Anthropic, but it gives every competitor a free engineering education in how to build a production-grade AI coding agent and which tools to focus on. More broadly, it exposes a gap that runs across the AI industry: companies spend enormous resources securing their models while leaving the surrounding infrastructure, build pipelines, and release processes to standard software development practices that were never designed for this level of scrutiny.
Publishing map files is generally frowned upon, as they are meant for debugging obfuscated or bundled code and are not necessary for production. That principle was not new on March 31, 2026. It was just ignored, and the cost of ignoring it, at the scale Anthropic operates, turned a routine npm release into one of the most widely discussed source code exposures in recent AI history.





