Undercover mode, decoy tools, and a 3,167-line function: inside Claude Code's leaked source

On March 31, a single .map file shipped inside an npm package and exposed the complete internals of Claude Code. The Hacker News thread hit 2,060 points. Anthropic filed DMCA takedowns against 8,100+ GitHub repos. And I spent most of the afternoon reading TypeScript I wasn’t supposed to see.

I use Claude Code every day. I built Claudoscope because I wanted to understand what it was actually doing in my terminal. So when the source dropped, I went through it. Some of it confirmed things I’d suspected. Some of it genuinely surprised me.

Key Takeaways

  • A JavaScript source map in Claude Code v2.1.88 exposed ~1,700 TypeScript source files (alex000kim, 2026)
  • Unreleased features include KAIROS autonomous mode, anti-distillation decoy tools, and “undercover mode” that hides AI authorship
  • Anthropic’s DMCA takedown hit 8,100+ repos, many containing no leaked code
  • A clean-room rewrite called Claw Code gained 146,000 GitHub stars in under 48 hours

What happened

Security researcher Chaofan Shou disclosed on X that Anthropic had shipped a JavaScript source map file inside Claude Code version 2.1.88 on npm. Source maps are debugging artifacts. They contain the original, readable TypeScript source before minification. They’re not supposed to ship to production. This one did.

Early speculation blamed a known Bun bug (oven-sh/bun#28001) where bun serve sometimes exposes source maps in production. But that bug affects web apps hosted by Bun, not packages bundled with Bun and run locally. Claude Code uses Bun as a bundler and local runtime, not as a web server. Jared Sumner, Bun’s creator and now an Anthropic employee, confirmed Claude Code doesn’t use bun serve, ruling this out. His comment was, as far as anyone can tell, the only public response from an Anthropic employee about the leak. The actual cause of the source map shipping in the npm package remains unexplained.

About 1,700 source files were exposed, spread across utils (564 files), components (389), commands (189), tools (184), services (130), hooks (104), ink (96), and bridge (31) directories. The .map file sat on the npm CDN for anyone to download. When Anthropic responded, they deprecated the package version rather than unpublishing it, so the file stayed somewhat accessible even after the response.

The HN thread generated 1,013 comments. Two follow-up analysis posts scored 1,354 and 1,078 points. People were interested.

What was inside the code?

35+ tools across six categories, 73+ slash commands, and over 200 server-side feature gates (ccunpacked.dev, 2026). The community built a visual guide mapping out an 11-step agent loop from keypress to response.

The main print.ts file is 5,594 lines long. Inside it, a single function spans 3,167 lines at 12 levels of nesting (alex000kim, 2026). Not great.

There’s an operational bug affecting 1,279 sessions that hit 50+ consecutive failures, wasting roughly 250,000 API calls per day globally. HN commenters said it was fixable with three lines.

The tool taxonomy is more interesting than the code quality issues. File operations, bash execution, web browsing, agent orchestration, task management, cron jobs, worktree isolation. What looks like a coding assistant in the terminal is actually a full agent framework. Daemon mode. Unix domain socket communication between sessions. Remote control via mobile and browser.

I’ve been watching Claude Code’s behavior through Claudoscope session logs for months. The leaked architecture confirms patterns I’d noticed in the wild: tool calls cascading through orchestration layers, sessions spawning sub-agents, loops where it burns through tokens retrying failed operations over and over. Reading the source was like finally seeing the schematic for a machine I’d only heard running.

The features nobody was supposed to see

The most discussed findings weren’t about code quality. They were about where Anthropic is heading.

KAIROS is a persistent autonomous agent mode. It runs on periodic <tick> prompts, maintains daily append-only logs, subscribes to GitHub webhooks, and spawns background daemon workers. The source states it “becomes more autonomous when terminal unfocused.” It includes a /dream skill and five-minute cron refreshes. Claude Code that doesn’t wait for you to type. That’s what this is.

Undercover mode drew the sharpest reaction. The file undercover.ts suppresses all signs of AI authorship when contributing to public or open-source repos. The instructions are blunt: “NEVER include the phrase ‘Claude Code’ or any mention that you are an AI” and remove “Co-Authored-By lines or any other attribution.” It only runs for Anthropic employees (USER_TYPE === 'ant'). The code says: “There is NO force-OFF.”

I keep coming back to this one. A company that’s built its identity on AI safety and transparency had a mode specifically designed to hide AI involvement in open-source contributions. The file also prevents mention of internal model codenames like “Capybara” and “Tengu,” which suggests unreleased models Anthropic hasn’t publicly acknowledged.

Anti-distillation sends decoy tool definitions to poison training data if competitors scrape API traffic. A secondary mechanism uses server-side text summarization with cryptographic signatures between tool calls to obscure reasoning chains. As multiple HN commenters pointed out, the strategic value of this system “evaporated the moment the .map file hit the CDN.”

Other exposed systems: native client attestation (DRM-like cryptographic verification of legitimate Claude Code binaries), frustration detection via regex (pattern-matching profanity like “wtf” and “dumbass” instead of using the LLM itself, which is kind of funny), and Buddy, a virtual terminal pet that turned out to be the 2026 April Fools’ feature.

The DMCA overreaction

Anthropic’s response to the leak may end up being the bigger story. On March 31 they filed DMCA takedown notices targeting an entire fork network of 8,100+ repositories on GitHub. The notice said: “The entire repository is infringing.”

Many of those repos had nothing to do with the leak. One developer noted on HN that their fork “had not been modified since May” and “did not contain a copy of the leaked code.” Others called it “misguided” and “ridiculous.” I mean, yeah.

The legal questions get weird fast. If Claude Code was partly written by Claude itself (Anthropic says they use their own tools internally), does the AI-generated portion qualify for copyright protection? One commenter raised a sharper point: undercover.ts explicitly hides AI authorship, which could undermine Anthropic’s own copyright claims. False DMCA claims constitute perjury.

Anthropic executives later said the mass takedowns were accidental and retracted most of the notices (TechCrunch, 2026). But by then the Streisand effect had done its work. Every takedown drew more attention to the code they were trying to hide.

What are the actual security risks?

No user data was exposed. But the leak did expose systems Anthropic relies on to protect its product.

System exposedRiskSeverity
Anti-distillation decoy toolsAnyone scraping API traffic can now filter for fakesHigh
Native client attestationCryptographic hash mechanism publicly documentedHigh
Security header feature flagsRemote disabling of security headers revealedHigh
Unreleased product roadmapKAIROS, UltraPlan, Coordinator Mode visible to competitorsMedium-High
Internal model codenames”Capybara,” “Tengu” disclosedMedium
Operational bugs250K wasted API calls/day, trivially fixableMedium

The anti-distillation system is the clearest loss. Its entire value depended on competitors not knowing it existed.

This connects to something I’ve written about before. When I found my database password sitting in a Claude Code session file, the issue wasn’t that Claude Code was doing something malicious. The issue was that it operates with deep filesystem access and stores everything in unencrypted JSONL files that nobody checks. The source leak confirms what I suspected: there’s limited internal safeguarding around what gets stored and transmitted.

Claw Code: 146K stars in 48 hours

Within hours of the leak, a developer ported Claude Code’s core architecture to Python and Rust from scratch. Claw Code hit 146,000 GitHub stars and 101,000 forks in under 48 hours.

It’s a clean-room rewrite, not a fork of the leaked code. The repo disclaims any affiliation with Anthropic and says the exposed snapshot “is no longer part of the tracked repository state.” The developer was later featured in a Wall Street Journal article as a power user who consumed “25 billion tokens” of AI coding tools per year.

The project includes an interactive CLI, plugin system, MCP orchestration, streaming API support, and LSP integration. Rust (92.9%), Python (7.1%).

We’ve seen this before. When Meta’s LLaMA model weights leaked in 2023, they chased takedowns for a while, then gave up and went open. The community built derivatives no matter what legal said. 146K stars on Claw Code tells you what developers actually want. Whether Anthropic decides to offer an open alternative is almost beside the point now.

The bigger picture

This didn’t happen in isolation. It capped a rough month for Anthropic:

  • Feb 16: Pentagon threatened Anthropic with punitive action
  • Mar 5: Pentagon formally labeled Anthropic a “supply chain risk” (WSJ, 2026)
  • Mar 9: Anthropic sued the Pentagon (Axios, 2026)
  • Mar 26: Federal judge blocked the Pentagon’s effort (CNN, 2026)
  • Mar 31: Source code leaked via npm. DMCA takedowns hit 8,100+ repos
  • Apr 1: TechCrunch runs “Anthropic is having a month”

Anthropic built its brand on responsible development and safety-first engineering. Then a source map shipped in an npm package and nobody caught it. The DMCA response hit thousands of uninvolved developers. And undercover.ts was hiding AI authorship while the company publicly advocated for transparency.

I still use Claude Code. I don’t think it’s a bad product. But the gap between the safety messaging and the operational reality is now documented in 1,700 TypeScript files. Anyone can read them.

What to do now

If you use Claude Code, there’s nothing you need to patch or update. The leak was Anthropic’s source code, not your data.

What’s worth paying attention to is how Anthropic responds. As of this writing, there’s been no official statement on their newsroom, blog, or developer channels. The only Anthropic employee who commented publicly was Jared Sumner, and only to clarify the Bun bug wasn’t the cause. Whether they address undercover mode, the DMCA overreach, or the anti-distillation system will say a lot about how they handle things going forward.

And if you’re eyeing Claw Code as an alternative, know what you’re getting into. It’s a clean-room rewrite with different internals, not a fork.

Or maybe this is the push to try something else entirely. ForgeCode currently tops TermBench 2.0 and has been getting a lot of attention. I haven’t switched yet, but I’d be lying if I said I wasn’t curious.

Frequently asked questions

What exactly was leaked in the Claude Code source code?

The full TypeScript source, exposed via a JavaScript source map in npm package v2.1.88. It included 35+ tools, 73+ slash commands, 200+ feature gates, and unreleased features like KAIROS autonomous mode and undercover mode (ccunpacked.dev, 2026).

Why did Anthropic take down 8,100 GitHub repositories?

They filed DMCA takedown notices targeting the entire fork network of the repo hosting the leaked code. Many repos contained no leaked material. Anthropic later called the mass takedown accidental and retracted most notices (TechCrunch, 2026).

Is my data at risk from the Claude Code leak?

No. This was source code, not user data. That said, the source did reveal how session data is handled and that feature flags exist to disable security headers remotely.

What is Claw Code?

Someone ported Claude Code’s core architecture to Python and Rust from scratch within hours of the leak. It’s a clean-room rewrite, not a fork. 146,000 stars and 101,000 forks in under 48 hours. Not affiliated with Anthropic (GitHub).


Sources: