Claude Code source leak
Industry · April 1, 2026

The Claude Code Leak: When Undefined Control Points Become Business Risks

A known bug in Anthropic's own build tool went unfixed for 20 days — and exposed the source code of the most important product from the industry's leading AI company. What the Claude Code incident teaches us about defining critical control points.

On March 31, the source code of Claude Code — Anthropic's flagship AI terminal tool — was accidentally published to npm. Over 512,000 lines of original TypeScript were reconstructed by the community in a matter of hours. What makes the incident remarkable is not only its scale, though it is the most significant source code leak in the AI industry and exposed the flagship product of its leading company; it is how avoidable it was.

The Chain of Events

In late 2025, Anthropic acquired Bun, a high-performance JavaScript runtime, and adopted it as the engine behind Claude Code. On March 11, 2026, a bug was reported in Bun's build system — issue #28001. The bug caused source maps to be included in production builds even when explicitly configured to be excluded.

Twenty days later, Claude Code version 2.1.88 was published to npm with the file cli.js.map accidentally included. That single file contained everything needed to reconstruct the original source code in full, readable TypeScript.
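The mechanics are worth understanding: a JavaScript source map is plain JSON, and when its `sourcesContent` field is populated, it embeds the complete original source files verbatim. A minimal sketch of how anyone could dump originals from such a file — the map below is a tiny invented example, not Anthropic's actual `cli.js.map`:

```python
import json

# Hypothetical, minimal source map. Real maps produced by bundlers are
# structurally identical: when "sourcesContent" is present, each entry
# is the full, verbatim text of the corresponding source file.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/index.ts", "src/agent.ts"],
  "sourcesContent": [
    "export const main = () => console.log('hello');\\n",
    "export class Agent {}\\n"
  ],
  "mappings": "AAAA"
}
""")

def dump_sources(smap: dict) -> dict[str, str]:
    """Pair each source path with its embedded original text."""
    return dict(zip(smap["sources"], smap.get("sourcesContent") or []))

recovered = dump_sources(source_map)
for path, text in recovered.items():
    print(f"--- {path} ({len(text)} chars)")
```

No decompilation and no guesswork are involved: the original TypeScript travels inside the map, which is why a single stray `.map` file in a published package is equivalent to publishing the repository itself.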

An Unfortunate Event for Anthropic

Anthropic is our trusted LLM provider — and seeing them affected by something like this is a reminder that no company is immune. They were the victim of a bug in their own tool: Bun, which they owned and maintained, had a documented defect that directly affected their production pipeline. The bug was known. It was reported. It simply wasn't treated as a critical control point.
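What would "treating it as a critical control point" look like in practice? One answer is a release gate: an automated check that refuses to publish if a source map is about to ship. The sketch below is hypothetical — it is not Anthropic's pipeline — but it shows how small such a safeguard can be:

```python
import pathlib
import tempfile

# Hypothetical prepublish gate: refuse to publish if the staged package
# directory contains source maps. File names and policy are illustrative.
FORBIDDEN_SUFFIXES = (".map",)

def find_leaks(package_dir: str) -> list[str]:
    """Return every forbidden file staged for publication."""
    root = pathlib.Path(package_dir)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.is_file() and p.suffix in FORBIDDEN_SUFFIXES
    )

def gate(package_dir: str) -> int:
    """Exit code for CI: nonzero blocks the publish step."""
    leaks = find_leaks(package_dir)
    if leaks:
        print("refusing to publish, found:", ", ".join(leaks))
        return 1
    return 0

# Simulated staging directory with a stray map, mirroring the incident.
with tempfile.TemporaryDirectory() as d:
    (pathlib.Path(d) / "cli.js").write_text("console.log('hi')")
    (pathlib.Path(d) / "cli.js.map").write_text("{}")
    status = gate(d)
```

The point is not this particular script — it is that once a pipeline is classified as critical, a known bug upstream of it (like the Bun issue) triggers a compensating check rather than waiting twenty days for a fix.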

A Company That Knows How to React — But Missed This One

Anthropic is no stranger to protecting their assets. When they detected unauthorized distillation of their models, they responded by implementing anti-distillation techniques — including injecting fake tool definitions into API responses to poison the training data of competitors attempting to copy their logic. They identified the critical control point and acted. But with Claude Code, a bug reported in their own build tool 20 days before the leak was not treated with the same urgency. The pattern isn't that Anthropic doesn't know how to define critical control points — it's that even companies that do can miss them when they're not systematically mapped.

What This Teaches Us About AI Implementation

At Flintworks, we see this as a case study of what our BAAF® Framework addresses in its second layer: Digital Connectors (L2). This layer maps every digital asset of the business — APIs, databases, communication channels, third-party systems, development pipelines — and defines which ones are critical control points.

In Anthropic's case, the connector not classified as critical was a CI/CD system — their build and distribution pipeline. For a business implementing agentic AI systems, the critical connectors are varied: LLM provider APIs, customer communication channels, context databases, webhooks connecting workflows. The principle is the same: if you don't define which ones are critical, any of them can become the failure point that compromises everything.

L2's core attribute is Connection: AI maintains its integrity when connectors operate correctly. Anthropic's development pipeline connector failed them. Your business could just as easily be undone by a connector in your customer service pipeline.

The Lesson for Businesses

If one of the most advanced AI companies in the world can miss a critical control point in their own tooling, the question for every business implementing AI is clear: have you mapped your dependencies? Do you know which ones are critical?

AI implementation isn't just about choosing the right model or building the right agent. It's about understanding every link in the chain — from your business model to your digital assets and customer service channels — and knowing exactly where the risks are.

"The Claude Code leak is a clear example of what happens when critical control points are not defined. Anthropic knows how to react — they proved it with their anti-distillation techniques on their LLM models. But a bug reported 20 days earlier in their own build tool was not treated with the same urgency. In our BAAF® methodology, the Digital Connectors layer seeks to identify critical dependencies for the business, so nothing falls through the cracks." — Luis Maroto, Founder

Want to understand how the BAAF® Framework helps you map and protect your AI infrastructure? Learn about our methodology or contact us to start the conversation.