Industry · March 30, 2026

The LiteLLM Incident: A Wake-Up Call for AI Infrastructure Security

The LiteLLM supply chain attack compromised API keys for OpenAI, Anthropic, and Azure across 36% of cloud environments. Flintworks was not affected — here's what happened and why AI infrastructure security matters.

On March 24, a threat actor known as TeamPCP published two backdoored versions of LiteLLM — one of the most widely used AI proxy libraries in the Python ecosystem — to PyPI. In what is being called the largest supply chain attack in AI history, anyone who installed version 1.82.7 or 1.82.8 during a three-hour window unknowingly deployed a credential stealer targeting OpenAI, Anthropic, and Azure API keys, cloud provider credentials, SSH keys, and Kubernetes secrets.

What Is LiteLLM and Why Does It Matter?

LiteLLM is an AI gateway that sits between your applications and LLM providers. By design, it has access to every API key you configure. It's present in roughly 36% of cloud environments and is downloaded 3.4 million times per day. Compromising this single library gave attackers a direct path to the most sensitive credentials in AI infrastructure.
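To see why a gateway is such an attractive target, here's a minimal sketch of the pattern. The class and method names are illustrative only, not LiteLLM's actual internals — the point is that one process ends up holding every provider's key:

```python
import os

# Hypothetical sketch of the gateway pattern: one component loads a key
# per provider and routes every request through itself. Names here are
# illustrative, not LiteLLM's real API.
class GatewaySketch:
    def __init__(self):
        # A single process holds credentials for every configured provider.
        self.keys = {
            "openai": os.environ.get("OPENAI_API_KEY", ""),
            "anthropic": os.environ.get("ANTHROPIC_API_KEY", ""),
            "azure": os.environ.get("AZURE_API_KEY", ""),
        }

    def route(self, model: str) -> str:
        # Pick the provider's key based on a "provider/model" prefix.
        provider = model.split("/", 1)[0]
        if provider not in self.keys:
            raise ValueError(f"unknown provider: {provider}")
        # A compromised gateway can leak every key in this dict at once.
        return self.keys[provider]
```

Compromise that one component and you get everything it routes — which is exactly what made this attack so damaging.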

How Did It Happen?

The attack was a cascading supply chain compromise. First, the attackers poisoned Trivy — a popular open source security scanner — by rewriting Git tags in its GitHub Action repository. When LiteLLM's CI/CD pipeline ran Trivy as part of its build process, the compromised scanner exfiltrated the maintainer's PyPI publishing token. With that token, TeamPCP published the backdoored versions directly to PyPI.

The Payload: A Three-Stage Attack

The malicious code deployed three components:

- a credential harvester that swept SSH keys, cloud credentials, .env files, and cryptocurrency wallets;
- a Kubernetes lateral movement toolkit that deployed privileged pods across every node; and
- a persistent systemd backdoor for ongoing remote access.

All stolen data was encrypted and exfiltrated to a domain mimicking legitimate LiteLLM infrastructure.
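If you run LiteLLM, the first triage step is checking whether one of the backdoored versions is installed. A minimal stdlib-only check might look like this (the function name is ours; the version set comes from the advisory above):

```python
from importlib.metadata import version, PackageNotFoundError

# The two backdoored releases named in the advisory.
COMPROMISED = {"1.82.7", "1.82.8"}

def litellm_exposure(package: str = "litellm") -> str:
    """Return a rough exposure verdict for the installed package, if any."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return "not installed"
    return "COMPROMISED" if installed in COMPROMISED else f"clean ({installed})"
```

A clean version check is only the start, of course — any environment that ran a compromised build should rotate every credential the gateway could see.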

Flintworks Was Not Affected

We don't use LiteLLM in our infrastructure. We connect to LLM providers directly or through trusted gateways like Amazon Bedrock — we don't rely on third-party open source proxy layers that concentrate all credentials in a single point of failure. This incident reinforces why choosing your AI infrastructure carefully, and managing its security deliberately, is not something to leave to chance.

The Bigger Picture

This attack is a wake-up call for every business running AI in production. If your team installed packages without pinned versions, if your CI/CD pipeline pulls dependencies without verification, or if you don't know exactly which AI libraries are running in your environment — you may be more exposed than you think. AI infrastructure security isn't optional. It's foundational.
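Concretely, "pinned versions with verification" means installing only artifacts whose cryptographic hash matches one you recorded in advance — this is what pip's real `--require-hashes` mode enforces. The underlying check is simple to sketch with the standard library (function names here are ours, for illustration):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the sha256 digest of a downloaded artifact (e.g. a wheel)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, pinned_sha256: str) -> bool:
    """Refuse anything whose digest doesn't match the pinned hash."""
    return sha256_of(path) == pinned_sha256
```

Had the backdoored wheels been installed through a hash-pinned requirements file, the mismatched digests would have failed the install instead of deploying the payload.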