
The LiteLLM supply chain attack: how TeamPCP backdoored the AI development ecosystem

Saish Bhorpe, AI Security Researcher


7 min read


TL;DR

  • On March 24, 2026, TeamPCP published backdoored versions of litellm (1.82.7 and 1.82.8) to PyPI after stealing the maintainer's credentials through a compromised Trivy security scanner GitHub Action.

  • The malware harvested SSH keys, AWS/GCP/Azure credentials, Kubernetes secrets, environment files, and crypto wallets, then exfiltrated everything to an attacker-controlled domain.

  • LiteLLM has 95 million monthly downloads and is a direct dependency of CrewAI, Browser-Use, DSPy, Mem0, Instructor, and five other major AI frameworks. The blast radius is larger than almost any prior PyPI supply chain attack.

  • The attack was discovered because the malware contained a bug: an unintentional fork bomb caused by recursive subprocess spawning that consumed all available RAM and alerted a developer.

  • TeamPCP has now exfiltrated an estimated 500,000+ corporate identities and 300 GB+ of compressed credentials across this campaign.

If your team is building AI agents, you almost certainly have LiteLLM in your dependency graph. It provides a unified interface across 100+ LLM providers and sits beneath CrewAI, Browser-Use, DSPy, Mem0, Instructor, Guardrails, Agno, Opik, and Camel-AI. On March 24, 2026, all of that became a credential-harvesting surface for three hours.

Here is what happened, how the attacker got in, what the malware did, and what the discovery story tells you about the state of AI supply chain security.

Why LiteLLM was the target

LiteLLM is not a flashy package. It does one unglamorous thing: abstracts LLM API calls so developers do not have to write separate integrations for each provider. That utility is why it is everywhere.

The numbers bear that out: 95 million monthly downloads, 3.4 million per day. Direct dependents include CrewAI, Browser-Use, Opik, DSPy, Mem0, Instructor, Guardrails, Agno, and Camel-AI. Every team building an AI agent in Python has almost certainly installed it, either directly or through one of those frameworks.

From an attacker's perspective, this is the optimal supply chain target: a utility package with no direct user interaction, deeply embedded in production AI stacks, carrying the kind of environment that contains API keys for every LLM provider a team uses, cloud credentials, and Kubernetes access.

The attacker did not need to compromise every target individually. Compromising one package upload touched all of them simultaneously.

The attack chain: a security scanner became the entry point

TeamPCP's approach to getting into LiteLLM's PyPI account is the most instructive part of this incident.

Phase 1: Compromise Trivy's GitHub Action. Trivy is an open-source security scanner maintained by Aqua Security. LiteLLM's CI/CD pipeline used Trivy's GitHub Action for container scanning. TeamPCP compromised that Action, using it as a position inside LiteLLM's build process. From there, they exfiltrated the LiteLLM maintainer's PyPI credentials.

This is a second-order supply chain attack: the attacker did not compromise LiteLLM directly. They compromised a security tool that LiteLLM trusted, then used that trust to move laterally. The tool you run to verify your security posture became the vulnerability. The same TeamPCP campaign had previously compromised Checkmarx's KICS GitHub Action using the same pattern.

Phase 2: Publish malicious PyPI versions. With the maintainer's credentials, TeamPCP uploaded litellm 1.82.7 and 1.82.8 to PyPI on March 24, 2026. Both versions contained credential-harvesting payloads injected through different methods. The packages were live and downloadable for approximately three hours before PyPI quarantined them.

Three hours. With 3.4 million daily downloads, that window exposed hundreds of thousands of installs.

What the malware did

The payload was a multi-stage credential stealer. On execution, it systematically collected:

  • SSH keys

  • AWS, GCP, and Azure credentials

  • Kubernetes secrets and config files

  • Environment files (.env and equivalents)

  • Database connection strings

  • Cryptocurrency wallet files

All harvested data was encrypted using a hybrid scheme, archived as tpcp.tar.gz, and exfiltrated to an attacker-controlled domain. The archive name tpcp.tar.gz and the string "TeamPCP Cloud Stealer" hardcoded in the payload left no ambiguity about attribution.

For an AI development environment, this credential set is particularly damaging. A team building LLM-powered agents typically has LLM provider API keys, cloud credentials for inference infrastructure, database access for RAG pipelines, and Kubernetes access for agent orchestration all in the same environment. One compromised install hands an attacker the entire application stack.
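The credential types listed above map to well-known file locations, which makes it straightforward to scope what an affected machine could have leaked. A minimal audit sketch, assuming common default paths (these are typical locations, not taken from the actual payload):

```python
# Defensive triage sketch: enumerate the kinds of credential files this
# post says the stealer targeted, so you can scope what to rotate.
# The paths below are common defaults and an assumption, not a list
# recovered from the malware.
from pathlib import Path

CANDIDATES = [
    "~/.ssh",                # SSH keys
    "~/.aws/credentials",    # AWS access keys
    "~/.config/gcloud",      # GCP credentials
    "~/.azure",              # Azure client credentials
    "~/.kube/config",        # Kubernetes kubeconfig
    ".env",                  # environment file in the working directory
]

def present_credential_paths(candidates=CANDIDATES):
    """Return the subset of candidate paths that exist on this machine."""
    return [p for p in candidates if Path(p).expanduser().exists()]

for path in present_credential_paths():
    print(f"rotate anything reachable via: {path}")
```

Anything this turns up on a machine that ran an affected version should be assumed exfiltrated and rotated.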

How it was discovered: the bug that revealed the attack

Here is the detail that makes this incident instructive beyond the technical specifics.

Developer dot_treo noticed a process consuming all available system RAM and reported it. Investigation traced the cause to the malware payload itself. The payload spawns a new Python subprocess on execution. That subprocess triggers .pth file execution, which runs the payload again, which spawns another subprocess. An unintended fork bomb.

This is an attacker execution error. The credential-harvesting logic worked. The subprocess spawning was either sloppily implemented or tested in an environment where the recursion was not triggered. The result was that the malware announced itself through resource exhaustion before completing exfiltration on many affected systems.
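The recursion described above can be sketched in a few lines. This is an illustration of the spawn-chain logic, not the actual payload; the depth guard is added so the sketch terminates, and its absence is precisely the attacker's bug:

```python
# Illustrative sketch of the .pth-driven spawn chain. Python's site
# module executes code embedded in .pth files at every interpreter
# startup, so a payload that launches a fresh Python interpreter
# re-triggers itself in the child, forming a growing chain of live
# processes. We simulate the chain with recursion plus a guard; the
# real payload had no guard, which is what exhausted system RAM.
def spawn_chain(generation: int, guard: int = 5) -> int:
    if generation >= guard:  # the termination condition the malware lacked
        return generation
    # Real payload (roughly): subprocess.Popen([sys.executable, ...]),
    # whose child interpreter re-runs the .pth and lands back here.
    return spawn_chain(generation + 1, guard)

print(spawn_chain(0))  # prints 5; the real chain never terminated
```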

The community response was fast. The disclosure spread to r/LocalLLaMA, r/Python, and Hacker News within the hour. PyPI quarantined the packages within three hours of publication. The speed of community detection limited the damage.

Attackers make implementation mistakes, and visible resource anomalies are meaningful detection signals for supply chain malware. Without the fork bomb, the attack would have run silently. Credential harvesting that does not spike RAM would not have been noticed in the same timeframe.

The blast radius

LiteLLM's 480 million total downloads and the breadth of its dependency graph make this one of the highest-blast-radius AI supply chain incidents on record.

Beyond LiteLLM itself, the TeamPCP campaign expanded before and after this incident. The group used credentials stolen from the Trivy compromise to inject a self-propagating worm (CanisterWorm) into 50+ npm packages. The multi-ecosystem scope — PyPI, npm, GitHub Actions, OpenVSX — reflects a campaign designed to maximize credential harvest across every major developer toolchain simultaneously.

The cumulative impact across the campaign: 500,000+ corporate identities affected, 300 GB+ of compressed credentials exfiltrated. FBI Assistant Director of Cyber Division Brett Leatherman stated: "Given the volume of stolen credentials across likely thousands of downstream environments, expect an increase in breach disclosures, follow-on intrusions, and extortion attempts in the coming weeks."

What it means for AI security teams

Three structural issues made this attack possible and will enable the next one.

The AI development toolchain is not being treated as attack surface. AI teams audit their models and applications. Few maintain a complete inventory of the packages those applications depend on, the CI/CD tooling that builds them, and the GitHub Actions that run in their pipelines. LiteLLM was in millions of environments. Almost none of those teams would have named it as a security risk prior to March 24.

Security scanners are trusted implicitly. Trivy was the entry point because it was trusted. The assumption that your security tooling is uncompromised is not verifiable without out-of-band monitoring. A supply chain attack that compromises the tool you use to detect supply chain attacks is a class of attack that standard security hygiene does not catch.

AI agent environments concentrate high-value credentials. A compromised package in a traditional application might yield database credentials. A compromised package in an AI agent environment yields LLM provider keys, cloud orchestration credentials, and Kubernetes access simultaneously. The credential density in an agentic development environment is significantly higher than in a traditional backend. This makes AI-adjacent packages priority targets for supply chain attackers, as we covered in the ClawHavoc supply chain attack analysis earlier this year.

This is also the inverse of what we documented in vibe-coded applications: attackers using AI-assisted tools to generate malware introduce the same class of implementation bugs that vibe-coded application code carries. The fork bomb that exposed this attack was a slop code artifact on the attacker's side.

Immediate response checklist

If your environment used litellm at any point during the window around March 24, 2026:

  1. Determine exposure. Check whether litellm 1.82.7 or 1.82.8 was ever installed in any environment, including CI/CD and development machines. Run pip show litellm on all relevant systems. Check pip install logs if available.

  2. Rotate all credentials. AWS access keys and secret keys. GCP service account keys. Azure client credentials. Kubernetes service account tokens and kubeconfig credentials. SSH keys accessible from the affected environment. Database connection strings. LLM provider API keys. Do not rotate selectively; rotate everything present in the environment.

  3. Review outbound logs. Check for connections to unknown domains, particularly any DNS resolution or HTTP requests around the time litellm was installed or updated. Look for tpcp.tar.gz in process history or file system artifacts.

  4. Audit your dependency graph. Identify every package in your environment that has LiteLLM as a transitive dependency. CrewAI, Browser-Use, DSPy, Mem0, Instructor, Guardrails, Agno, Opik, and Camel-AI all warrant review. Pin dependency versions to verified hashes in requirements.txt or pyproject.toml.

  5. Review CI/CD pipeline Actions. Audit every GitHub Action used in your pipelines. Any Action sourced from a third party should be pinned to a specific commit hash, not a version tag.
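Step 1 of the checklist can be automated across machines. A minimal exposure-check sketch using the standard library (the two version strings are the ones named in this post):

```python
# Sketch: flag whether the installed litellm version is one of the two
# backdoored releases named in this post. importlib.metadata is stdlib,
# so this runs anywhere Python 3.8+ does.
from importlib import metadata

AFFECTED = {"1.82.7", "1.82.8"}

def is_affected(version: str) -> bool:
    """True if the version string matches a backdoored release."""
    return version in AFFECTED

def check_litellm() -> str:
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "litellm is not installed in this environment"
    if is_affected(installed):
        return f"AFFECTED: litellm {installed} -- treat as compromised"
    return f"litellm {installed} is outside the affected set"

print(check_litellm())
```

Run it in every virtualenv, container image, and CI runner, not just developer laptops; the malicious version may have been installed only in a build environment.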

The longer-term structural fix is maintaining a live AI Bill of Materials: a continuously updated inventory of every AI model, agent, framework, and dependency in your environment, with provenance tracking that flags unexpected changes. AI Inventory builds this automatically across your AI stack.

Frequently asked questions

What versions of LiteLLM are affected? Versions 1.82.7 and 1.82.8. Both were published to PyPI on March 24, 2026, and quarantined within approximately three hours. Any environment that installed or updated LiteLLM during that window should be treated as potentially compromised. Versions before 1.82.7 and the safe release issued afterward are not affected.

How do I know if my system was compromised? Check for litellm 1.82.7 or 1.82.8 in your installed packages (pip show litellm). If the fork bomb triggered, you would have seen extreme memory consumption — a process consuming all available RAM. If it did not trigger, the exfiltration may have run silently. Treat any environment that had the affected version as compromised and rotate credentials regardless.

What credentials should I rotate? All of them present on the affected system: AWS, GCP, and Azure keys, Kubernetes tokens and kubeconfig credentials, SSH keys, database connection strings, LLM provider API keys, and any .env file values. The malware was designed to collect all of these. Rotating a subset leaves the rest at risk.

How did TeamPCP get into LiteLLM's PyPI account? By compromising Trivy's GitHub Action, which LiteLLM used in its CI/CD pipeline for container scanning. This gave TeamPCP access to the LiteLLM build environment, from which they exfiltrated the maintainer's PyPI credentials. The same campaign had previously used an identical pattern against Checkmarx's KICS GitHub Action.

What is the TeamPCP campaign? TeamPCP (also tracked as PCPcat, Persy_PCP, ShellForce, and DeadCatx3) is a threat actor active since at least December 2025, running coordinated supply chain attacks across PyPI, npm, GitHub Actions, and OpenVSX. Their targets include Trivy, Checkmarx KICS, LiteLLM, and multiple npm packages. The group runs Telegram channels at @Persy_PCP and @teampcp and embeds "TeamPCP Cloud Stealer" in payload artifacts. The cumulative credential harvest across the campaign is estimated at 500,000+ corporate identities.

Know what AI dependencies are running in your environment

AI Inventory automatically discovers every AI model, agent, framework, and dependency across your organization, building a live AI Bill of Materials with provenance tracking. Supply chain attacks succeed because teams do not know what they have. Get a demo.


8 The Green, Ste A
Dover, DE 19901, United States of America


© Repello Inc. All rights reserved.
