
What We Learned: Axios NPM Supply Chain Compromise Emergency Briefing

Key Takeaways and Actionable Guidance

Authored by SANS Institute

SANS Faculty Fellow Joshua Wright and Certified Instructor Rich Greene went live from SANS 2026 Orlando to break down the Axios NPM supply chain compromise as it was still unfolding. Both were mid-course and filmed during their lunch break because that’s the kind of response this moment demanded.

If you haven’t watched the replay yet, here’s what you need to know.

What Happened

Just after midnight UTC on March 31, a threat actor published compromised versions of the axios NPM package (v1.14.1 and v0.30.4), one of the most widely used JavaScript libraries in the world, with an estimated 80 to 100 million downloads per week. The malicious versions introduced a dependency called plain-crypto-js, which deployed a remote access trojan across Windows, macOS, and Linux systems. The package was live for approximately three hours before it was pulled. In that window, an estimated 600,000 installs may have occurred.

The RAT immediately began harvesting credentials: GitHub personal access tokens, AWS keys, Azure credentials, and other authentication material. No user interaction was required. If you pulled the package, you were compromised.

Why This Is Bigger Than One Package

Josh made the point that supply chain risk isn’t about the software you install. It’s about the software your software installs. He used the example of 7-Zip, which has 300 external dependencies. Your application may not call axios directly, but a library you depend on might, and a library that library depends on might.

As Josh put it: "It’s turtles all the way down."

This is exactly what Josh presented on at RSAC last week, where he warned that supply chain compromise is too rich an opportunity for attackers to ignore. Five days later, here we are.

Who May Be Behind This

Attribution is still early, but Josh shared that this incident has hallmarks of Team PCP, the English-speaking threat group linked to the Trivy and LiteLLM supply chain attacks. However, the axios attack shows some differences in sophistication that suggest a second actor may be involved, possibly DPRK-affiliated.

Josh’s working theory: Team PCP may be monetizing earlier compromised access by selling it to other threat groups. That’s a significant development in how supply chain attacks are being operationalized.

What the Multi-Platform Payload Tells Us

The compromised package delivered platform-specific payloads: AppleScript for macOS, PowerShell for Windows, and Python for Linux. Josh flagged the speed and coordination of this multi-platform delivery as a potential indicator of AI-enabled automation. While not confirmed, the operational tempo suggests tooling that goes beyond a single team working manually.

SANS President Ed Skoudis, who raised the initial alarm on this incident and approved the emergency response, put it plainly: the multi-platform reach isn’t just a technical detail. It’s a signal of intent. "It represents an attacker with more thorough and widespread aims. These people were reaching for the stars and compromising everything within sight."

What to Do Now: Three Audiences, Three Priorities

Josh was clear that remediation requires cross-functional coordination. This isn’t a ticket for one team.

Incident Response Teams

If developers or DevOps confirm exposure, treat this as a confirmed incident. Josh walked through the SANS response actions loop: scope, contain, eradicate, recover.

  • Start by scoping what credentials the attacker could have accessed.
  • If a compromised system had AWS credentials, your entire AWS environment is now in scope.
  • Containment doesn’t necessarily mean shutting everything off, but it does mean immediately restricting the attacker’s access through security groups, ACLs, and credential rotation.
  • Be alert to credential sprawl, where one set of stolen keys leads to another, then another. Josh warned this is a pattern he presented on at RSAC last year that applies directly here.
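The scoping and credential-sprawl points above amount to a transitive walk: each stolen credential may unlock systems that hold further credentials, and everything reachable is in scope. A minimal sketch of that idea (the credential names and the access graph here are hypothetical, purely for illustration):

```python
from collections import deque

def scope_exposure(initial_creds, unlocks):
    """Breadth-first walk of credential sprawl: starting from the credentials
    the attacker is known to hold, collect everything transitively reachable."""
    in_scope = set(initial_creds)
    queue = deque(initial_creds)
    while queue:
        cred = queue.popleft()
        for reachable in unlocks.get(cred, ()):
            if reachable not in in_scope:
                in_scope.add(reachable)
                queue.append(reachable)
    return in_scope

# Hypothetical example: an AWS key found on a build host reaches a secrets
# bucket holding a GitHub PAT, which in turn unlocks a repo deploy key.
unlocks = {
    "aws-key-build01": ["s3-secrets-bucket"],
    "s3-secrets-bucket": ["github-pat-ci"],
    "github-pat-ci": ["repo-deploy-key"],
}
print(sorted(scope_exposure(["aws-key-build01"], unlocks)))
```

The point of writing it down this way: scope is defined by reachability, not by where the malware ran. If the walk touches an AWS key, the whole account it opens is in scope.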

DevOps / CI/CD Teams

If you run nightly builds, smoke builds, or any automated pipeline that resolves NPM dependencies, check whether any builds ran between 00:00 and 03:00 UTC on March 31. If those builds pulled axios, assume compromise. This is especially critical because CI/CD environments often hold high-value secrets: NPM tokens, SSH keys, cloud credentials, and API keys.
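One way to triage pipeline runs is to export each build's start time and compare it against the exposure window. A small sketch, assuming you can pull build timestamps from your CI system; note the ~03:00 end of the window is approximate, so this sketch pads it by an hour and errs on the side of inclusion:

```python
from datetime import datetime, timezone

# Exposure window: 00:00 to ~03:00 UTC, March 31, 2026. The end time is
# approximate, so pad it (assumption: one extra hour) to be safe.
WINDOW_START = datetime(2026, 3, 31, 0, 0, tzinfo=timezone.utc)
WINDOW_END = datetime(2026, 3, 31, 4, 0, tzinfo=timezone.utc)

def build_in_window(started_at: datetime) -> bool:
    """Return True if a build started inside the (padded) exposure window."""
    return WINDOW_START <= started_at.astimezone(timezone.utc) <= WINDOW_END

# Example: a nightly build that kicked off at 01:15 UTC on March 31 should be
# treated as exposed, along with every secret that pipeline could read.
nightly = datetime(2026, 3, 31, 1, 15, tzinfo=timezone.utc)
print(build_in_window(nightly))
```

A flagged build doesn't just mean a tainted artifact; it means every NPM token, SSH key, and cloud credential visible to that pipeline run should be rotated.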

Developers

Search all source code repositories for any use of axios, whether direct or as a transitive dependency. Check for versions 1.14.1 or 0.30.4 specifically. If you find the package plain-crypto-js anywhere in your node_modules, that is a confirmed indicator of compromise.
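For transitive dependencies, scanning lockfiles is usually faster and more reliable than grepping source. A minimal sketch that checks an npm `package-lock.json` for the compromised axios versions or any `plain-crypto-js` entry; it assumes the v2/v3 lockfile format with a top-level `"packages"` map, so adapt it for older lockfiles:

```python
import json

BAD_AXIOS_VERSIONS = {"1.14.1", "0.30.4"}

def find_iocs(lockfile_path: str) -> list[str]:
    """Return hits for the compromised axios versions or the malicious
    plain-crypto-js dependency in an npm v2/v3 package-lock.json."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/axios" or
        # "node_modules/foo/node_modules/axios" for nested installs.
        name = path.rsplit("node_modules/", 1)[-1]
        if name == "axios" and meta.get("version") in BAD_AXIOS_VERSIONS:
            hits.append(f"{path}: axios {meta['version']}")
        if name == "plain-crypto-js":
            hits.append(f"{path}: plain-crypto-js (confirmed IOC)")
    return hits
```

As a quicker first pass, `npm ls axios` in each repository will show whether axios appears anywhere in the resolved dependency tree and at which versions.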

Why This Isn’t Over

Ed Skoudis stressed that the credential access from this attack is the real story. The axios compromise itself was an opening move. The attackers used it to harvest credentials that give them access to far more than one package. "The attackers, if they are smart, will go quiet now and then sometime later surprise us when they show that they have access to other packages through this attack we didn’t know that they nabbed."

In other words: today’s incident response is about axios. Tomorrow’s is about everything those stolen credentials unlock. Ed was direct: "This attack likely has long-term legs through the credentials gathered today, and we will face the fallout of that for months and beyond."

That means credential rotation isn’t optional and it isn’t a one-time task. If your environment was exposed, assume the attacker has tokens, keys, and access you haven’t found yet. Plan accordingly.

Key IOCs to Monitor

  • Malicious Versions: axios v1.14.1 and v0.30.4
  • Malicious Dependency: plain-crypto-js (any version)
  • C2 Domain: sfrclak[.]com
  • Exposure Window: 00:00 to ~03:00 UTC, March 31, 2026
  • Payloads: AppleScript (macOS), PowerShell (Windows), Python (Linux)
  • Targets: GitHub PATs, AWS keys, Azure credentials, SSH keys, cloud tokens
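The C2 domain above is defanged so it stays safe to copy around. A small sketch of sweeping DNS or proxy logs for it, re-fanging the indicator in code rather than in the document (the sample log lines are hypothetical):

```python
# Re-fang the defanged IOC from the list above.
C2_DOMAIN = "sfrclak[.]com".replace("[.]", ".")

def scan_log_lines(lines):
    """Yield (line_number, line) for any log line mentioning the C2 domain."""
    for n, line in enumerate(lines, start=1):
        if C2_DOMAIN in line.lower():
            yield n, line.rstrip()

# Hypothetical DNS log excerpt:
sample = [
    "2026-03-31T01:12:04Z query A api.github.com",
    "2026-03-31T01:12:09Z query A telemetry.sfrclak.com",
]
for n, line in scan_log_lines(sample):
    print(f"hit at line {n}: {line}")
```

Any hit means a host in your environment phoned home during or after the exposure window and should go straight into incident-response scoping.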

Additional IOCs will be published on the SANS blog as they are confirmed.

One More Thing

Josh closed with something worth repeating. Incident response is a marathon, not a sprint. If your team got the 2 AM call this morning, they have long nights ahead. Make sure they’re resourced, rested, and supported.

As Josh put it: "A hungry incident response team is not an effective incident response team."

Take care of your people. They’re going to need it.

Watch the full replay

Read the technical blog with IOCs

Follow SANS social channels for updates as new information develops.

We will continue to update the community as more details emerge. If your organization has been affected, SANS is here to help.