There is a comic by Randall Munroe — xkcd number 2347 — that has become shorthand for the modern software stack. It depicts digital infrastructure as a precarious tower balanced on a tiny, underappreciated open source project. The joke lands because anyone who has traced a dependency graph through npm, pip, Maven, Cargo or another package manager knows how much production software rests on code maintained by strangers.
In March 2024, the joke stopped being funny. XZ Utils, a compression library embedded deep in the Linux ecosystem, was discovered to contain a sophisticated backdoor. XZ Utils ships with virtually every Linux distribution — it compresses packages, kernel images, and system archives across the entire ecosystem. On distributions that patch OpenSSH to link against libsystemd, which in turn depends on liblzma, the backdoor gave its operator a path into the SSH daemon itself: the protocol used to remotely administer nearly every Linux server on the internet. The project had effectively come to depend on a single long-serving maintainer, Lasse Collin, who had publicly described his limited capacity, long-term mental health issues, and the fact that the work was unpaid.[1]
Someone had been listening. And they had been patient.
The long con
The account that would eventually plant the backdoor appeared in October 2021. Operating under the name Jia Tan, the contributor began with small, legitimate patches — an .editorconfig file here, a minor fix there. Over the following months, the contributions grew in scope and quality. Code reviews. Translations. CI/CD improvements. Infrastructure work. Every commit looked genuine and useful, and together they built trust.[2]
Then came the pressure campaign. In May and June of 2022, at least two suspected sock puppet accounts — Jigar Kumar and Dennis Ens — appeared on the xz-devel mailing list. Their messages were pointed and persistent: the project was releasing too slowly, the maintainer was not keeping up, new co-maintainers were needed. When Collin responded publicly that he had not lost interest but that long-term mental health issues had limited his capacity to care about the project, one of the accounts replied with calculated sympathy, gently suggesting he recognise his limits and pass maintainership to someone more active.[1]
By 2023, Jia Tan's role had expanded well beyond drive-by fixes. The contributions now reached into infrastructure and lower-level technical areas that later became important to understanding how the backdoor worked.[3]
The backdoor itself was planted across January and February 2024, hidden in binary test files that appeared to be corrupted compressed data. The malicious code existed only in the release tarballs distributed to package managers, not in the git repository source — meaning anyone reviewing the source code on GitHub would see nothing suspicious. A modified build process decoded the test files during compilation and injected malicious logic into liblzma. From there, the payload used glibc's indirect-function machinery to hijack RSA_public_decrypt at runtime, creating a path to unauthenticated access or code execution during the SSH authentication flow.[4][5]
The dependency chain that made this possible was subtle. OpenSSH itself was not directly linked against liblzma, but downstream builds could still pull liblzma into sshd through systemd-related integration.[4][5]
CVE-2024-3094 received a CVSS score of 10.0, and the malicious code was present in XZ Utils 5.6.0 and 5.6.1.[4] On affected builds, the backdoor could give an attacker unauthenticated remote code execution during the SSH authentication flow.[4][5]
Had it not been caught, the consequences would have been severe. Fedora 40 and Ubuntu 24.04 LTS — two of the most widely deployed Linux distributions — were weeks from stable release with the compromised packages included. Those releases would have propagated to millions of production servers, cloud instances, container images, and development machines. Every affected system running OpenSSH with the systemd-linked sshd would have been silently accessible to whoever held the Ed448 private key — government infrastructure, financial systems, healthcare networks, cloud platforms — all reachable without authentication. The attacker would not have needed to breach any of them individually. The backdoor would have been delivered as a routine system update, installed by administrators doing exactly what security best practice tells them to do: keep your software up to date.[2][5]
The discovery was accidental. Andres Freund was running performance benchmarks on a Debian unstable system when he noticed that SSH logins were taking roughly 500 milliseconds longer than expected. The sshd process was consuming surprising amounts of CPU even when connections immediately failed. Freund traced the anomaly to liblzma, followed the build scripts, and disclosed the backdoor on the oss-security mailing list on 29 March 2024.[5] It was a human investigation prompted by a strange performance signal, not a routine package audit.
Five hundred milliseconds. That was the distance between a routine benchmark and a supply chain attack with unusually broad potential consequences.
The XZ Utils incident was not the first time a disengaged maintainer had been socially engineered into handing over access. In November 2018, the event-stream npm package was compromised after a developer called right9ctrl convinced the original maintainer, Dominic Tarr, to hand over publishing rights. Tarr had lost interest in the project and was happy to let someone else take over. The new maintainer added a dependency called flatmap-stream containing encrypted malicious code that specifically targeted the Copay cryptocurrency wallet. The attack went undetected for roughly two and a half months.[6]
The pattern is structural. Harvard and Linux Foundation research found that many critical open source projects still depend on very small maintainer groups, frequently a single person.[7] The xkcd comic was not satire. It was a survey of the field.
The worm
If the XZ Utils backdoor demonstrated that a patient human operator could infiltrate the dependency chain, what happened in September 2025 demonstrated something worse: the dependency chain could be made to attack itself.
Security researchers later documented Shai-Hulud, a worm-like npm supply chain campaign that turned compromised packages into propagation engines. When one of the poisoned packages landed in a CI environment with npm publishing tokens available, the malware used those tokens to publish malicious versions of other packages the victim maintained. Each newly infected package could then repeat the process with whoever installed it next.[8][9]
The real target was not any individual package or organisation. It was the trust relationship between CI/CD pipelines and package registries — the automated infrastructure that modern software delivery depends on. The worm harvested credentials from local filesystems and cloud environments — AWS access keys, GCP service account tokens, Azure credentials, SSH private keys, and Kubernetes configs — and exfiltrated them to attacker-controlled infrastructure. But the credential theft was almost secondary. The primary payload was propagation: each infected package became an autonomous agent that tried to publish poisoned versions of every other package its victim maintained, expanding the blast radius with every installation.[8][10]
The September 2025 campaign affected more than 500 packages before CISA issued an advisory.[9] A follow-on campaign in November 2025, commonly called Shai-Hulud 2.0, later reached 796 packages and affected maintainer accounts tied to widely used projects including Zapier, PostHog and Postman.[10]
Had the worm propagated unchecked, the consequences would have been exponential rather than linear. Every additional compromised maintainer account turned its package portfolio into another cluster of infection paths. The danger was not just one poisoned dependency. It was recursive propagation through the trust relationships that package managers and CI systems treat as normal.[8][10]
The bitter irony is that the organisations most vulnerable were the ones that had embedded publish-capable npm tokens in their CI environments. Once install scripts could reach those credentials without a human in the loop, the same automation that speeds software delivery also widened the propagation path.[8][10]
The incident surfaced through community investigation into unusual release patterns and only later through formal alerts.[8][9] That is the uncomfortable part: once a valid publishing token is in play, routine automation becomes a delivery channel.
The evolution is worth stating plainly. The XZ campaign unfolded over more than two years as Jia Tan accumulated trust with a maintainer.[2][1] Shai-Hulud did not require comparable relationship-building once it was seeded; it propagated through the dependency graph on its own.[8][10] The industry had spent decades worrying about attackers breaking into systems. Now the systems were breaking into each other.
The two-hour window
On 31 March 2026, the Axios npm package was compromised. Axios is a ubiquitous HTTP client library used at enormous scale across the JavaScript ecosystem.[11] If you have written JavaScript that talks to an API in the past decade, there is a reasonable chance Axios is somewhere in your dependency tree.
An attacker gained access to the npm publishing credentials of the lead maintainer account and published two backdoored releases: axios version 1.14.1 tagged as latest, and version 0.30.4 tagged as legacy. Both contained a hidden dependency on a package called plain-crypto-js, which had been published to npm roughly eighteen hours earlier — a clean version first, to give it registry history, then a malicious version shortly after. The dependency was never imported anywhere in the Axios source code. Its sole purpose was to execute a postinstall script.[11]
The real target was the developer's machine — and through it, everything that machine had access to. The payload was a cross-platform remote access trojan dropper targeting macOS, Windows, and Linux. It detected the operating system and launched platform-specific downloaders — AppleScript on macOS, a disguised PowerShell copy on Windows, a shell script on Linux — all contacting a command-and-control server. A RAT on a developer's workstation is not just a compromised laptop. It is access to source code repositories, cloud console sessions, SSH keys, database credentials, internal wikis, Slack channels, and every other system the developer can reach. In many organisations, a senior developer's machine is effectively a skeleton key to the entire technology stack. After execution, the malware deleted itself and replaced its own package.json with a clean stub, completing a self-cleanup cycle designed to evade forensic detection.[11][12]
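The delivery mechanism — an install-time lifecycle script declared by a transitive dependency — is something a team can audit before it runs. A minimal sketch of such an audit, assuming the standard npm layout of one package.json per directory under node_modules; the script names are standard npm lifecycle hooks, but the helper itself is illustrative, not a hardened tool:

```python
import json
from pathlib import Path

# npm lifecycle hooks that execute arbitrary code at install time.
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def packages_with_install_scripts(node_modules: Path) -> dict[str, list[str]]:
    """Map package name -> the install-time lifecycle scripts it declares."""
    flagged = {}
    for manifest in node_modules.rglob("package.json"):
        try:
            meta = json.loads(manifest.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        hooks = sorted(INSTALL_HOOKS & set(meta.get("scripts", {})))
        if hooks:
            flagged[meta.get("name", str(manifest.parent))] = hooks
    return flagged
```

Running npm install with the --ignore-scripts flag disables these hooks outright, at the cost of breaking the minority of packages that genuinely need them; auditing which packages declare them is the gentler first step.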
Had the compromise gone undetected for days rather than hours, the consequences would have cascaded far beyond the initial infection. CI/CD pipelines that ran npm install during the exposure window could have executed the RAT dropper — granting the attacker a path into build servers, deployment credentials, cloud accounts and internal networks. From there, the next steps could have been backdoored releases, source-code theft or lateral movement into production infrastructure.[11][12]
StepSecurity detected the compromise within hours, and the malicious releases were removed after an attack window of roughly three hours.[11][12] The response was impressively fast, but speed of detection is a defence that only works retroactively: automated pipelines that ran npm install during the live window had already executed the payload before the alarm was raised. The organisations that were compromised did nothing wrong. They installed a package they had been using for years, from the same registry they always used, at the same version tag they always trusted. For most downstream users, there was no practical pre-install signal that the release was malicious.[11][12]
The attackers did not target Axios because they cared about HTTP requests. They targeted Axios because it sits deep in countless dependency trees. The target was never the library. The target was everything downstream.[11]
This is the fundamental shift that most security thinking has not yet absorbed. Traditional security assumes the attacker is trying to break into your system — your servers, your credentials, your code. Supply chain attacks invert this completely. The attacker targets a package that your software happens to depend on, along with millions of other systems. Your application is not the target. It is collateral damage in an attack aimed at the dependency graph itself. You were never specifically in the crosshairs, and that is precisely why your defences did not help — because they were designed for a world where attacks are directed at you.
The catalogue
The idea that you cannot fully trust code you did not write is not new. In 1984, Ken Thompson delivered his Turing Award lecture 'Reflections on Trusting Trust,' in which he demonstrated that a compiler could be modified to insert a backdoor into any program it compiled — including future versions of itself — with no trace in the source code. The implication was stark: you cannot verify software by reading its source unless you also verified the compiler that built it, and the compiler that built that compiler, recursively. The dependency chain problem predates the modern package ecosystem by decades.[13]
What has changed is scale. The event-stream compromise in 2018 was the template: a disengaged maintainer, a friendly new contributor, a targeted payload hidden behind a dependency.[6] But the blast radius was constrained because the malicious code was aimed at a specific cryptocurrency wallet application. Everything that followed expanded the ambition.
In 2021, the Codecov bash uploader was modified to exfiltrate CI environment variables for months, silently harvesting secrets from customer build environments.[14] That same year, the ua-parser-js npm package — a library used at enormous scale — was hijacked after an attacker buried password-reset notifications in an email flood, then published versions containing a cryptominer and the DanaBot banking trojan.[15]
Log4Shell in December 2021 was not a deliberate supply chain attack, but it exposed the same structural vulnerability from a different angle. A single deeply embedded logging dependency suddenly became a global emergency, forcing organisations to inspect dependency trees they often did not understand well enough.[16]
In January 2022, the maintainer of colors.js and faker.js deliberately sabotaged both libraries in protest against corporations profiting from his unpaid work. He added an infinite loop to colors.js that printed gibberish endlessly and purged all functional code from faker.js. Thousands of applications broke immediately. The industry's response was to suspend his GitHub account.[17] Two years later, in 2024, a Chinese company acquired the polyfill.io domain and began serving malicious JavaScript to more than 100,000 websites.[18]
I have written previously about the maintenance burden of dependency chains — the version conflicts that consume days of engineering time, the upgrade treadmill that never stops, the silent accumulation of technical debt as abandoned packages calcify in your dependency tree. That article concerned the accidental cost of dependencies: the friction, the complexity, the weight. What the catalogue above demonstrates is that those same structural properties — the ones that make dependency management tedious — are also the ones that make the dependency chain the most efficient attack surface in modern software.
The transitive depth that creates version conflicts also creates invisible trust relationships with maintainers you have never evaluated. The package registries that make installation effortless also make malicious distribution effortless. The automated CI/CD pipelines that accelerate delivery also execute arbitrary code from those registries without human review. The culture of micro-dependencies that produces deep dependency trees for simple applications also produces layer after layer of potential compromise, each maintained by someone whose identity, operational security, and susceptibility to social engineering you know nothing about. Every property of the modern dependency ecosystem that creates maintenance burden simultaneously creates attack surface. The difference is that maintenance burden costs you time. A supply chain attack costs you everything.
The defence paradigm that no longer works
For most of the history of software security, the operating assumption has been straightforward: you defend your perimeter. You audit your code. You pen-test your servers. You monitor your network traffic. You hire security engineers who understand your systems. The threat model assumes that an attacker is trying to get into your environment through your front door — an exposed API, a misconfigured server, a phishing email aimed at your employees. Your security budget buys firewalls, intrusion detection, vulnerability scanners, and penetration tests — all of which assume that the threat is outside and trying to get in.
Supply chain attacks demolish this assumption. The threat does not enter through your door. It arrives as a dependency you explicitly chose to trust, written by people you never evaluated, often maintained by volunteers or tiny teams whose operational security you have no visibility into. Your firewall is irrelevant because the threat does not traverse your network boundary. Your code review process is irrelevant because the malicious code is in a library your reviewers treat as trusted third-party infrastructure. Your penetration testing is irrelevant because the pen testers are testing your application, not the packages it was built on. The threat was already inside your perimeter before you wrote your first line of code, shipped as part of the foundation you built on.
This is not a refinement of the old threat model. It is a fundamentally different category of risk, and it requires a fundamentally different approach to defence. The old question — 'is my code secure?' — is necessary but no longer sufficient. The question that matters now is 'are the packages I have never read secure?' And the honest answer, for most organisations, is: I do not know, and I have no reliable way to find out.
The statistics make the current state painfully visible. Sonatype's reporting has shown that organisations keep consuming vulnerable components even when fixed versions already exist.19 The problem is not that fixes are unavailable. The problem is that organisations do not know what they depend on, do not monitor what they depend on, and do not update what they depend on. The dependency chain is not undefendable. It is undefended.
What can actually be done
The defence paradigm must shift from 'protect what I own' to 'verify what I consume.' This is a harder problem than perimeter security — substantially harder — but it is not an impossible one. The shift requires action at every level: individual developers, project teams, organisations, and the industry as a whole. And it requires honesty about which measures reduce risk meaningfully and which provide the illusion of security without the substance.
At the project level, the most impactful change is also the simplest: know your dependency tree. Not the packages you directly import — the full transitive graph, every level deep. Most teams cannot answer the question 'what packages does our application depend on?' without running a tool they have never run before. That ignorance is the foundational vulnerability. You cannot defend what you cannot enumerate, and you cannot enumerate what you have never examined. Run npm ls --all, pip list, mvn dependency:tree, or whatever your ecosystem equivalent is, and confront the reality of what you have built on. The number will be larger than you expect. Many of the packages will be unfamiliar. Some will have been abandoned. That is the surface you need to defend.
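For npm projects, the lockfile already records the full transitive graph, so enumeration can be a few lines of scripting rather than a new tool. A sketch that reads an npm v2/v3 package-lock.json, whose 'packages' map keys encode the install path of every resolved dependency; the key structure follows npm's documented lockfile format, but treat the helper itself as illustrative:

```python
import json
from pathlib import Path

def enumerate_lockfile(lockfile: Path) -> set[str]:
    """Return every package name recorded in an npm v2/v3 package-lock.json.

    Keys in the 'packages' map look like 'node_modules/foo' or
    'node_modules/foo/node_modules/bar'; the package name is whatever follows
    the last 'node_modules/'. The empty key is the root project itself.
    """
    data = json.loads(lockfile.read_text())
    names = set()
    for path in data.get("packages", {}):
        if not path:
            continue  # skip the root project entry
        names.add(path.rsplit("node_modules/", 1)[-1])
    return names
```

Printing the size of that set for a typical web application is a sobering exercise; each name in it is a maintainer relationship you have implicitly entered into.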
Pin your dependency versions. Lockfiles exist for a reason, and that reason became viscerally clear when the Axios compromise published a malicious version tagged as latest. If your CI/CD pipeline resolves dependencies at install time against floating version ranges, you are trusting that every package in your tree will be safe at the exact moment your pipeline runs. That trust was violated within hours on 31 March 2026.[11] Pinning versions does not prevent supply chain attacks, but it gives you control over when you adopt new versions — and that control gives you time to verify before you consume.
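The mechanics of the exposure are easy to state precisely. A floating caret range like ^1.14.0 matches any later 1.x release, including one published minutes ago by an attacker, while a lockfile records one exact resolved version. A simplified matcher makes the point; this sketch ignores prerelease tags and the special caret semantics for 0.x versions, so it illustrates the rule rather than implementing full semver:

```python
def caret_matches(range_base: str, candidate: str) -> bool:
    """Does `candidate` satisfy the caret range ^range_base?

    Simplified semver: plain x.y.z versions, major > 0, no prerelease tags.
    Caret means 'same major version, at or above the base'.
    """
    base = tuple(int(part) for part in range_base.split("."))
    cand = tuple(int(part) for part in candidate.split("."))
    return cand[0] == base[0] and cand >= base
```

Under this rule, a pipeline resolving ^1.14.0 at install time would have accepted the malicious 1.14.1 automatically. A pipeline installing from a lockfile would not have moved until a human chose to update it.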
Treat dependency updates as security operations, not maintenance chores. When a dependency publishes a new version, the question is not just 'does this break our tests?' The question is 'what changed, who changed it, and why?' Review changelogs. Check whether maintainership has changed. Look at the diff if the package is small enough. For critical dependencies — the ones that handle authentication, cryptography, network communication, or build tooling — this review should be as rigorous as a code review of your own security-sensitive code. It is slower. It is more expensive. It is also the only way to catch a compromised release before it reaches production.
Audit your CI/CD pipeline's access to publishing credentials. The Shai-Hulud worm propagated specifically because publish-capable npm tokens were reachable from CI environments executing third-party install scripts.[8][10] Segregate publishing credentials from build environments. Use short-lived tokens with minimal scope. Require multi-factor authentication for package publishing. If your publishing workflow allows a postinstall script from any transitive dependency to read your npm token and publish new versions of your packages, your entire portfolio of published packages is one compromised dependency away from becoming an attack vector for everyone who depends on you.
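Auditing token reachability can start with something as blunt as checking the places an install script would look first. A sketch, assuming the usual conventions: an _authToken line in .npmrc, or tokens exposed through environment variables such as NPM_TOKEN or NODE_AUTH_TOKEN. The variable names are common CI conventions, not an exhaustive list, and a real audit would also cover cloud credential files:

```python
import re
from pathlib import Path

# An .npmrc auth line looks like //registry.npmjs.org/:_authToken=...
TOKEN_LINE = re.compile(r"_authToken\s*=\s*\S+")
TOKEN_ENV_VARS = ("NPM_TOKEN", "NODE_AUTH_TOKEN")  # common CI conventions

def find_reachable_npm_tokens(home: Path, env: dict[str, str]) -> list[str]:
    """List places an install script could read an npm auth token from."""
    findings = []
    npmrc = home / ".npmrc"
    if npmrc.is_file() and TOKEN_LINE.search(npmrc.read_text()):
        findings.append(f"auth token in {npmrc}")
    for var in TOKEN_ENV_VARS:
        if env.get(var):
            findings.append(f"auth token in environment variable {var}")
    return findings
```

If this returns anything in the environment where npm install runs, a malicious postinstall script can read the same token.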
Vendor critical dependencies when the trust chain is too long or too opaque to verify. Vendoring — copying a specific version of a dependency directly into your repository — eliminates the registry as an attack vector for that package. You lose automatic updates, which means you accept the burden of manual maintenance. But for dependencies that sit on critical paths — compression libraries, authentication modules, cryptographic primitives — the control is worth the cost. The XZ Utils backdoor existed only in release tarballs, not in the git source. An organisation that vendored liblzma from the git repository rather than the tarball would not have been affected.[4]
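Vendoring pairs naturally with recording a digest at the moment you vendor, so that any later drift in the artifact is detectable. A minimal sketch, with a hypothetical artifact name for illustration:

```python
import hashlib
from pathlib import Path

def verify_vendored(path: Path, expected_sha256: str) -> bool:
    """Compare a vendored artifact against the digest recorded when it was
    vendored. Any drift -- a re-uploaded tarball, a tampered file -- fails."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256
```

A recorded digest turns "the upstream artifact quietly changed" into a hard build failure, which is exactly the property a poisoned release tarball is designed to avoid triggering.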
Monitor for behavioural anomalies, not just known CVEs. Signature-based detection is necessary but structurally inadequate against novel attacks. The signal that caught XZ Utils was a 500-millisecond latency increase. The signal that caught Axios was unusual release activity detected by StepSecurity's monitoring.[5][11] Build the organisational muscle to notice when something feels wrong: unexpected CPU usage after a dependency update, new network connections from a build server, a package that has been stable for two years suddenly adding a new dependency. These anomalies may have innocent explanations. They may not. The practice of investigating them is itself a defence.
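The XZ signal was statistical before it was forensic: a latency distribution shifted. Even a crude baseline check captures the idea. This is a sketch only; production monitoring would use rolling windows and percentiles rather than a z-score over a static baseline:

```python
from statistics import mean, stdev

def is_latency_anomaly(baseline_ms: list[float], observed_ms: float,
                       sigmas: float = 4.0) -> bool:
    """Flag an observation that sits far outside the baseline distribution."""
    mu, sd = mean(baseline_ms), stdev(baseline_ms)
    # Floor sd so a near-constant baseline does not flag normal jitter.
    return abs(observed_ms - mu) > sigmas * max(sd, 1.0)
```

Against a baseline of SSH logins clustered around 100 ms, a 500 ms regression of the kind Freund noticed stands out immediately; the hard part is not the arithmetic but having a baseline and looking at it.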
At the organisational level, institutional responses are emerging. Executive Order 14028, published in the Federal Register on 17 May 2021, set in motion US federal SBOM and software supply chain requirements for government procurement.[20] The European Union's Cyber Resilience Act entered into force on 10 December 2024 and applies in stages, with the main regime applying from 11 December 2027.[21] The OpenSSF has produced Scorecard, SLSA and Sigstore to help teams assess project hygiene, build provenance and release signing.[22] These are the right direction — you cannot defend against what you cannot enumerate, and SBOMs at least force the enumeration. But the limitations should be stated honestly. An SBOM is a list, not a defence. Provenance and signatures are useful, but they do not automatically save you from a compromised maintainer operating inside the legitimate release path. Compliance is not security. It is the beginning of a conversation about security.
And then there is the deepest problem — the one that no framework, scanner, or regulation can solve on its own. The economics of open source still leave critical maintenance work underfunded. Survey data and follow-on reporting continue to show that many maintainers are unpaid, under pressure, and considering leaving their projects altogether.[23][24] The cheapest long-term security investment the software industry could make is still the least glamorous one: pay the people who maintain the code you rely on. If the maintainers of critical packages are exhausted, isolated or financially unsupported, that becomes part of your threat model too.[24]
This is not an abstract policy observation. The XZ incident showed how social pressure can accumulate around an overburdened maintainer. The event-stream compromise showed how casually abandoned projects can change hands. The larger lesson is that maintainer health and maintainer security are not separate issues; they are the same problem viewed from different angles.[1][6][24]
Andres Freund noticed that SSH connections were taking half a second longer than they should have. Not a vulnerability scanner. Not a software bill of materials. Not a regulatory framework. A single engineer, running a benchmark on a pre-release system, whose instinct told him that something felt wrong. The entire apparatus still depended, in the decisive moment, on a curious human noticing that something was wrong.[5]
That should concern everyone who builds software. Your application does not need to be specifically targeted to be compromised. It shares a dependency graph with every other application that installs the same packages. An attacker who poisons a library deep in your transitive dependency tree has poisoned you, and they did not need to know you existed to do it. You are not the target. You are the blast radius.
The dependency chain does not care who you are. It does not care what you build or how carefully you write your own code. It only requires that you exist within the graph — and modern software now depends on vast numbers of open source components maintained by surprisingly small groups.[7][19] The defences that worked when attacks were directed at specific targets do not work when the attack is directed at the infrastructure itself. The paradigm must shift. The shift requires better tooling, better monitoring, better institutional frameworks — and a fundamental change in how the industry values and funds the maintainers it depends on. The technical solutions exist. The economic will to deploy them does not. That gap is the vulnerability, and it is widening.
Five hundred milliseconds. That was the margin. Next time, there may not be an Andres Freund.
Footnotes
1. Collin, L., Ens, D., Kumar, J., and others. (2022). 'XZ for Java' thread, xz-devel mailing list, May-June 2022.
2. Akamai. (2024). 'XZ Utils Backdoor: Everything You Need to Know, and What You Can Do.' Akamai Blog.
3. Datadog Security Labs. (2024). 'The XZ Utils Backdoor (CVE-2024-3094): Everything You Need to Know, and More.' Datadog Security Labs.
4. JFrog. (2024). 'XZ Backdoor CVE-2024-3094: All You Need to Know.' JFrog Blog.
5. Freund, A. (2024). 'backdoor in upstream xz/liblzma leading to ssh server compromise.' oss-security mailing list, Openwall.
6. Snyk. (2018). 'A Post-Mortem of the Malicious event-stream Backdoor.' Snyk Blog. See also npm Blog Archive (2018), 'Details about the event-stream incident.'
7. Linux Foundation and Harvard LISH. (2024). 'Census III of Free and Open Source Software.' Linux Foundation.
8. Datadog Security Labs. (2025). 'A Runtime Security Approach to Detecting Supply Chain Attacks.' Datadog Security Labs.
9. CISA. (2025). 'Widespread Supply Chain Compromise Impacting npm Ecosystem.' CISA.
10. Microsoft Security Blog. (2025). 'Shai-Hulud 2.0: Guidance for detecting, investigating, and defending against the supply chain attack.' Microsoft.
11. StepSecurity. (2026). 'axios Compromised on npm: Malicious Versions Drop Remote Access Trojan.' StepSecurity Blog.
12. Datadog Security Labs. (2026). 'Compromised axios npm package delivers cross-platform RAT.' Datadog Security Labs.
13. Thompson, K. (1984). 'Reflections on Trusting Trust.' Communications of the ACM, 27(8), 761-763.
14. Codecov. (2021). 'Post-Mortem / Root Cause Analysis (April 2021).' Codecov.
15. Sonatype. (2021). 'Popular npm Project Used by Millions Hijacked in Supply-Chain Attack.' Sonatype Blog.
16. CISA. (2021). 'Mitigating Log4Shell and Other Log4j-Related Vulnerabilities.' CISA.
17. BleepingComputer. (2022). 'Dev corrupts npm libs "colors" and "faker," breaking thousands of apps.' BleepingComputer.
18. Sansec. (2024). 'Polyfill.io Supply Chain Attack.' Sansec.
19. Sonatype. (2024). '2024 State of the Software Supply Chain: Optimization.' Sonatype.
20. Executive Office of the President. (2021). 'Improving the Nation's Cybersecurity.' Federal Register.
21. European Union. (2024). 'Regulation (EU) 2024/2847 (Cyber Resilience Act).' Official Journal of the European Union.
22. OpenSSF. (2024). 'Scorecard,' 'SLSA,' and 'Sigstore.' Open Source Security Foundation.
23. Tidelift. (2021). 'The 2021 Open Source Maintainer Survey.' Tidelift.
24. Socket. (2024). 'The Unpaid Backbone of Open Source: Solo Maintainers Face Increasing Security Demands.' Socket Blog.