As some of the dust around the xz backdoor slowly starts to settle, we’ve been getting a pretty clear picture of what, exactly, happened, and it’s not pretty. This is the story of the sole maintainer of a crucial building block of the open source stack dealing with mental health issues, which at least partly contributed to a waning interest in maintaining xz. A coordinated campaign – consensus seems to point to a state actor – then gets underway to infiltrate xz, with the goal of inserting a backdoor into the project.
Evan Boehs has done the legwork of diving into the mailing lists and commit logs of various projects and the people involved, and it almost reads like the nerd version of a spy novel. It involves seemingly fake users and accounts aggressively pressuring the original xz maintainer to add a second maintainer; a second maintainer who mysteriously appears at around the same time, like a saviour. This second maintainer manages to gain the original maintainer’s trust, and within months, this mysterious newcomer more or less takes over as the new maintainer.
As the new maintainer, this person starts adding the malicious code in question. Sockpuppet accounts show up to add code to oss-fuzz to try and make sure the backdoor won’t be detected. Once all the code is in place for the backdoor to function, more fake accounts show up to push for the compromised versions of xz to be included in Debian, Red Hat, Ubuntu, and possibly others. Roughly at this point, the backdoor is discovered entirely by chance because Andres Freund noticed his SSH logins felt a fraction of a second slower, and he wanted to know why.
What seems to have happened here is a bad actor – again, most likely a state actor – finding and targeting a vulnerable maintainer and, through clever social engineering on both the personal and the project level, gaining control over a crucial but unexciting building block of the open source stack. Once enough control and trust had been gained, the bad actor added a backdoor to do… Well, something. It seems nobody really knows yet what the ultimate goal was, but we can all make some educated guesses, and none of them are any good.
When we think of vulnerabilities in computer software, we tend to focus on bugs and mistakes that unintentionally create the conditions wherein someone with malicious intent can do, well, malicious things. We don’t often consider the possibility of maintainers being malicious, secretly adding backdoors for all kinds of nefarious purposes. The problem the xz backdoor highlights is that while we have quite a few ways to prevent, discover, mitigate, and fix unintentional security holes, we seem to have pretty much nothing in place to prevent intentional backdoors placed by trusted maintainers.
And this is a real problem. There are so many utterly crucial but deeply boring building blocks all over the open source stack pretty much the entire computing world makes use of that it has become a meme, spearheaded by xkcd’s classic comic. The weakness in many of these types of projects is not the code, but the people maintaining that code, most likely through no fault of their own. There are so many things life can throw at you that would make you susceptible to social engineering – money problems, health problems, mental health issues, burnout, relationship problems, god knows what else – and the open source community has nothing in place to help maintainers of obscure but crucial pieces of infrastructure deal with problems like these.
That’s why I’m suggesting the idea of setting up a foundation – or whatever legal entity makes sense – dedicated to helping maintainers who face the kinds of problems the maintainer of xz faced. A place where a maintainer dealing with problems outside of the code repository can turn to for help, advice, maybe even financial and health assistance if needed. Even if all this foundation offers someone is a person to talk to in confidence, it might mean the difference between burning out completely and recovering at least enough to then find other ways to improve one’s situation.
If someone is burnt out or in the middle of a mental health crisis, they could contact the foundation, tell their story, and say: hey, I need a few months to recover and deal with my problems, can we put out a call among already trusted members of the open source community to step in for me for a while? Keep the ship steady as she goes until I get back, or until we find someone to take over permanently? This way, the wider community would also know the regular, trusted maintainer is stepping down for a while, and that any new commits should be treated with extra care. That solves the problem of some unknown maintainer of an obscure but important package suffering in obscurity, with the only hints buried in a low-volume mailing list well after something goes wrong.
The financial responsibility for such a safety net should undoubtedly be borne by the long list of ultra-rich megacorporations who profit off the backs of these people toiling away in obscurity. The cost of something like this would be pocket change to the likes of Google, Apple, IBM, Microsoft, and so on, but it could make a contribution to open source far greater than any code dump. Governments could probably be involved too, but that would most likely open up a whole can of worms, so I’m not sure it would be a good idea.
I’m not proposing this be some sort of glorified ATM where people can go to get some free money whenever they feel like it. The goal should be to help people who form crucial cogs in the delicate machinery of computing to live healthy, sustainable lives so their code and contributions to the community don’t get compromised. This means not just doling out free money, but also helping people connect to the therapists, doctors, debt restructuring experts and whatever other specialists we all sometimes need in our lives to help us get back on track.
I’m not going to pretend to know how something like this should be set up, organised, or run – this article and suggestion are borne more out of frustration with how we’re leaving these crucial, hard-working people hung out to dry to fend for themselves – but it’s obvious the industry needs to do something. If we don’t, we’re going to be seeing a lot more orchestrated, sophisticated attacks like the one xz just experienced.
Open source is more than just code, and it’s about damn time we acknowledge that.
Thom Holwerda,
This is a very interesting story and I appreciate the deep dive into it. But is there evidence it is a state actor? Every post that mentions this seems to assume it without evidence as far as I can tell. It seems questionable to me that a state actor would go to such lengths to get an insider in place only to fail due to a shoddy exploit that was detected by someone who wasn’t even looking for it. That’s an amateur job, am I just overestimating the capabilities of state actors?
I agree, it’s hard to detect someone’s true intent, especially if they do a “good” job.
Helping out humans with human problems, how thoughtful 🙂
Seriously, that’s something we as a society need to work on. Sadly, though, this is rather taboo (not sure about there, but at least here in the US). Unfortunately, you might have more trouble finding employment if you were to admit to a mental health crisis and potential employers got wind of it.
No kidding. There’s no end to their greed. They don’t give a crap about exploiting society as long as they can profit. Meanwhile their taxes are lower than everyone else’s and the federal deficit keeps growing despite a robust economy because they’re not paying their share. Our social services are going bankrupt and it’s 100% their fault. Meanwhile they’ve gaslit enough of society to keep voting regressively.
I honestly think this type of insider espionage is more common than we realize, not just in open projects but commercial ones too. Corporations and governments have the means to embed spies that essentially have two jobs, one open, and one covert. Although the context here is backdoors, the risk is actually larger than that since insiders can leak secret crypto keys, which are at the heart of all our infrastructure. For example I think it’s extremely likely that state actors have gained access to private CA certificate keys used to secure websites (thereby enabling MITM attacks) and code signing certificates.
“I honestly think this type of insider espionage is more common than we realize, not just in open projects but commercial ones too.”
Well, no. This is just refusing to admit open source was less secure in this case. Commercial requires real names, and has real coworkers reviewing the code. This particular attack would not be very viable in a closed source shop as the more experienced coworkers would likely see through the attack immediately, and then espionage charges would stick to real names.
dark2,
Well, no. You just assume all of that happens, even though it often does not. Most of my work is in a commercial setting. What you typically have is islands of code, sometimes maintained by small teams or even individuals. Unless a company goes out of its way to conduct regular audits, you’ve got the exact same issue.
That’s egotistical bullshit. FOSS developers are experienced. These same problems are pervasive in the corporate world, and the truth is that tons of vulnerabilities are found in proprietary code too. Some of the worst practices I’ve seen exist in commercial spaghetti code, much of which gets written by employees who come and go; eventually they’re long gone and nobody can really vouch for any of it. I think it might be worse on average, because the code is written without the expectation that anyone outside the company will ever see it or criticize it.
I already mentioned this earlier, but I believe actual spies working as legitimate employees would be extremely difficult to detect because 1) everyone makes mistakes, 2) exploits can be masked as mistakes, 3) privileged security information can be gathered without actually making changes.
>real coworkers reviewing code.
LOL no, if something like this were deployed at any of the companies I have ever worked with, it probably would never be found, especially since it’s hidden from git and only exists in the tarball.
Microsoft and Apple don’t hire idiots; shady stuff isn’t going to pass like it did at your old jobs. Lots of people with PhDs and real-world experience to fool somehow.
dark2,
There are experienced people working in FOSS and there are inexperienced grads working at these companies right out of college. Regardless, even those with experience get overloaded by work demands, make mistakes, can misplace trust in coworkers, etc. Large companies still get breached or “allow” exploits to get into their code, hell even the NSA.
https://apnews.com/article/att-data-breach-dark-web-passcodes-fbef4afe0c1deec9ffb470f2ec134f41
https://www.cbc.ca/news/business/apple-security-flaw-full-control-1.6556039
https://firewalltimes.com/google-data-breach-timeline/
https://www.microsoft.com/en-us/security/blog/2023/07/14/analysis-of-storm-0558-techniques-for-unauthorized-email-access/
These slip-ups happen everywhere, in FOSS and proprietary projects alike.
I’d like to quote cpcf: “The vulnerability starts with human hubris.”
You can’t see what happens in commercial software development, so it’s easy to assume it all goes perfectly. That’s the corporate line, after all – “we are efficient”, blah blah. I do work in commercial software, and I can tell you, there are FEWER eyeballs checking anything at any given time, and most code is entered just to get it done before launch or lunch. If you’ve ever wondered why so much software sucks so much more today than it did 10 or 20 years ago, now you know. Nothing is a passion project in commercial software, and there’s not enough time or budget to validate a damn thing.
This is the rosiest of rose-tinted takes.
Lol no. I once worked at a supermarket chain, on the cash register software. I built in a feature so that when my wife scanned her customer card, a message saying “I love you darling” was printed on the receipt. I could’ve easily made it give her a 50% discount if I’d wanted to. Nobody understood that code but me. We didn’t have code reviews, and even if we did, nobody would’ve noticed that code was new (we also had little to no version control). You clearly have no idea how corporate IT works (and this was code maintained on-site; with all the off-shoring to India, who knows what happens?).
“This is a very interesting story and I appreciate the deep dive into it. But is there evidence it is a state actor? Every post that mentions this seems to assume it without evidence as far as I can tell. It seems questionable to me that a state actor would go to such lengths to get an insider in place only to fail due to a shoddy exploit that was detected by someone who wasn’t even looking for it. That’s an amateur job, am I just overestimating the capabilities of state actors?”
I completely disagree with you. This certainly wasn’t an amateur job. The complexity of the installer is something that not many people can pull off. Also, the fact that detection happened by pure chance after so much time “in the wild” demonstrates that it was a very professional job: the ssh daemon was just slightly slower (less than half a second) than usual. That would catch nearly nobody’s attention, and the majority of those who noticed it would probably have dismissed it as just “some change that added extra paths” or similar, or, after trying to debug it, would have given up due to the complexity of the task (IIRC, GDB couldn’t be used to detect it; only the valgrind problems pointed to “something odd”). The good luck was that the one who noticed it was determined to find out what was happening.
rastersoft,
I understand that detection can happen by pure chance, but it was not exactly “pure chance” that the code had side effects calling attention to it. So much so that a developer who had no relation to the project got motivated to investigate further.
A pro wouldn’t (or shouldn’t) have left this to “luck”. They had the project to themselves and nobody was looking! The infiltration succeeded, but unforced errors and poor execution of the hack blew their cover. Not only was the backdoor discovered, not only was it obviously a backdoor, but the suspicion has put everyone on alert and now there’s a “manhunt”. Until there’s more evidence, it’s all speculation. The hack itself doesn’t convince me it’s a state agency. If it is a state agency, then honestly I’d be forced to lower my estimate of the lower bound on the proficiency of state actors.
“State actor” fits the narrative right now, given the dedication and commitment of resources, as well as assumptions about who benefits from such an act.
That being said, there is absolutely no proof of anything besides the made-up names in the commit logs. Additionally, whoever is responsible may be leveraging pre-existing biases, letting us assume who the actor is in order to hide their true identity.
There’s this thing on Mastodon where people are analyzing timestamps and commits going back a couple of years, and anyway, all we know is that it’s someone, or a group, who is 1) dedicated to implementing something nefarious over several years and 2) confuses Mandarin and Cantonese characters.
I’m not a big fan of the idea that one isolated incident represents a big trend that needs a foundation and lots of big tech money to cure all of the ills it supposedly represents. First of all, that big tech money comes with lots of strings attached and lots of undesirable agendas. Secondly, there are always projects that are becoming older and less maintained or less used, and that are being replaced by newer projects. In this case, there are newer compression algorithms that should be considered. If we set up a foundation to artificially lengthen the life of certain projects, that will create its own issues. Some projects need to die a natural death. In fact, I would argue that all software projects at some point need to die a natural death (or be retired or replaced), or otherwise innovation gets stifled.
As a lone maintainer for my own projects, I firmly believe that this sort of thing is why, now more than ever, we need nanoprocess isolation.
(Basically, using something like WebAssembly to individually sandbox each dependency within a project to make exploits much more difficult to pull off.)
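A minimal sketch of the idea, using the wasmtime Python bindings (the toy “dependency” here is a hypothetical WAT module, not a real library): a module instantiated with no imports can compute, but has no handle at all to the host.

```python
from wasmtime import Engine, Store, Module, Instance

# A stand-in "dependency" compiled to WebAssembly; a real project would
# compile the actual library to wasm instead of this toy add function.
wat = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, wat)

# Instantiated with no imports, the module cannot touch the filesystem,
# the network, or the host process: it can only compute.
instance = Instance(store, module, [])
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # 5
```

A compromised dependency in a sandbox like this could still return wrong answers, but it couldn’t hook into sshd the way the xz payload did.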
The very nature of OSS is that it’s dependent on volunteering.
We want all the code and functionality for free (in both senses). The foundation you suggest would open up the same problem: volunteers who have been open about their personal problems become targets for exactly this kind of exploitation.
Take this scenario: the maintainer of ssh-keygen goes to the foundation because they need a break due to personal issues, so this super-generous maintainer from xz steps up and offers their time. From the original maintainer’s perspective, they have developed a level of trust in the xz maintainer, allowing them to be more involved in the project moving forward.
Suddenly you’ve escalated access to key components, allowing more diffuse backdoor systems to be developed.
Even if this exploit turns out not to be the work of a state actor, I’d bet this is a tactic being actively deployed by any number of nation-states.
Adurbe,
I agree with this. Even if this is not a state actor, I find it likely that there are state actors who do this. Some percentage of the known vulnerabilities (SQL injection, remote code execution, etc.) that are discovered may have been planted there. Yet because of the ambiguity surrounding intentional vs accidental, we don’t know the extent to which state actors (and others) are involved.
https://www.exploit-db.com/
There are so many known bad practices in the wild that we need to put an end to: concatenating SQL values, failing to sanitize input, unsafe memory access, failing to escape parameters, failing to check for overflow, and so on. We’ve known about these for decades, and yet they continue to happen with regularity. We must do better, but every single generation seems to have some misplaced faith that humans can use unsafe languages/structures/APIs safely. The good news is that we can build safe languages and APIs; we just need to commit to using them and killing off unsafe code.
If/once we manage to do this, it will take away a lot of opportunities for nefarious entities to hide their exploits in plain sight.
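To make the first item on that list concrete, here’s a minimal sketch in Python using the standard sqlite3 module (the table and the input are hypothetical) of why parameterized queries kill off an entire class of exploit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Attacker-controlled input, crafted to rewrite a concatenated query.
name = "nobody' OR '1'='1"

# Unsafe: concatenating SQL values lets the input become part of the query.
# "SELECT role FROM users WHERE name = '" + name + "'" would match every row.

# Safe: a parameterized query treats the input purely as data.
row = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchone()
print(row)  # None -- the injection attempt matches nothing
```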
I need edit buttons back
Me too!
I hope this spurs distributions to add much more logging to their build processes to help catch this kind of thing. If every exec call and its arguments were logged during the build, this kind of thing would be easier to catch, but it would require a lot of specific tooling and manual labour, in addition to automated checks for suspicious patterns and changes in the build process.
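As a rough sketch of what that tooling could look like, assuming a Linux build host with strace installed (the build command here is hypothetical):

```python
import subprocess

# Wrap the build so every execve() in the whole process tree is logged
# with its full argument list, for later review or automated diffing.
build_cmd = ["make", "all"]  # hypothetical build entry point
subprocess.run(
    ["strace", "-f", "-e", "trace=execve", "-o", "build-exec.log", *build_cmd],
    check=True,
)
# build-exec.log now holds one line per spawned process; diffing it against
# the log from the previous release would flag new or changed exec calls.
```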
I wonder too if Linux will now be reconsidering the change/clarification they made to their submission rules last year requiring real names. It seems to me that people submitting code to something so critical to security ought to have a real-world identity that can be shaken down, potentially in a court of law. Doubly so since, as far as I can tell, the motivation for the change in the first place was to enable Hector Martin’s weird vtuber alter ego “Asahi Lina” to commit as a supposedly separate individual.
Jeeves,
FOSS projects already log all commits. The problem is distinguishing the good commits from the bad ones.
Who defines the test cases that identify suspicious patterns? Ironically, it was test cases themselves that contained the exploit here. One way or another, more auditing is needed, and it needs to be done by an independent team. Larger projects like the Linux kernel can designate individuals to sign off on patches; it’s unclear to me how vulnerable they are. But even so, for the thousands of small projects, including xz, comprised of just one or a couple of individuals: who’s going to do the audit work? Few developers actually want to do this. If you open it up to volunteers, we’re right back to square one.
It could be procedurally safer to assign developers to “audit duty” than allowing someone to volunteer, which makes it easier for nefarious entities to place themselves where they want to be. But I have my doubts that this would go over well, haha.
It’s normal for paid employees to go through a lot of paperwork, government IDs, employment contracts, etc. But for an unpaid FOSS project that exists outside of normal geopolitical boundaries, this might be off-putting. I am not sure if it’s a good idea or not. Honestly, if I had to submit my SSN and other government documents to contribute, I doubt I would do so.
It’s good that you are trying to come up with ideas, and I’m merely offering my 2c. I’m the first to admit that it’s easier to be a critic than to actually provide solutions. It’s obvious we need more auditing, but how that should work, especially without more resources, is unclear to me. Maybe AI offers a hail mary, but even that tends to amplify “garbage in, garbage out”.
I don’t see any model that can withstand this type of attack, I’ve witnessed analogous cases up close on projects of the highest possible security, and we all know multiple examples of the same littered through history.
The vulnerability starts with human hubris.
Fully agree.
Not sure about the human hubris part, but I do maintain more than a dozen open source projects myself – mostly as BDFL, meaning that while I am eager and happy to receive code contributions, I maintain ultimate control of and responsibility for all of them.
There have been times I felt I had stretched myself a bit thin, not because of health issues, but simply because I was busy on my day job, or taking a course, or even on holiday, right at the time when some user asked for extra features or bugfixes.
I am sure that if some helpful developer had popped up showing goodwill, availability and technical prowess, it would have been easy for him/her to gain my trust simply by being present for a few months, and be given co-maintainership a bit later.
After all trust and openness are core values of the OSS movement. Not having to deal with red tape and political games is a sizeable part of the reasons why open source is more fun than corporate development. Last but not least, after a few years in the game, it’s natural to start thinking about retirement and finding a successor in order to keep the project alive…
gggeek,
I don’t think the hubris part was referring to people like you who know their human limitations, but rather those who falsely believe something like this can’t happen to them.
The main problem is that when you’re exploiting suckers, you don’t check credentials or references or prior work or previous employers during the job interview process (or get their bank account number or tax file number or … when they’re hired); so the “standard practice security hurdles” that would have made it much, much harder for an unknown fake person to suddenly appear (and be trusted) do not exist with open source; and the “standard practice security hurdles” that would have made it much, much easier to punish them afterwards don’t exist with open source either.
The proposed foundation isn’t even attempting to solve this problem. It’s trying to solve a completely different “how can we improve the lives of the malicious imposters after we failed to detect they are malicious imposters” problem with vague “what if we make developers/imposters immortal so we can exploit them harder for longer” hand waving; which (if it doesn’t inevitably fail for financial reasons) will make the problem worse by providing additional incentives for attackers.
A state actor (or actors) can’t be stopped if the employing state is determined enough to create a breach. A state has the power to create credentials and a back story for fictitious identities. A state has a vast amount of resources. A state can afford to play the long game.
There are a myriad of small projects that have nearly no oversight (besides the maintainer) and which are used based on trust built up over years. The problem is that modern computing is becoming more and more integrated by necessity, and more and more links are forged between previously separate pieces. All of the small stuff that is now being linked to bigger, more important stuff is at risk of being an entry point.
I don’t see a simple solution to the problem. Linux is now big league and a valuable target. Even if a small project is not led by a worn-out maintainer, if it is open to contributions, a clever miscreant can slip in stuff that looks innocuous but is part of a breach being planned.
I just hope there are enough eyes and ears to stave off most of the attacks.