After observing a few odd symptoms around liblzma (part of the xz package) on Debian sid installations over the last weeks (logins with ssh taking a lot of CPU, valgrind errors) I figured out the answer:
The upstream xz repository and the xz tarballs have been backdoored.
At first I thought this was a compromise of debian’s package, but it turns out to be upstream.
↫ Andres Freund
I don’t normally report on security issues, but this is a big one, not just because of the severity of the issue itself, but also because of its origins: it was created by and added to upstream xz/liblzma by a regular contributor to said project, and makes it possible to bypass SSH authentication. It was discovered more or less by accident by Andres Freund.
I have not yet analyzed precisely what is being checked for in the injected code, to allow unauthorized access. Since this is running in a pre-authentication context, it seems likely to allow some form of access or other form of remote code execution.
↫ Andres Freund
The exploit was only added to the release tarballs, and is not present when taking the code off GitHub manually. Luckily for all of us, the exploit has only made its way into the bloodiest of bleeding edge distributions, such as Fedora Rawhide and Fedora 41, and Debian testing, unstable, and experimental, and as such has not spread widely just yet. Nobody seems to know quite yet what the ultimate intent of the exploit is.
Of note: the person who added the compromising code was recently added as a Linux kernel maintainer.
This isn’t related to the XZ flaw, but there was other recent security news that might affect the public more than SSH does…
https://arstechnica.com/security/2024/03/hackers-can-extract-secret-encryption-keys-from-apples-mac-chips/
After the Spectre vulnerabilities were published, I warned that things like this would keep reappearing. The techniques we use to accelerate superscalar processors generally leak timing information. To make timing attacks yield zero information, all branches in code that handles secrets need to perform the same as the worst case scenario, so that an operation is guaranteed to complete in constant time. Simply put, optimizations that speculate future branches from past branch execution are leaky and must be avoided in secure applications.
The failure to do this consistently is what leads to vulnerabilities.
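To make the constant-time point concrete, here is a minimal sketch in C (my own illustration, not code from xz, OpenSSH, or any particular library): the naive comparison returns as soon as it finds a mismatch, leaking how many leading bytes of the guess were correct, while the branch-free version does the same amount of work no matter what the inputs are.

```c
#include <stdio.h>
#include <string.h>

/* Naive comparison: returns at the first mismatch, so the running time
 * reveals how many leading bytes of the guess were correct. */
static int leaky_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time comparison: accumulate differences with XOR/OR and test the
 * result only once, so the timing does not depend on where (or whether) the
 * inputs differ -- every call costs the same as the worst case. */
static int constant_time_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}

int main(void)
{
    const unsigned char secret[] = "hunter2";   /* placeholder secret */
    const unsigned char guess[]  = "hunter3";   /* differs only in the last byte */
    size_t len = strlen((const char *)secret);

    printf("leaky_equal:         %d\n", leaky_equal(secret, guess, len));
    printf("constant_time_equal: %d\n", constant_time_equal(secret, guess, len));
    return 0;
}
```

In practice you would reach for a vetted primitive such as OpenSSL’s CRYPTO_memcmp rather than rolling your own, and even branch-free source can still be undermined by speculation in the hardware underneath it, which is exactly the point above.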
Ah. Thanks very much to Thom for clarifying this. Other news outlets and the original source itself didn’t make this clear, and I was getting worried about why no liblzma updates were appearing for Debian Stable.
rahim123,
Yes, thankfully it didn’t affect stable. It does highlight why distros that sit further back on updates are preferred for enterprise applications (only backporting critical patches).
I have not reviewed the code personally, but the article mentions that the xz code is not loaded directly by openssh; rather, it gets loaded via libsystemd, thanks to a patch designed to integrate openssh with systemd. It might even affect other daemons beyond sshd.
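If you want to see that dependency chain for yourself, below is a rough sketch in C (glibc-specific, and assuming the libsystemd.so.0 soname used on Debian and Fedora): nothing in it asks for liblzma, yet loading libsystemd can pull it into the process, which is the same indirect path the patched sshd exposes. Whether liblzma actually shows up depends on how your distro built libsystemd; newer versions load their compression libraries on demand.

```c
/* Rough sketch: dlopen() libsystemd and list which shared objects end up
 * mapped into the process, to illustrate the indirect
 * sshd -> libsystemd -> liblzma chain described above.
 * Build: cc -o libcheck libcheck.c (add -ldl on glibc older than 2.34). */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <link.h>
#include <stdio.h>
#include <string.h>

/* Print any loaded object whose path mentions systemd or lzma. */
static int show_loaded(struct dl_phdr_info *info, size_t size, void *data)
{
    (void)size;
    (void)data;
    if (info->dlpi_name &&
        (strstr(info->dlpi_name, "systemd") || strstr(info->dlpi_name, "lzma")))
        printf("loaded: %s\n", info->dlpi_name);
    return 0;                       /* 0 = keep iterating */
}

int main(void)
{
    /* Load libsystemd roughly the way a distro-patched sshd does. */
    void *handle = dlopen("libsystemd.so.0", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    dl_iterate_phdr(show_loaded, NULL);
    dlclose(handle);
    return 0;
}
```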
Incidentally, this is one of the gripes I’ve had with monolithic init systems like systemd creating unnecessary complexity and inter-dependencies rather than simpler standalone tools.
Something Poettering something not understanding and re-inventing poorly.
“Ah. Thanks very much to Thom for clarifying this. Other news outlets and the original source itself didn’t make this clear, ”
The problem is that he has been working on xz for over two years now.
I assume there will be a lot of code to re-review now.
smashIt,
Yeah, it’s hard to trust someone after uncovering such a deliberate backdoor, although it’s still technically possible he was innocent, his account and/or computer was compromised, and the real hacker is still unidentified. If that’s the case, it could happen to almost anyone. I wonder if they can prove guilt, and what happens from here? Regardless of whether it can be proven or not, his reputation might be damaged beyond repair.
Presumably the real hacker(s) who created it were going to use it at some point, so I wonder if there’s any evidence of the exploit being used? They might have been waiting for it to reach stable first.
There is an online interaction that strongly suggests that the maintainer was added to the project for nefarious purposes in the first place. See this comment:
https://arstechnica.com/security/2024/03/backdoor-found-in-widely-used-linux-utility-breaks-encrypted-ssh-connections/?comments=1&post=42712467
This post-mortem of the timeline is also really revealing:
https://boehs.org/node/everything-i-know-about-the-xz-backdoor
It’s interesting, maybe there’s something there, although it’s certainly not conclusive. The behavior doesn’t seem that abnormal to me. Consider that for any given issue there are many people waiting on the sidelines for it to be fixed; if they see progress stall, they’re motivated to join in and criticize the current direction, then leave just as quickly once it’s fixed, because it was never their goal to become a regular.
Anyway, certainly will be interesting to see if anything comes of it!
This is a rather serious thing and in my opinion could have long-term consequences. It’s important to fully understand the story behind it before making any judgements. Yesterday I found this: https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27
The scary part is that they were almost able to pull off a very long con, exploiting not the code but the very social norms we have as the open source community.
A contributor starts small with documentation changes, gains trust over time, makes regular code improvements, and moves all the way up to becoming maintainer of a relatively obscure project, nothing that would get the scrutiny of openssl or libc.
And then, when the time is right, pushes a carefully orchestrated backdoor, obfuscated in the autoconf machinery, which only activates when building packages for certain distributions (not in regular builds)…
Only to be caught before it actually causes large scale damage.
This is probably one of the most sophisticated “open source supply chain” attacks by state actors, and it will be talked about for a long while.
Which particular state?
People are speculating based on timezones, but I won’t repeat it here.
Btw, the offending GitHub repo is now down, so it would be difficult to do your own analysis if you haven’t already cloned it.
sukru,
I kind of doubt this… the quality of this backdoor seems to fall short of anything I’d expect from a state actor. This backdoor is obvious and explicit (at least to anyone who actually bothers looking at the test files). There’s no plausible deniability here, and a subpar exploit led to its detection. As has been stated in the news, the code was only investigated because of side effects present in the backdoor code. A pro should have done better and left no clear record of malice.
The double goto vulnerability in Apple’s SSL code several years back is a good example, not only because it’s clever, but because even those who see it have no idea whether it was done on purpose.
https://www.synopsys.com/blogs/software-security/understanding-apple-goto-fail-vulnerability-2.html
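For anyone who doesn’t remember its shape, here is a simplified, self-contained illustration (not Apple’s actual source; the function names are made up): because there are no braces, the stray second goto is unconditional, the signature check below it is never reached, and err still holds the success value.

```c
#include <stdio.h>

/* Stand-ins for the real hashing and verification steps. */
static int hash_update(void)      { return 0; }  /* 0 = success */
static int verify_signature(void) { return 1; }  /* would reject a bad signature */

static int check_server_key(void)
{
    int err = 0;

    if ((err = hash_update()) != 0)
        goto fail;
        goto fail;                 /* the duplicated line: no braces, so this
                                      goto is unconditional and always taken */
    if ((err = verify_signature()) != 0)   /* never reached */
        goto fail;

fail:
    return err;                    /* still 0: the key looks "verified" */
}

int main(void)
{
    /* Prints 0 even though verify_signature() never ran. */
    printf("check_server_key() = %d\n", check_server_key());
    return 0;
}
```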
That’s the kind of plausible deniability I expect from a state actor. Of course, maybe I’m completely wrong and state actors really did this. However, it just seems so wasteful to succeed in getting a long-term insider in place only to betray them with a low-quality backdoor that points immediately to that insider.
My tinfoil hat is firmly mounted on my head for what I’m about to suggest. Perhaps it wasn’t a state actor, but a corporate entity that seeks to profit somehow from making open source in general, and maybe Github itself in particular, appear to be overly vulnerable to attacks thanks to its open and collaborative nature? Obviously it’s not Microsoft, but who would stand to gain something from the fallout over this incident?
Please note I don’t actually believe what I just wrote; it’s simply a random fleeting thought I had. If it is a state actor, it was a seriously amateur attempt. That’s something I wouldn’t put past a regime like North Korea, especially given the utter incompetence on display, but it could also be from a normally astute government agency that simply fumbled the ball on this one.
Morgan,
And that is the question. Which large company could benefit from this? And, given the risks of being found out, which would greenlight such an endeavor?
Given that almost all past opponents of open source today actively engage in the community, including, as you mentioned, Microsoft, this is unlikely unless more concrete evidence shows up.
Morgan,
As long as we’re putting on a tin foil show… why not have the insider compromise the machines of other innocent developers (who may be easy targets if they run lots of test code) and have the dirty work committed under their names without them noticing? Then remove the evidence of their compromise, thereby leaving a trail of evidence that points to an innocent developer, while the true hacker remains unscathed. If they wanted to play the long game and were determined to make it look convincing, they could even manufacture a backstory and create interactions with others that won’t trigger any alarms in real time but may raise suspicions later, when the developers end up under a microscope.
It really makes me wonder how often people are successfully set up. I get the impression authorities treat digital evidence as infallible, when to a savvy hacker it is in fact extremely fallible if there’s any chance the data could have been manipulated.
Alfman,
It is true that it was easy to decipher the method once it was known.
However, at the same time, it had significantly more sophistication than a simple (*) code change. (Okay, not so simple, but you get the point.)
They had to not only build rapport, but also know about this attack path: xz is not a direct dependency of openssh, and only reaches sshd on distributions that patch it to link against libsystemd, with the malicious code only injected when building packages from the release tarballs. Btw, this is also a black mark against systemd.
Remember, we don’t actually know the identity of the attacker, but only their github alias.
(One other viable explanation is that their account was hacked, or that they were coerced through force or similar means. Nevertheless, it shows a significant weakness in the supply chain.)
sukru,
The account that submitted the backdoor is easily identified, but as you pointed out, that does not strictly prove the attacker is the owner of the account. If the account or computer was compromised, all bets are off. Ideally you’d look for a trail of evidence that goes beyond the account, because otherwise the assumption might end up incriminating the wrong person.
That’s fair, although I don’t think we should really be surprised by it. Unless we regularly perform stringent audits, we should all concede that vulnerabilities can and do make it into our code. With unsafe C code this isn’t even an uncommon occurrence; we merely assume it was an accident and move on.
Alfman,
If interested there is a good analysis and discussion at this place:
https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27
sukru,
Geck posted the same link.
They don’t want to speculate…
…but obviously people love to speculate, as demonstrated by all the speculative commenting 🙂
Alfman,
Ah… I missed that one even though it was obvious.