Windows NT services are assigned an identity (SID) based on an SHA-1 hash. We also know that SHA-1 is deprecated due to research showing that it is vulnerable to collision attacks from well-funded opponents. What does this mean for Windows NT services?
Some Raymond Chen to kick off the week.
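For anyone curious about the mechanics: as I understand the documented scheme, the per-service SID is just the SHA-1 of the service name split into sub-authorities under the S-1-5-80 prefix. A rough Python sketch follows; the casing and byte-order details are my assumptions, and sc.exe showsid on a real machine is the authoritative check.

import hashlib
import struct

def service_sid(service_name: str) -> str:
    # Hash the upper-cased service name encoded as UTF-16LE (my assumption on both points).
    digest = hashlib.sha1(service_name.upper().encode("utf-16-le")).digest()
    # Split the 20-byte digest into five 32-bit sub-authorities (assumed little-endian).
    subauths = struct.unpack("<5I", digest)
    # S-1-5-80 is the prefix Windows uses for per-service SIDs.
    return "S-1-5-80-" + "-".join(str(x) for x in subauths)

print(service_sid("MSSQLSERVER"))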
What interests me more than the odds of a successful attack is how difficult it would be to change the hash algorithm to a better one, less vulnerable to attack.
I mean, it's Raymond freaking Chen, so I have a high degree of respect for his acumen and judgment. And this is Windows, not the simple small things I typically work on. Still, this is one of my pet peeves with devs and security: a weak hash or crypto algorithm is pointed out in a design or product, and the dev immediately goes into CYA mode and tries to prove their system is safe. Maybe it is, but I'm not accepting the proof of the guy who wrote it and has an emotional and egotistical attachment to it. How long would it take to just do it the right way? Oh, OK, five minutes? Why not just change it now and end the argument?
Bill Shooter of Bul,
We should concede that the odds of collisions are very slim, but it kind of irks me too that it was built on a probabilistic algorithm in the first place. There are some places where hashing makes sense, but IMHO this is not one of them.
Primary Key = hash ( value )
It would make a whole lot more sense just to use the underlying value as the primary key in the first place. I guess they were thinking they'd save a few bytes by using a hash, but computers are more than capable of identifying services by their actual identifier rather than by a hash of it. If need be, they could have used a surrogate key, as is common in databases.
All in all, I think over-engineering is the cause of this (albeit mostly academic) problem.
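To illustrate what I mean, here is a toy sketch (nothing to do with how Windows actually stores services) of identifying a service by its name, or by an assigned surrogate integer, instead of by a hash of the name:

import itertools

services_by_name = {}        # natural key: the service name itself
services_by_surrogate = {}   # surrogate key: a small assigned integer
_ids = itertools.count(1)

def register(name: str, config: dict) -> int:
    # File the service under both keys and hand back the surrogate id.
    surrogate = next(_ids)
    services_by_name[name] = config
    services_by_surrogate[surrogate] = config
    return surrogate

key = register("MyService", {"start": "auto"})
assert services_by_name["MyService"] is services_by_surrogate[key]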
Well, it really depends on how the SID hashes get used. They may be relied on not only for storing services but also for the service-related RPC used to control local and remote services. I'm not sure about this, but conceivably the userspace tools could be using the service hash ID under the hood.
https://www.windows-commandline.com/start-stop-service-command-line/
Also, to address production needs, you have to consider not merely updating the code to use a new hash algorithm and structures, but also updating the actual data that uses the old hashes and structures (be it in the registry or wherever). If you cannot accomplish this operation atomically (for whatever reason, say there are hundreds of PCs on the network), then you could end up with different machines running different versions of the protocol and needing to support multiple versions simultaneously, which could open up its own can of worms.
So although I really don’t know the details, I can see how it could be complicated.
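Just to make the mixed-fleet point concrete, here is a hypothetical sketch (field names and algorithm choices made up) of tagging each stored identifier with the algorithm that produced it, so SHA-1 and SHA-256 records can coexist during a staged rollout:

import hashlib

HASHERS = {"sha1": hashlib.sha1, "sha256": hashlib.sha256}

def make_record(name: str, algo: str = "sha256") -> dict:
    return {"algo": algo, "id": HASHERS[algo](name.encode("utf-16-le")).hexdigest()}

def matches(record: dict, name: str) -> bool:
    # Recompute with whatever algorithm the record claims, so hosts still
    # emitting SHA-1 records keep working alongside the newer SHA-256 ones.
    return HASHERS[record["algo"]](name.encode("utf-16-le")).hexdigest() == record["id"]

old = make_record("MyService", "sha1")
new = make_record("MyService", "sha256")
assert matches(old, "MyService") and matches(new, "MyService")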
It's understood that it's non-trivial for Windows to update this hash. I would have appreciated a comment on the feasibility, simply because it would be more interesting to figure out how to do it in such a complex environment, with so much history behind it.
But the rest of my comment was complaining about devs' lazy attitude towards cryptography, the "I can't break the thing I created after trying for five seconds, therefore it's perfect" mentality. I mean, if it's a non-security-focused area, then fine, you just get a bug somewhere else.
It’s not so much laziness, as risk/reward analysis. The risks of keeping the SHA-1 hash are very very small, astronomically small. The hash can’t be attacked offline, as the LUID salt is generated at logon.
There may be multiple places in the code where a fixed SHA-1 sized buffer is used to hold the hash, all of which would have to be found and changed. That might be non-trivial, as I don’t know whether they’re used outside the NT kernel and security system or not.
There are just bigger security holes that are worth spending time on, lower hanging fruit.
And in the general (non-Microsoft) case, resources are better spent on the low hanging fruit, because that’s what gets the attention of the attackers as well. Not everyone has Microsoft’s resources.
But then again, not everyone would have tied themselves to a single hash algorithm either, and would have instead used a pluggable hash with variable blob sizes in the code. After all, a security token blob is just that, a blob, and it either matches or not, so it doesn’t matter how big it is. But that’s a different axe to grind.
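For what it's worth, the pluggable version isn't much code. A sketch (purely illustrative, not Windows code) that stores the algorithm name next to the opaque blob and never assumes a fixed length:

import hashlib
import hmac

def make_blob(data: bytes, algo: str = "sha256") -> tuple[str, bytes]:
    return algo, hashlib.new(algo, data).digest()

def blob_matches(blob: tuple[str, bytes], data: bytes) -> bool:
    algo, expected = blob
    # The consumer just recomputes and compares; the blob's size is whatever
    # the named algorithm produces, so swapping algorithms needs no buffer changes.
    return hmac.compare_digest(hashlib.new(algo, data).digest(), expected)

blob = make_blob(b"NT SERVICE\\MyService", "sha512")
assert blob_matches(blob, b"NT SERVICE\\MyService")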