So there’s been a big security flaw in Apple’s macOS that the company fixed in 24 hours. I rarely cover security issues because where do you draw the line, right? Anyhow, the manner of disclosure of this specific flaw is drawing some ire.
Obviously, this isn’t great, and the manner of disclosure didn’t help much either. Usually it’s advisable to disclose these vulnerabilities privately to the vendor, so that it can patch any holes before malicious parties attempt to use them for their own gains. But that ship has sailed.
I’ve never quite understood this concept of “responsible disclosure”, where you give a multi-billion dollar company a few months to fix a severe security flaw before you go public. First, unless you’re on that company’s payroll, you have zero legal or moral responsibility to help that company protect its products or good name. Second, if the software I’m using has a severe security flaw, I’d very damn well like to know, so I can do whatever I can to temporarily fix the issue, stop using the software, or take other mitigating steps.
I readily admit I’m not hugely experienced with this particular aspect of the technology sector, so I’m open to arguments to the contrary.
The argument is that you are placing the users of that company’s products (presumably innocent bystanders in all this) at increased risk by indirectly aiding malicious parties in attacking them.
To me, the weight of this argument largely depends on how cynical you are about how far ahead/behind the security community and the companies making these products are compared with the malicious parties.
Although in this case, given the extreme ease of the attack, I think that it doesn’t matter.
Edited 2017-11-29 23:54 UTC
Then they can either pay for bug reports, or it’s better to sell zero-day exploits to someone in need
Just because they can : https://www.wired.com/story/macos-update-undoes-apple-root-bug-patch…
I’m in favour of responsible disclosure in most cases. Security flaws affect every software, and it is not in anyone’s interests to widen the window when black hats can exploit a hole before it is patched.
HOWEVER, in this case I think it was correct to advertise it publicly for a couple reasons:
1. This is a simple patch and can be (and has been) fixed quickly so the window is small anyway.
2. It is such an embarrassing PR event for Apple that it will hopefully cause some sweeping changes in their approach to security. The positive effect from that will greatly outweigh the momentary negative effect of the public disclosure. That hole was unbelievably embarrassing for Apple.
How long did Microsoft spend fixing their old formula editor with clever binary patching, when that was of lesser importance than getting root access with no password?
Having a fix within 24 hours does not mean the fix will be installed within 24 hours for the users.
Having the fix in place and pushed/rolled out before going wide about the issue is a better alternative.
I agree in general, but sometimes companies need a kick in the ass to improve their operations.
The way it was disclosed it got a ton of publicity and you can bet Tim Cook and other high level execs made some serious inquiries and possibly people got fired.
If they had responsibly disclosed it to the security team only, it could have been quietly fixed with only a few people knowing about it and nothing in Apple would change.
In this case, the fix is rolling out automatically to all users of Mac OS High Sierra. It’s being pushed out by Apple and users are notified after it is installed. That’s what happened on both of my Macs yesterday.
If we take Apple at their word, they complained that they didn’t receive advance notice of the bug. My questions are, “How easy is it to report a bug to a company as large as Apple?” and “How do you get a company as large as Apple to respond to bugs?”
I reported a bug in Apple Maps some time ago, and Apple is STILL sending clients trying to get to my office using Apple Maps to N Main St instead of Main St…
It was reported to them in July. So they did nothing for 5 months.
Fake news. This is a bug in OSX 10.3.1 which was released Oct 30 and not present in 10.3.0.
I think you meant 10.13; 10.3 was released over 15 years ago.
Can’t speak for Apple, but in my experience dealing with other large tech companies, it’s often next-to-impossible – unless you’re lucky enough to have a contact in the company, or your name is “Brian Krebs”.
For example, back in August I received two spam EMails with links to malware hosted on Google drive (both bogus purchase orders – https://drive.google.com/file/d/0B4X7BhdRq1FMejE2aFpEbGViVmc/view and https://drive.google.com/file/d/0BwKsklZY13s-a01QZUVpcUlvVDg/view). I’ve reported both links to Google, repeatedly, using every method I could think of:
– Via Spamcop (the spam EMails linking to the malware were sent through GMail)
– Via complaints sent directly to [email protected]
– Via the “Report Abuse” links that are present on both links
– Via the “I would like to report a Gmail user who has sent messages that violate the Gmail Program Policies and/or Terms of Use” page (a title that appears to have been written by a lawyer whose primary goal was, above all else, to avoid acknowledging that GMail is a source of spam)
– And when none of those worked, I took to Twitter as a last-resort
And yet both the malware links are still online, and have been since August 10th.
StephenBeDoper,
I don’t know about apple, but over the years I’ve dealt with numerous companies and there’s a pretty wide spectrum in terms of the levels of support offered to customers. Google is the absolute worst I’ve come across in the industry when it comes to customer service. They’ve done everything to distance themselves from users and they almost seem to pride themselves in having so few human support staff. If their self-service options don’t work, well good luck getting any help unless you have an inside connection.
While I often criticize microsoft over ethics and bad practices, I will say their customer support has been pretty good when we’ve contacted them about bugs, etc. Their public message boards are well staffed by generally knowledgeable engineers and they’re pretty good at finding answers.
Edited 2017-12-02 23:22 UTC
I haven’t been overly impressed by their support message boards, more often than not I’ve ended up finding solutions to Windows/Office tech support issues on other sites. And when it comes to acting on complaints of spam from their free EMail service, they’re at least as bad as Google is with GMail. Or as bad as Google was, rather, since most of the spammers using GMail (SEO & web/mobile app dev spammers from India) seemed to migrate en-masse to Hotmail/Outlook.com around the end of 2015… though I honestly can’t say if that’s because Google finally cracked-down, or simply because those spammers killed their Golden Goose by sending so much spam that it caused other providers to start applying more aggressive filtering to messages from GMail.
StephenBeDoper,
I suppose your mileage may vary. The last time I contacted MS was to report a bug where bingbot was crawling URLs that were prohibited by robots.txt. I had no trouble at all initiating contact with MS tech support. Admittedly, this is such a stupidly low bar for companies, and yet I find it astonishing that multi-billion-dollar companies like google can fail at it. It’s a sign of the times, I guess. I wonder, how is facebook with this?
Anyways, the first response from them was one of skepticism, and they wanted to make sure I was using robots.txt correctly. But after providing evidence I had no trouble getting it escalated to bingbot engineers and they confirmed and fixed the bug in their crawler.
I had a similar experience with a bug in their XMLHTTP object.
For comparison, I recently had an interaction with tmobile support about tmomail.net SMS gateway. Essentially many customers use it to send themselves SMS notifications:
http://www.emailtextmessages.com/
This is extremely handy when it works, but over the summer tmobile installed a new barracuda spam filtering system that was widely reported to be blocking legitimate messages. When I contacted support I got the run around, the case was escalated to a tmobile engineer who outright denied any blocking on their end, which was demonstrably false.
The community tech support forum had many “me too” posts, and they finally admitted that blocking was happening on tmobile’s end, but they officially weren’t going to fix it or allow customers to whitelist themselves. However evidently management didn’t like that tmobile’s dirty laundry had been aired like this, which culminated in tmobile deleting all of our posts and taking the technical support forum offline.
Say what you will about google, but they’d never do this, because they never read the messages asking them for help. Haha… or cry, depending on how you want to look at it.
>fixed in 24 hours
Except it wasn’t. There was a report on it from 13 Nov https://forums.developer.apple.com/thread/79235 and a tweet from 20 Nov https://twitter.com/jeremydmiller78/status/932687502053380097
How biased are you towards apple?
Wow, even the post there mentions he heard it elsewhere before.
So responsible disclosure… Umm, other people knew about this bug well in advance.
Luckily it’s non-impactful, as it’s only Macs, which no one uses for anything other than iOS development, right?
I bet there’s not much room under that rock is there?
While it was on Apple’s Developer Forums on Nov. 13 there is no guarantee that anyone from Apple saw that. This is a forum for 3rd party developers and while someone from Apple will occasionally respond, it doesn’t happen frequently.
The tweet on Nov. 20 probably just didn’t get a lot of attention. If I tweeted something like that no one would care because I have few followers and none of them are in technology.
I am not excusing the bug, but there is no evidence that Apple actually did know about this prior to last Tuesday.
I’m less concerned about the disclosure issues and more worried about the total lack of quality assurance that this patently absurd yet absolutely critical security flaw suggests. As commenter “Rann Xeroxx” noted over at ZDnet (http://www.zdnet.com/article/apple-fixes-macos-password-flaw/), if Microsoft released an update that unlocked the disabled “Administrator” account and set a blank password they would rightly be crucified.
Whoever was involved in this disaster deserves to be fired. The only slightly happier part of the story is that Apple’s negligence can largely be papered over due to automatic updates.
It seems OSX supports several methods of storing the password, and if it detects a password stored using an older, less secure method, it replaces it with the new method the next time you enter your password (as the password is one-way hashed, the only time you have the plaintext to create a new hash is when a user enters it)…
The problem here was that it misidentifies the disabled account as having an old hash type, and thus assigns whatever password you entered as the new password for the account. That’s why it lets you in on the second attempt (the password is now set to whatever you specified on the first attempt).
As I understand it, you could simply enter “root” as the username and log in with a totally blank password first time, no questions asked.
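As a rough illustration of the mechanism described above, here is a hypothetical Python sketch. Everything in it (the `$modern$` prefix, the function names, the “toy hash”) is made up for illustration and is not Apple’s actual code; the point is only how a disabled account could fall through into a hash-upgrade path:

```python
# Hypothetical sketch of the flawed logic: legacy hashes get upgraded
# on login, but a disabled account (no valid hash) is misclassified as
# "legacy", so whatever password you type becomes the account's password.

MODERN, LEGACY = "modern", "legacy"

def classify(stored_hash):
    """Buggy classifier: anything that isn't a modern-format hash is
    treated as a legacy hash -- including the '*' disabled marker."""
    if stored_hash.startswith("$modern$"):
        return MODERN
    return LEGACY  # bug: the disabled marker falls through to here

def login(account, typed_password):
    if classify(account["hash"]) == LEGACY:
        # "Upgrade" path: re-hash the password the user just typed.
        # For a disabled account this *sets* a brand-new password.
        account["hash"] = "$modern$" + typed_password  # toy hash, no real crypto
        account["enabled"] = True
        return False  # the first attempt is still rejected...
    return account["hash"] == "$modern$" + typed_password

root = {"hash": "*", "enabled": False}
first = login(root, "")   # fails, but silently enables root with "" as its password
second = login(root, "")  # succeeds: logged in as root with a blank password
```

This matches the observed behavior: the first attempt fails, the second succeeds, and the root account ends up enabled with whatever (possibly empty) password was typed.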
The idea of responsible disclosure is you tell the company of the bugs you find so nefarious persons can’t exploit the bug for illegitimate purposes. If people don’t know of the bug, they can’t exploit it. It gives the company time to fix their product before hackers can use the bugs against them.
I do agree that there should be some sort of maximum time limit from disclosure to fix. A month is more than long enough.
I heard that 90 days is the norm, although it can be amicably negotiated between the concerned parties.
Maybe a bug bounty program for MacOS (Apple has one, but only for iOS) and higher payouts (the payouts for iOS bugs are a joke when you could sell those bugs on the black market for better prices) would help Apple get these kinds of bugs disclosed privately to the company.
And it is **invite only**, and most of the bounties do not concern user privacy (except for iCloud).
Is having no password on the root account a bug or just negligent incompetence?
Both
None of those.
The problem isn’t that of having a root account with no password – it’s that a disabled account could, without any type of verification, be enabled.
Yes the bug was in enabling a previously disabled account and setting the password when inappropriate.
But setting a password for a disabled account may well have prevented this issue from being a problem. A random one obviously.
Defence in depth.
Not quite..
https://objective-see.com/blog/blog_0x24.html
It was an error in functionality designed to upgrade the hashing algorithm in use for storing passwords.
It’s a bit worse than that. Attempting to log into root, when it either doesn’t exist(!?) or is disabled (more likely) should not activate the root account *AND* allow you to log in without a password.
This is the third time I can think of that Apple has had a security flaw in OSX that makes elevation of privilege something trivial.
A very long time ago, a setuid script used the ‘USER’ environment variable to “confirm” you were root.
More recently, the DYLD_PRINT_TO_FILE environment variable could be exploited to edit any file on the system (including /etc/sudoers).
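A minimal sketch of why that first kind of check was broken (function names here are hypothetical; the underlying point is that environment variables are entirely caller-controlled, while the effective UID comes from the kernel):

```python
import os

def is_root_according_to_env():
    # Broken check: the USER environment variable can be set to
    # anything by the caller, so any user can simply claim to be root.
    return os.environ.get("USER") == "root"

def is_root_for_real():
    # The robust check asks the kernel for the effective UID,
    # which an unprivileged caller cannot spoof.
    return os.geteuid() == 0

# An unprivileged user "becomes" root as far as the broken check goes:
os.environ["USER"] = "root"
print(is_root_according_to_env())  # True, regardless of actual privileges
```

Any setuid program that makes security decisions based on inherited environment variables is making the same mistake.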
It’s by design. An account with an unset password (Or, more specifically, a hashed password equal to ‘*’) is considered disabled. That’s how a disabled account is signified.
Normally, that’s better than having an active account with a hard-to-guess password, but in this case, there was a rather severe bug
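That convention can be sketched like this (a toy example, not Apple’s actual storage format; real systems use salted, deliberately slow hashes rather than bare SHA-256). Since `*` can never equal any real hash, the account is effectively disabled – provided nothing “helpfully” rewrites the field, which is exactly what the High Sierra bug did:

```python
import hashlib

def hash_password(password: str) -> str:
    # Toy hash for illustration only; real systems use salted,
    # slow hashes (PBKDF2, bcrypt, etc.), not bare SHA-256.
    return hashlib.sha256(password.encode()).hexdigest()

def verify(stored: str, password: str) -> bool:
    # '*' (or '!', as in classic Unix /etc/shadow) marks the account
    # as disabled: reject unconditionally, before any other logic.
    if stored in ("*", "!"):
        return False
    return stored == hash_password(password)

print(verify("*", ""))                              # False: disabled, even with a blank password
print(verify(hash_password("hunter2"), "hunter2"))  # True: normal login
```

The key design point is that the disabled-marker check must come first and must never feed into any path that writes the password field.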
Older releases of BSD/Linux did something similar: the root account was restored and kept password-free unless the user explicitly set a password for it.
This still plagues some systems to this day (local machines, servers hosting git repositories, routers, NAS boxes with surveillance systems…)
Things like this have to be told to people. They need to know what the root account is and why it’s vital to keep it safe.
Apple used to do this. Not sure at which point they started considering this “irrelevant”.
The concept of responsible disclosure is not there to protect the companies, it’s there to protect the users.
In general (not this latest silly bug), there’s physical limitations on how quickly a vulnerability fix and testing can be implemented, no matter how many billions you throw at it – during that time I’d prefer that the vulnerability in question not be known to every criminal on the planet.
Additionally, given the potential damage, I suspect that security researchers also are on a better legal footing by disclosing in this manner.
Edited 2017-11-30 10:13 UTC
Why would they? It isn’t they who created the security problem in the first place.
In a reasonable legal system pointing out a flaw isn’t a crime.
That’s nice, but we are talking about rather big and scary companies here (involving aforementioned billions for both the software producer and their customers); responsible disclosure would be a shield in any civil procedure and hugely important in the media attention which such a procedure would garner: it’s the difference between the company being able to say “we could have prevented this, with a little time: pay up” and disclosure where the researcher can respond “you had months to fix the problem.”
Sure, the primary responsibility lies with the software companies in question, but that is of scant help when they ruin you anyways for partial culpability.
Edited 2017-11-30 12:11 UTC
Great, really, except the part where the login fix breaks file sharing….
https://support.apple.com/en-us/HT208317
TBH, what constitutes ‘responsible’ really depends on the context.
The longer it’s likely to take to figure out the root cause of the bug, the less responsible it is to publicly disclose it. In particular, this generally means that it’s more sensible to not publicly disclose a security bug in a big open-source project (like Xen or QEMU for example), because it will usually be something buried pretty deep that’s non-trivial to find and fix (yes, there are some obvious cases like the Heartbleed bug or the pathological behavior of wpa_supplicant with the KRACK WPA attack, but most security bugs in open source end up being memory management related edge-cases that are not exactly trivial to find and debug).
For a case like this though, I’d argue it would be irresponsible to have not publicly disclosed it because:
1. The nature of the bug makes it very obvious where it would be found in the code (namely, it would be somewhere in the PAM library, or the account management code), and reasonably obvious what to look for as well.
2. The exploit itself requires physical presence to effect, and only effectively grants privilege escalation to someone who is physically present. Given those constraints, it’s not as significant as many people make it out to be (if someone has physical access to your system, you’re screwed, period).
3. From a very cynical perspective, there’s not a huge incentive if it’s not publicly disclosed for a big company to actually fix it in a timely fashion because of reason 2.
Now, in addition to that, I’d argue that the fact that this got publicly disclosed may have actually helped Apple’s public image to a certain extent. Because of the public disclosure, we know when it was discovered by Apple, and while the bug itself makes their testing department look pretty bad, the fact that they fixed it as fast as they did may outweigh the tarnish from the fact that this wasn’t caught in testing.
Aside from all that, though, I’m kind of curious how they ultimately did fix it. Did they special-case the name ‘root’, did they special-case accounts with a UID of 0 in `/etc/passwd`, or did they figure out what was ultimately allowing this to happen in the first place and fix that bug?
While I’m generally in favour of short-term responsible disclosure, the fact is this bug had been known about for some time. News of it went viral recently, but it had been talked about on Apple’s forums for a week prior, and some people knew about it before that. The information on the bug was already publicly available before it got talked about on Twitter.
At that point the horses are already out of the barn, quietly trying to close the door without anyone noticing is pointless. Going public in a big way is the right way to go at that point because it’s the only way to get Apple to fix the issue quickly.
This was an oversight that should never have happened and Apple rightly has egg on their face. Shareholders got punished too with a $3+ decrease in share price. Again this is the kind of thing that we should never see from an organization like Apple. I would never wish to understate the stupidity of letting this flaw out.
However, in order for this exploit to be used successfully without cracking a password the following had to be true:
1) The target machine had to have the guest account enabled (it is disabled by default) and/or the owner had to permit a user or users to log in without a password.
–and–
2) The attacker had to have physical access to the machine or desktop sharing or ssh had to be turned on.
So if the guest user was turned off and you required users to log on with a password, your security was as good as your users’ passwords – which is the same as it is after the fix is in place. If an admin user’s password is cracked, an intruder can do anything on the system via sudo, including enabling root, setting a root password, wiping the machine, transferring its contents, installing malware, whatever.
If you configured the system correctly there was never a threat to your system that will not always exist.
shollomon,
You say it’s not practical, but this is how most root vulnerabilities work. Hackers usually don’t obtain root access directly off the network; they first compromise local user access and then launch a privilege escalation attack from there. Also, local code execution vulnerabilities are more common than many people realize. All platforms have them. Even something as innocuous as a jpeg decoder can be an avenue for attack.
Browsers are popular targets.
https://thehackernews.com/2015/11/android-hacking-chrome.html
Here’s a link about a vulnerability in the iOS wifi network daemon, but do look at the related links below it. There are vulnerabilities for JavaScript, FaceTime, PDF decoders, etc.
https://www.exploit-db.com/exploits/42996/
You are forgetting about the most basic attack: the normal user is only a user and not a local admin (which should be the most basic protection every IT department puts on its users), and now those normal users can make themselves local admin.
The fact that literally nobody ever mentions that avenue of attack anywhere proves to me that in 2017 way too many people are still running as local admin.
A similar example: people are allowed to run their sessions on a server as normal users, but can now make themselves admin, take over sessions from other users, and do everything inside those users’ /home folders. On a shared system like that we are basically talking about a situation where one rogue user can hijack the files of every other user, change other users’ passwords, log in as another user and “show all passwords in their keychain”.
Of course most Mac machines are not shared systems but personal machines, which makes the above far less likely to be a problem compared to a Remote Desktop/Citrix environment, but if I had to manage 250 Macs where all users were supposed to be local users instead of local admins, I would have to seriously consider 250 clean installs.
This bug is so fundamentally wrong that it makes almost all the security measures an IT department put into place beforehand completely useless. A normal user could enable a disabled account (which should be impossible). And that account (root) is present on every system, has the highest privileges, and to enable it you only had to do the equivalent of typing “root” in a UAC prompt.
Apparently the code for this was recently changed, and nobody bothered to perform the most basic checks on such a high-impact piece of code, not even after the recent “showing the password instead of the password hint in Disk Utility” bug.
avgalen,
You’re absolutely right. Privilege escalation is clearly a problem when the users are legitimate but are not supposed to have admin access.
Like with shared hosting: I assign separate accounts to different clients on the shared server, but those clients are not supposed to access each other’s data. A university would be another example where privilege escalation enables users to access more than they’re supposed to.
Edited 2017-11-30 22:18 UTC
Per an article in the British Guardian newspaper, Apple broke file sharing while fixing the root password bug. See https://www.theguardian.com/technology/2017/nov/30/apple-macos-high-… for more info.
So glad I prefer Linux.
…as an exploit, because there are many stupid users.
What I am saying is that if one had the smallest amount of knowledge about securing systems, and left guest turned off and did not allow passwordless logins, then there was no additional risk.
If guest was turned on and logins without passwords were enabled (two truly stupid things to do), then the system was wide open.
shollomon,
Just to point out the obvious here… this is incredibly insecure, especially for MacOS professionals, who are the most likely group to use Remote Management in the first place!
No, you don’t have any responsibility to the company if you discover a flaw in their software, but as a reasonable person I believe you do have a certain obligation to notify that company about the flaw, and from that moment on it is their responsibility to fix it as quickly as they can, and Apple did this within 24 hours, which is pretty darn good.
That said, do you think you would have any moral responsibility to the people who use that software besides yourself? Do you not have any moral responsibility to anyone else for anything at all? If you do, you will wait a reasonable amount of time before making the flaw public. The definition of “reasonable” is up for debate, but in this particular case, I would say that immediately publishing the flaw while giving Apple NO time to fix it qualifies as unreasonable by any definition. That’s my opinion.