Fresh from winning the PWN2OWN contest yesterday, Charlie Miller has been interviewed by ZDNet. He talks about how Mac OS X is a very simple operating system to exploit due to its lack of anti-exploit features. He also explains that the underlying operating system is much more important than the browser in creating a successful exploit, why Chrome is so hard to hack, and many other things.
One of the most common questions is why Miller and his colleagues do not report their bugs to Apple or make them public. He is very honest about this, and explains it’s a simple matter of economics. There’s a market for exploits. “Vulnerabilities have a market value so it makes no sense to work hard to find a bug, write an exploit and then give it away,” Miller explains, “Apple pays people to do the same job so we know there’s value to this work.” He says he could’ve gotten a lot more money for the exploit than the $5,000 prize he won yesterday, but he chose to enter the contest because he likes showcasing what he can do, and of course the headlines for his company.
He went on to explain that the Internet Explorer/Windows exploit found by cracker Nils is worth a hell of a lot more than his own Safari/Mac exploit “by about a factor of ten”. “You can get paid a lot more than $5,000 for one of those [IE/Win] bugs,” says Miller, “I’ve talked to a lot of smart, knowledgeable people and no one knows exactly how he [Nils] did it. He could easily get $50,000 for that vulnerability. I’d say $50,000 is a low-end price point.” He added he was very impressed by what Nils did, especially the Firefox/Windows exploit, which he gave a 10 out of 10. “It’s really hard to exploit Firefox on Windows.”
Miller repeated his claim that Mac OS X is easy to exploit. He makes a clear distinction between the browser and the underlying operating system, stating that for example while Firefox on Windows is very hard to crack, Firefox on Mac OS X is easy, because Mac OS X lacks all the anti-exploit features Windows has built-in. “The things that Windows do to make it harder [for an exploit to work], Macs don’t do,” Miller says, “Hacking into Macs is so much easier. You don’t have to jump through hoops and deal with all the anti-exploit mitigations you’d find in Windows.”
As an example, he takes his winning exploit. “With my Safari exploit, I put the code into a process and I know exactly where it’s going to be. There’s no randomization. I know when I jump there, the code is there and I can execute it there. On Windows, the code might show up but I don’t know where it is. Even if I get to the code, it’s not executable. Those are two hurdles that Macs don’t have.” He added that all browsers have holes, but that writing exploits for those holes is harder on Windows than it is on Mac OS X.
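Miller’s point about randomization can be sketched with a toy simulation (purely illustrative, not real exploit code; the addresses and memory size are invented): without ASLR an attacker can hardcode the payload’s address, while with ASLR a hardcoded jump almost always misses.

```python
import random

PAYLOAD_ADDR = 0x1000  # address where the payload landed on the attacker's test machine

def load_payload(memory_size, randomize):
    """Place the payload at a fixed or at a random page-aligned address."""
    if randomize:
        return random.randrange(0, memory_size, 0x1000)  # ASLR: new address each run
    return PAYLOAD_ADDR                                  # no ASLR: same address every time

def exploit(jump_target, payload_addr):
    """Hijacked control flow only helps if the jump actually lands on the payload."""
    return jump_target == payload_addr

# Without randomization the hardcoded jump works every single time...
assert all(exploit(PAYLOAD_ADDR, load_payload(2**24, randomize=False))
           for _ in range(100))

# ...with randomization (4096 possible page slots here) it almost never does.
hits = sum(exploit(PAYLOAD_ADDR, load_payload(2**24, randomize=True))
           for _ in range(100))
print(hits)  # usually 0
```

The non-executable-memory hurdle Miller mentions is the second, independent layer: even a correct guess fails if the page the attacker lands on cannot be executed.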
When it comes to Chrome, Miller is positive about the sandboxing technology in the browser, explaining that you need two bugs in order to create a Chrome exploit; a bug in the browser, and a bug that gets you past the sandboxing. “There are bugs in Chrome but they’re very hard to exploit. I have a Chrome vulnerability right now but I don’t know how to exploit it,” he states, “It’s really hard. They’ve got that sandbox model that’s hard to get out of. With Chrome, it’s a combination of things – you can’t execute on the heap, the OS protections in Windows, and the Sandbox.”
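The two-bug requirement Miller describes can be sketched as a toy broker model (an illustration of the idea only, not Chrome’s actual sandbox architecture; the class names and paths are invented): even with code execution inside the renderer (bug one), sensitive operations are mediated by a privileged broker, so the attacker still needs a sandbox escape (bug two).

```python
class Broker:
    """Toy privileged broker: renderer file requests are checked against an allowlist."""
    ALLOWED = {"/tmp/render-cache"}  # invented path, for illustration only

    def open_file(self, path):
        if path not in self.ALLOWED:
            raise PermissionError(path)  # request denied: attacker has not escaped
        return "contents of " + path

def renderer_payload(broker):
    """Bug #1 gave the attacker code execution *inside* the renderer,
    but every sensitive operation still has to go through the broker."""
    try:
        return broker.open_file("/etc/passwd")  # grabbing user data still fails
    except PermissionError:
        return None  # attacker now needs bug #2: a flaw in the broker itself

assert Broker().open_file("/tmp/render-cache") == "contents of /tmp/render-cache"
assert renderer_payload(Broker()) is None
print("payload ran, but stayed inside the sandbox")
```

This is why Miller says a Chrome vulnerability alone isn’t enough to build an exploit: the first bug only buys code execution inside a confined process.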
I just don’t know what to think of Miller and other people like him. I see no moral obligation for them to help other companies make money off their work finding exploits, and he’s quite right in asking why he should do, for free, the work that Apple or Microsoft employees get paid to do. However, at the same time, there’s also a responsibility towards the general public. When it comes to open source projects, the question is a bit simpler for me.
As Miller says: “It’s all economics.”
Miller seems to take care to distinguish between the security of an operating system and its built-in preventative measures.
They are two very different things.
The fact that OS X does not have the same preventative measures Windows has, like randomization, no-execute bits, etc., does not mean OS X is an insecure operating system. It just means that once you have a vehicle into the operating system, it’s easier to take advantage.
It’s the difference between theory and practice. In theory, Windows should be more secure because it has all those fancy features and Mac OS X does not. However, out in the real world, Mac OS X obviously has a much better track record.
No, I don’t think that’s it at all. I think the security of the operating system is better in OSX, so because of that they have a false sense of security, which is why there aren’t as many preventative measures. Windows just flat out sucks and everyone, including the programmers, sort of knows that, so they beef up the primary line of defense with all of these preventative measures.
In the extreme it would be like complaining that Chicago is less hurricane-ready than New Orleans, because Chicago doesn’t have any hurricane-related building codes, or any levees.
In reality the situation isn’t quite that disparate. OS X really should add those prevention measures.
Who cares what you _think_?
Only because no one cares.
axe to grind with Apple? Did they refuse to give him a free laptop or something?
BSD had these security features first.
BSD is the most secure OS.
Apple, in the real world, still is the most secure desktop.
On what grounds is BSD the most secure OS, and, more to the point, Apple???
Unbelievable. Here is an interview with a guy who is most definitely a better programmer than pretty much anyone on OSNews, who has been doing this longer than any of us. In this interview he says “OSX is insecure, there are almost no hurdles to jump through to take control of a system,” and you say that Apple has the most secure desktop. Are you really that delusional?
At least the Windows folks can admit that it’s got security issues. Apple fanboys are a rare breed, and for them to make claims like the one above is just flat out ignorance.
With all the money they pour into buying that overpriced hardware stuff, it HAS to be the most secure 😉 Mind you, they are all excited about iPhone 3.0 OS in order to get those two advanced and innovative features:
* cut&paste;
* sending MMS out.
lol
Oops! Sorry… I know it will sound trollish and I beg your (preventive) pardon 😉
Actually, he didn’t say “take control.” He said exploit. And really that’s the issue: if the exploit is just that it can read a given site’s cookie, or that it can write a non-executable file or something, that’s not nearly as serious.
What if it steals the card data you registered to buy books from Amazon? Would you feel much safer knowing that your World of Warcraft is not attacked by the browser exploit?
Assuming that’s what actually happens, of course not. But a “vulnerability” doesn’t mean he got access to all cookies, and it doesn’t mean he had root on the system. Although, certainly if he found a buffer overflow, which is what it sounds like, that’s not good.
However, there are plenty of vulnerabilities that don’t expose your entire system.
Also interesting that you modded me down as “inaccurate.” What exactly did I say that was inaccurate?
Thank you for my chance to say why:
Pwn2Own has clear rules: with increasing exposure of ways to attack a machine, the attacker has to get at user data. Your post said it’s not a big vulnerability because it doesn’t do much worse than, say, a rootkit. Pwn2Own tries to expose the leaks in the security model of user-land applications, mostly the browser and its components, since the browser is the default application you’ll find in almost every OS, even a phone OS, and in that way it does its job really well.
The inaccuracy is that security nowadays is about protecting data, not applications. It’s hard from userland to change executables on Linux, or under an OS X limited user (or guest) account, or even an XP limited account.
Most permission systems try to protect credentials and data, and most other security measures try to do the same thing (Active Directory permissions, or better, SELinux or AppArmor, which confine an application in case of a buffer overflow or any other break that may happen): to protect data, not applications.
So it seems to me (no offense intended; you and Eugenia created the OSNews site and are its webmasters, and I respect that deeply!) that you think of security as an OS flaw the way it was in the Windows 98 to XP era, when viruses (all malware) and scamware were the biggest problem.
So based on these thoughts, your post seems misleading to me… tell me if I’m wrong.
Edit: I’m not a native English speaker so I’ve made fixes here and there
Edited 2009-03-20 23:51 UTC
The BSDs, and OpenBSD in particular, have had very few remotely exploitable bugs in a default install. IIRC, OpenBSD has only had two in over ten years now.
OpenBSD has a lot of measures no other OS uses, or has started to use only recently (e.g. ASLR in Windows and Linux), which make the very few bugs very difficult (as in almost impossible) to exploit.
As the interviewee makes clear, it is this sort of thing, ASLR, sandboxes, stack canaries, etc. that make an attacker’s life difficult.
*BSD and MacOS X have been disregarding those features because they negatively affect performance, and now they are reaping the fruits of shame.
Bugs might be many or a handful, but if you do nothing to keep attackers from playing around with your unpatched bugs, the game is over for you.
OpenBSD had those security features first, but OS X has relatively few of them. Unix does not have the same level of security across all its variants; in fact, other than the fact that you don’t run as root most of the time, Unix is not all that secure an operating system unless the flavor adds additional security features and runs secure programs. (Note how many remote root security bugs there were in sendmail(1), for example, running on the typical non-OpenBSD *BSD.) Oh well, at least Safari isn’t in the kernel and used throughout the OS by programs via DLLs like Internet Explorer; if it were, OS X would *really* be in trouble.
I get the point of your post but there is no use resorting to lying by claiming that Internet Explorer is in the kernel.
As for UNIX: what is UNIX? It’s a specification; there is nothing stopping any vendor from adding additional security features such as ASLR, encrypted swap, sandboxes, etc. To somehow throw all of ‘UNIX’ under one banner ignores the fact that there is no such thing as a UNIX operating system; there are just implementations of it.
Edited 2009-03-21 00:26 UTC
I’m not lying, it was in Windows 98. Apparently this has been changed. I haven’t had extensive experience with Windows as a user since about 2002, and before then I was using Windows 95. 🙂
That was exactly what I was pointing out, that just because OS X is Unix(r), doesn’t mean that it is secure; because Unix isn’t a terribly secure (or terribly insecure) operating system when you’re talking about just the basic specifications like POSIX and SUS, etc. It definitely can use some additional hardening and features such as ASLR to make it truly secure. (Though there’s no such thing as a “secure”, i.e. non-exploitable, mainstream OS. That includes OpenBSD.)
[q]I’m not lying, it was in Windows 98. Apparently this has been changed. I haven’t had extensive experience with Windows as a user since about 2002, and before then I was using Windows 95. 🙂[/q]
It was never used in the kernel; heck, I remember back when I was using Windows 95 and installing IE4, and I remember Windows 98 and installing the new versions. In no way is it integrated into the kernel.
Yes, it is heavily integrated with the GUI, but that in no way equals integration with the kernel.
I don’t know where your “no way” comes from. Just because IE is installed separately, it can’t be in the kernel? Do you understand what “in the kernel” means?
I don’t want to draw a crude conclusion about whether it is or isn’t in the kernel. But from your statement, it’s a shame to keep repeating “no way” without any persuasive reasoning. Ah, you said it is integrated with the GUI; do you happen to know that one of Windows’ infamous traits is integrating its GUI subsystem into the kernel?
Is it part of the actual kernel? Does it run in kernel space? No, it doesn’t.
Again, you claimed that Internet Explorer is integrated in with the kernel – provide proof or retract your statement.
I don’t know what you mean by “again”! Please read my comments again! I didn’t claim anything about whether it is or isn’t in the kernel. It’s just a shame you said “no way” without any proof!
By saying “no way”, you would like to eliminate any possibility that MS put any part of IE in the kernel. But you have *no way* to prove it. So to keep the discussion fair, take back your “no way”. While I am not sure whether IE was or is in the kernel, MS has a habit of putting a lot of features that can, and should, be in user space (even some file parsing) into the kernel. And lacking the transparency of UNIX, it’s always hard to say.
IE was never in the kernel. I don’t know where people get this kind of stuff from.
It was never in the NT kernel, i.e. not in any recent version of Windows. It was in Windows 98. Sorry.
Yeah, where exactly? KERNEL.EXE? No. vmm386.exe? Very unlikely. Yep, still a DLL somewhere. I guess since the concept of kernel was a little bit more nebulous in Win9x, you could stretch really far and say that IE was “in the kernel”, but that’s not saying much.
I found only one reference while googling that said IE was in the kernel and it was some rant page that gave no proof that IE was ever in the kernel. No Wikipedia pages gave any indication that IE was ever in the kernel (they did mention, as do any other pages on the subject, that IE was tied in with the OS, but the OS is broader than the kernel). If you can find me a legitimate link that shows IE was in the kernel in Win9x, I would greatly appreciate it.
I can’t seem to find any references that say that either. I’ll apologize therefore, and say that Windows 98 had IE too tied with the operating system instead; which is correct, and that many programs and the OS use components of IE all over the place, making a bug in IE often a bug in the OS, also correct.
Some people seem to think that because Microsoft said that it was integral to the OS and couldn’t be removed that must mean that it was in the kernel. I’d say that the Windows login is integral to the OS, but there’s no way that that’s in the kernel. I don’t see why IE would have to be in the kernel for Microsoft to claim that it was integral and unremovable.
Now, granted I don’t know what is and isn’t in the kernel in any particular version of Windows, but I’d be very surprised if an internet browser was ever in any kernel of any operating system.
It’s hard to prove the architecture of proprietary software, especially Microsoft’s. So be cautious about saying *no way* to any approach Microsoft might take. Even when something is installed separately, it may be in the kernel. VM software is installed separately, but part of it usually has to run in the kernel.
I think you shouldn’t be surprised if Microsoft put something that obviously should be a user-space feature into the kernel, because they can and they have. They claim to each other internally that it is for performance’s sake, and sometimes that misconception even leaks into marketing.
Again, I don’t know whether IE is or isn’t in the kernel, but I object to how quickly you say “no way”.
Not true. There’s a lot of difference if operating systems provide some kind of protective measure or not.
In fact, Miller didn’t say Safari is weaker than IE. If you took the time to read the article, he said that EVERY BROWSER has holes and bugs.
However, while Windows (to name one) has developed protective measures to mitigate bugs and security flaws, OS X hasn’t. And of course that matters. He also joked that if you want fast cash, you can just concentrate on Safari on OS X.
If I were an OS X user, I would take his words seriously, demanding that Apple introduce all those protections other OSes enjoy. He has a good example: Firefox on Windows is very hard to break, while the same software on OS X was very easy to break.
For the record, he also stated that he considers Chrome’s architecture a very good starting point. The fact that Safari (which he considers the weakest) and Chrome (which he considers the strongest) share the same rendering engine is good proof of what many people say: being open source doesn’t automagically mean secure.
Kudos to the Google guys, whose first browser is already a very strong implementation (and you guys know that I’m an IE user…)
I didn’t realize Safari was open source. If Chrome is very good, shares the same back-end rendering engine as Safari, and FOSS does not “automagically” mean more secure, is your point that Safari is open source, or that the closed source wrapped around what is apparently a solid, secure rendering engine is what’s broken?
Granted, no source is going to magically be of high quality. Peer review helps quite a bit though and I don’t think that’s something Safari gets and definitely not something osX gets.
Someone mentioned OpenBSD in a previous comment though…
Safari is proprietary, not open source. The rendering engine is open source (Webkit) but the rest of the browser is as closed as IE.
That is very cool hair my friend.
That is a picture of Layne Staley of Alice in Chains during Facelift, before he cut his hair off. I wanted to pay tribute to him (God rest his soul) and decided the best picture was one from before heroin destroyed his health. I probably should use a later picture (from when he was even more famous) so people will recognize him and not mistake that for me.
A little factoid for Metallica fans. Death Magnetic is a tribute to Layne Staley and they had a photo of him in the studio during the entire recording session.
http://en.wikipedia.org/wiki/Metallica%27s_ninth_studio_album
Cool, yeah I saw that and thought “No way does someone have a rag like that these days.” All the same, my comment holds. Anyone with the balls to sport that hair deserves some recognition.
My point was that while Safari and Chrome share the same (open-source) rendering engine, the results are much different. Which was a variant of what you meant when you wrote that “Granted, no source is going to magically be of high quality”, whether open or closed source, I’d say. Consider that a notice to the people who would solve all problems by “open-sourcing”.
Of course, Safari is not open-source.
I may be reading it backwards: you take the time to say that an open source license does not automatically make software better quality, then point to a browser with a FOSS core engine and several proprietary layers on top. Chrome using the same core indicates that the FOSS component is solid. The exploitable flaw being in the proprietary Safari layers wrapped around that core would seem to support the theory of lower quality in closed licenses.
I don’t think an open license is going to make a bad software idea magically better but when comparing general open source against general proprietary, quality looks a little suspect where peer review is lacking.
Umm, I don’t know what version of Mac OS X you are using but according to Apple’s own documentation they implement sandbox, ASLR technology, encrypted swap file and I’m sure many others people can mention. I am sure your post was due to a lack of information rather than a malicious attempt to create a flame war based on spreading false information.
You talk about features, but applications written to run on top of that operating system have to take advantage of those features. The operating system can provide all the most wonderful features in the world, but if the application vendors don’t use them, then it is an exercise in futility to point the finger at the operating system vendor when it is the application vendor’s fault.
Back to the Safari issue; Apple make the operating system and the browser; there is no excuse as to why Apple has not used ASLR and Sandbox technology with their own products. Unless Apple takes the lead in the implementation and use of technologies in their own software then its going to be difficult for them to convince vendors to do the same.
Oh, and the reason why Apple doesn’t force the said technologies onto all software is because it will break compatibility – something people on OS News for ever whine about when it comes to their operating system upgrades and expecting their ancient and decrepit software to continue running without fault.
[quote]Umm, I don’t know what version of Mac OS X you are using but according to Apple’s own documentation they implement sandbox, ASLR technology…[/quote]
hmm.., no
In Leopard, Apple introduced incomplete ASLR: it refuses to randomize the location of the code and the stack, and 32-bit executables don’t have heap-execute protection.
The only thing they did in Leopard is limited library randomization.
And most OS X apps are still 32-bit.
I hope a few people at Apple read the interview and continue the work on Mac OS X by implementing some of these missing features that run behind the scenes. There’s no reason NOT to implement them: getting them incorporated into the system early means even more new features and security to boast about.
This interview is almost like free advice for them… I hope they take it. I’m almost purely a Mac user (save for a linux box or two) and I’d love to know that Apple was moving on these things!!
Well said.
Absolutely. I’d love to know that Apple was taking security and quality more seriously rather than just the pretty packaging and “think different” marketing spin. I’m all for anything that benefits the end user, and improved quality definitely does that. Heck, I have two osX boxes at home, with one in daily use; I’d like those to be a little more robust, and it’s not like I have more freedom than what Apple’s updates deliver.
It’s wrong to suggest that Apple is somehow ignoring the evolving security climate. Known exploits are regularly patched and the underlying OS keeps getting new security enhancements like File Quarantine, Sandbox, Package and Code Signing, Application Firewall, Non-Executable (NX) Data, and Address Space Randomization.
For more info, see Jordan Hubbard’s talk on the evolution of OS X at http://www.usenix.org/events/lisa08/tech/hubbard_talk.pdf
OS X doesn’t have to be the most secure OS. It just has to be secure enough to keep criminal attention focused on Windows. Just remember that security and usability are often mutually exclusive, so all vendors are forced to balance the need to not inconvenience users with the need to be secure. If that were not the case we’d all be using PGP-enabled mail clients, every web stream would be SSL encrypted, we’d all be using multi-factor authentication, all our hard drives would have full-disk encryption, etc. etc.
I’m not at all saying Apple is ignoring security. I feel happy with Mac OS X very much.
What I am saying is that companies for the most part tend to see these things as bad press (and for a legit reason as that is what everyone spins it as).
All I’m hoping is for Apple to take this and say “Hey, let’s keep moving on security and fix these things up”. The last thing I’d want to happen is have Apple come out and downplay something that is now in the open.
Companies tend to clam up when these things happen and I’d love to see them acknowledge these and get them plugged… to keep moving forward!
Selling bugs/exploits for money is pretty low…
He is almost in the same league as virus writers and people who use those exploits for their own benefit.
Edited 2009-03-20 14:27 UTC
I agree, even though I also understand his side of it. The man is obviously extremely intelligent, and should be paid for what he does best. Basically, however, what he’s saying is he’d rather make money than prevent others from getting royally screwed, a prime example of the selfish greed-motivated mentality that seems to be so prevalent today, and one that will ultimately screw us all for good. When you come right down to it, that’s pathetic.
I do not see anything wrong with his mentality. It is not his job to help out companies unless the company is paying him directly to do so. At least that is how I see it. There is nothing wrong with using your talents and making money off of them. If Apple was really that concerned about security, they would pay him big bucks to bang away at their software all day. But they don’t, as far as I know.
Microsoft on the other hand holds these types of security competitions every year and they are reported on the web as well and they actually pay those hackers.
Sure, Apple is based on a *NIX OS and therefore has an advantage in security, but that’s not how I see it nowadays. It seems that it is an OS with a somewhat false sense of security now? I am not saying Microsoft is better, don’t get me wrong. But I am saying OS X is not necessarily better, especially when a hacker comes out and downright says that the OS is easy to screw up!
I always believe being paranoid is the best way to go when it comes to security. From what I am seeing, Windows has definitely improved its security, especially since they have had no choice but to go up lol, but it seems OS X has just remained stagnant in that department, while people think that because it’s a *NIX-based OS it is inherently secure. That could be true to a certain extent, but I would rather be paranoid than a victim.
Depends on how one uses his talents; thieves also are using their talents to make money and clearly they are doing something wrong.
My point was that a mentality like “I know a way into your machine and I will sell it to the highest bidder” isn’t something to applaud.
While this guy’s technical skills are respectable, his morals and mentality certainly aren’t.
Quite a lot of people have the skills to do bad things, yet they choose to use their skills in more constructive ways.
He didn’t sell it to the highest bidder.
“What’s the ballpark value of that Safari bug?
It was probably more than that $5,000 prize I won. It’s much less than the IE 8 vulnerability (exploited separately by Nils) by about a factor of ten. I could get more than $5,000 for it but I like the idea of coming here and showcasing what I can do and get some headlines for the company I work for (Independent Security Evaluators).”
Right, he held on the bug until the price was worth it.
I’ll rephrase my comment:
“I know a way into your machine and I will sell it for the right price”.
Still doesn’t make it the right approach.
Edited 2009-03-20 18:05 UTC
A year ago, the right price would have been $10K from black-hat hackers. He waited until a white hat would pay him.
No, he held onto the bug to display his talent and to promote the company he works for. The price was free advertisement with a MacBook and $5000 as icing on the cake.
To answer the assertion that there is something inherently wrong with wanting to charge for the exploit, Charlie says this:
Apple already pays people to do this. Charlie is right in saying Apple needs to pony up some cash for the exploit. He doesn’t say the exploit is for sale to the highest bidder, nor does he imply it whatsoever.
http://blogs.zdnet.com/security/?p=2941
On the other hand, shouldn’t it be Apple’s responsibility to make sure their customers don’t get royally screwed? If Apple really cared, they’d pay the money to hire people like Mr. Miller. If Apple has such a lax approach to security, why should other people do Apple’s job for free?
I didn’t bring Apple into this at all, I was speaking generally about my feelings concerning the way Mr. Miller handles this. My reaction to him would be the same regardless of whose product in which he found a bug, it’s still a very low and extortionist tactic, no matter how you look at it.
I agree. I don’t care who the vendor is; the fact that this clown holds onto the exploit for a year, just so he can use it this year, is BS. He’s no better than the virus writers.
Well in as much as this whole topic was about Apple I felt they’d make a good example. But feel free to replace Apple with any other software company you wish, it won’t change my argument.
That is almost like saying that the financial scandals of the last several decades weren’t due to individuals’ lack of morals but to a lack of regulation; that somehow individuals aren’t accountable for their own actions because they need regulation to guide them, lacking the capacity to make moral judgements on their own. It’s a way of mitigating individual responsibility and transferring it ultimately to the victim.
It’s like saying, “you were shot by that gentleman; it’s your fault for not wearing a bullet-proof vest”.
Edited 2009-03-21 00:34 UTC
Yeah, God forbid anybody should ever be paid for what they’re doing. Perhaps you would like to feed his family while he works for the good of humanity; we’ll just set up a Paypal account in your name.
Amazing how out of context you can take things when you want to. If you’re going to quote me, at least have the decency to quote all relevant parts of my comment. Otherwise, you are doing nothing but twisting my words. You aren’t a politician by chance are you?
I said above the place you quoted, he certainly deserves to be paid for what he does. Or did you not bother to read that part? I’m saying that what he is doing with these exploits right now is pretty low, holding on to them so he can sell them for his own price or use them to show off. He deserves to be paid, but not to be allowed to extort. Get it?
Assuming he does this for a living, unless he’s hired by Apple (or somebody else), how else is he supposed to get paid? Even if he were willing to do it for free instead of extorting, it’s doubtful that he’d be able to put the kind of time into it that he currently does, and thus the chances aren’t as likely that he’d be able to find the exploits in the first place.
The part of your post I took issue with is when you labeled him greedy and selfish. I just think that’s a little narrow-minded especially since he could easily go to work for the opposing team and probably make more money selling these exploits to criminals, assuming he was greedy enough to do so.
He sat on the vuln for a year, intentionally saving it for this competition. How many criminals found that same vulnerability in that time? How many users were left hanging, unknowingly? Not even a bug report.
Wanting monetary return is one thing; we all have to eat. That suggests approaching the relevant company in a timely manner though. We want companies to view vulns and issue a patch the day after they are notified of it but that has to go both ways. This is starting to sound like Microsoft business strategy; release the “innovations” as slow as you can to maximize shareholder profits rather than user benefits… booo..
No doubt he’s smarter than me, but I think the enthusiasm with which he’s pushing to be paid, and the decision to leave users vulnerable for a money-shot pearl necklace, is in bad taste.
Come on, sec devs: those of us in infosec who don’t do dev work are out here mitigating when we could have patched long ago and had safer users.
Why?
Apple decided not to release their code, why would they have a right to know the exploits other people find for them?
They’ve chosen that model, now they have to deal with the downsides.
Do you even know what you’re talking about?
Safari’s engine (WebKit) is released as fully open source, and it’s used by many other browsers, including Google Chrome.
A web browser is not just its rendering engine.
If it was, Chrome would have the same vulnerabilities as Safari.
Please learn to shut up when you don’t know what you’re talking about.
Depends on what the vulnerability is. If you can find a bug in the way a CSS file is parsed (as an example), you’ll probably find you could craft an exploit that would work on most of the browsers that use that WebKit code.
Chrome runs on Windows. The hacker was implying that the randomization support in Windows is the reason Chrome gains that security.
Chrome on OS X would have that vulnerability, until 10.6 arrives.
You are wrong. The reason Chrome specifically is so secure is because of the sandboxing.
You are right on target, Thom. Charlie specifically says that to hack Chrome you have to find two vulnerabilities: one in the sandbox and one in the browser.
That’s one of the reasons, but as I’d indicated elsewhere in the other article it isn’t just about the OS itself. This is a quote from Miller:
Not necessarily I’m afraid, although OS X itself might make it easier on another level. As Miller says, finding a potential entry is one thing. Turning it into something you can actually ‘exploit’ is a different ballgame.
Incredible post.
1: you were willing to backstab your buddy Apple by claiming Safari was compromised while Chrome wasn’t, only because it runs on OSX instead of Windows
2: you were wrong.
Massive footbullet.
If a vulnerability lies in WebKit, what about it being open source says that Google didn’t modify WebKit?
You’re right that Safari isn’t totally open source but that doesn’t mean that the vulnerabilities aren’t in the open source portions.
I don’t use Safari on either platform because I don’t trust Apple, since they don’t seem to care. Mozilla’s Firefox developers care more, but there are still plenty of vulnerabilities; it’s completely open source and users still get hosed.
Since the exploit in this case is for Safari, Apple are in fact releasing the code, so if it were a question of reciprocity the guy has no excuse.
But availability, or “openness” if you will, of the source is not the issue here. This guy is supposedly a white hat (a.k.a. “security researcher”) and as such is supposedly trying to find exploitable holes so they can be fixed before people get harmed. Sitting on an exploit for a year so you can get a free laptop and 15 minutes of fame is certainly black hat, and arguably borders on criminal.
He is free to not research Apple’s software and get a paying gig for someone else or apply for a job at Apple.
Repeat with me: SAFARI IS NOT OPEN SOURCE.
Webkit is open source. Safari isn’t. There’s a huge difference there.
Both Chrome and Safari use WebKit (well, technically WebKit includes a JavaScript engine too; Google has its own, called V8). The issue is that Chrome was probably developed from a branch that differs significantly from the WebKit Apple uses itself, and the build of WebKit used by Safari is different from Chrome’s, which means there will be differences.
You are right that Safari itself isn’t open source, just the core (WebKit), but that ignores the fact, like I said, that different builds combined with branches/forks that emerge later on result in different outcomes in the final product.
What’s the difference from what a salaried security researcher does? The negotiation up front? I’ll guarantee you this guy is making less because he’s doing it on his own terms, working his own hours. He’s no more black hat than Microsoft, which sits on known vulnerabilities for more than six months. Also, the fact that he knows something doesn’t oblige him to do a damn thing.
“I have a new campaign. It’s called NO MORE FREE BUGS.” “What’s the ballpark value of that Safari bug? It was probably more than that $5,000 prize I won.”
Meaning he probably used to do this for free; nobody gave him a job or money (read that to mean greedy Apple). Now he has a nice resume, industry recognition, some money, etc. I could spend my time walking around making sure old people get across the street for free. Instead I put food on the table. Are you evil because you know how to do something good, but don’t? Ask yourself that again next time you fire up Half-Life instead of inviting homeless people into your house. He didn’t sell to criminals! I believe Mozilla has a $500 bounty on bugs. MS and Apple could easily put a $5,000 bounty on exploitable bugs. Put your hate where it belongs.
In the interview, Miller stated that the underlying OS had as much (if not more) to do with enabling the exploit as the browser itself.
As much as I’m a fan of the reciprocity principle, I don’t think it applies in this case. Unless Apple has released the full source for OS X and I managed to miss it.
Alright, so by this logic, if you find a fatal flaw in, say, a car from Ford, the right and responsible thing to do (since Ford’s designs aren’t “open source”) would be to sit on it for an undetermined amount of time until you’ve found a way to trigger it. Once you’ve done that, you do NOT tell the public what the problem is; instead you try to “extort” money from Ford in exchange for not letting anyone know.
Yes, that’s surely a society I’d love to live in.
Get this straight: it has NOTHING to do with whether Apple’s product is open or not. It’s about the risk the consumers and the general public are exposed to.
Well put, completely agree.
Why not blame FORD executives for refusing to buy the information about defective cars, thereby exposing their customers to the risk?
I’m with Miller on this one, to some extent. Selling the information to criminals would be wrong, but I don’t think anyone should work for free for closed-source, IP-paranoid company who boasts making a highly secure and usable operating system. In fact, I’d say it would be extremely shortsighted to help these companies for free instead of contributing to improve FOSS alternatives. Remember the exploit is not just about Webkit, not even about Safari. The whole OS matters for the exploit, and OSX is not open source.
Except that, for one reason or another, they don’t know? And even if they did, who cares who’s to blame? Wouldn’t it be more important to save lives than to play petty blame games? I presume you would gladly let people suffer and die just to point the finger at the execs?
The blame can be assessed at a later time, it won’t go away just because you expose the problem. If you keep the problem secret and sell it to them silently there sure as hell won’t be any blame dished out.
Good job missing the point again. It’s not about who is closed source and evil or has brown pants or whatever. It’s about behaving responsibly and not leaving the general public exposed to danger.
He sat on the bug for a year. FOR A YEAR. Two wrongs do not make a right.
Yes, because all software must be FOSS. It magically makes everything ok. Blah blah blah.
Who cares if it’s not open source? That’s not the point. The point is to not expose the unknowing consumer to risks.
In the example they do know someone claims there’s a problem, and they refuse to buy the details. It’s easy to go like “would you let people suffer and die…?”. You might as well claim that doctors should work for free, and not just doctors, but (more to the point) engineers and anyone whose work may somehow save lives or reduce human suffering.
Look, the bottom line here is that desktop operating systems nowadays are extremely vulnerable to malware, and should never be relied upon for any kind of life-sensitive use, unless complete isolation is guaranteed. If a cobalt-60 unit is hooked to a Mac where medical students surf through porn sites, and something bad happens because of that, the last person I would blame is the guy who failed to disclose a Mac vulnerability for free.
Congratulations, you once again missed the point. I’m baffled by the egoism on display in this thread.
It’s not about working for free, it’s about not withholding important information.
To use your doctor comparison: say a research doctor working for company X discovered a serious, perhaps fatal, flaw in a drug manufactured and sold by company Y. Company Y may or may not be aware of this flaw, and certainly they should have a QA process that would have found it. Maybe it was a mistake; maybe someone turned a blind eye. Now, this doctor also knows there’s a big medical conference a year from now, and it would be a boost for his career and his company if he could show off his finding there. What you, and many others here, are suggesting is that it is perfectly acceptable for this doctor to withhold this crucial information from the public and the authorities simply because he wants to further himself and his company and make a buck.
I would hope that it was obvious how callous and selfish this line of reasoning is.
Any comparison in this case that involves people dying is completely over the top. We’re talking about a bloody software bug here; that’s not a life-or-death matter.
I know the internet has a tendency to bring out everyone’s inner drama queen but this is just over the top.
I guess you didn’t notice that we are comparing the actions, not the results. Ethics doesn’t go out the window just because the results aren’t life-threatening.
Replace “death” with injury or whatever.
Oh the irony.
It’s not like you just stumble upon software vulnerabilities the way you would see a crack in a bridge, unless you are a developer of the particular project where the vulnerability is found. It’s useful work, and very difficult work at that. Disclosing it for free is working for free.
That’s killing the messenger. If, instead of being glad that someone found out about a problem, he’s immediately accused of being “callous and selfish” for wanting to make a buck out of it, fewer people will bother and fewer problems will be found in time.
In your example, maybe the research doctor would do the “right thing”, disclose the information as soon as possible and, let’s assume, earn no money or fame. But then he would decide that trying to find flaws in company Y’s products does not pay, and he would find something else to do.
What do medical researchers do in practice? Well, they are rather callous and selfish by your standards. They don’t go out screaming as soon as they suspect some pharmaceutical drug may be harmful. They take the time to gather information, double-check their results, write a nice article and then find a suitable medical journal to publish those results. The journal’s reputation matters more than the money they get for the article, but the incentive is ultimately economic.
My point is, if medical researchers were seriously expected to behave, as you say they should, with no regard for their personal benefit whenever human lives are at stake (which is usual), there would be precious few of them.
I see you are trying a reductio ad absurdum, but beware: you are getting something of a slippery slope fallacy. Your argument goes like “if you find it morally acceptable for someone to let other people suffer some minor harm to their property instead of warning them, then you also find it morally acceptable for someone to let other people die when he could save them”. Of course, it doesn’t follow. On the other hand, as it turns out, I’m not morally outraged that someone whose job is such that life-and-death decisions are his bread and butter will let some people die from time to time, usually in a non-obvious way. It’s a hard thing to say, but the example itself is hard to begin with. You smuggled the topic of death and suffering into a discussion about browsers and malware, and that move clouds rather than clarifies the arguments.
I am sorry to jump in at the tail end of this… In the interview I just posted, it sounds like he basically did it for free: the payoff wasn’t big enough in itself to justify the amount of time that went into pulling it off. However, the economic factor is there as well. His company is hired to find these types of issues, so if anything it builds a good rep for his company.
The exploit Miller used last year was in the open-source WebKit part of Safari. (In fact, it was in a third-party library used by WebKit, and not a bug in Apple’s code as such.) It’s likely, though hardly guaranteed, that the bug he used this year is also in WebKit, since he’s said before that he discovered it at the same time. (By the way, he found the bug by reading source code. Pretty cool, huh?)
Since Chrome uses all the same WebKit code as Safari, it’s likely that both of these bugs are (or were) present in Chrome. The exploits would still be very different, though: The initial bug will get you through the front door, but it won’t lead you to the self-destruct button.
It’s true that Safari’s interface is closed-source, but it’s also true that fixing a WebKit bug would benefit the open source community, because that’s public code used by a number of browsers.
According to rumors on another site I read, the exploit wasn’t in WebKit per se, but in a third-party (open source) library used by the JavaScript engine. The real kicker, according to the same post, is that the bug Miller exploited had already been found and fixed upstream, but Apple is using an old version of that library that still has the bug.
Of course the only people who actually know what happened are under NDA, so take this with a grain of salt.
And this is yet another clue for the people who claim OS X is so f–king secure: why on earth do they lump a bunch of security fixes into “Security fix Q1 2009”, for instance? If they were serious about fixing the issues they would patch each issue immediately and release an update, but they don’t. So you have plenty of unpatched stuff until they decide to release their big patch.
It’s about the users. Why should the users be left vulnerable to a known exploit just because Apple’s business model isn’t the same as Red Hat’s? Do you extend the same courtesy to Microsoft? Is it OK for Microsoft’s poor quality control to cause losses among the user base because they don’t follow a FOSS business strategy?
Please..
So, what code are you referring to? Do you know the exploit? Also, if said code was open, who now is to blame?
Selling bugs/exploits for money is pretty low… He is almost in the same league as virus writers…
I don’t think he intends to sell to some criminal organization, which I am sure he would have no problem doing, for more money too. And selling exploits to the makers of the software: what’s wrong with that? He spends a lot of time and effort to find these exploits. Why should a software company that makes a lot of money from that software be entitled to the results of his hard labour for free? That’s just like saying that getting paid to develop software is low.
Nothing. But it needs to be done exactly the opposite way of what he’s doing. He should have contacted Apple with the proposition to search for exploitable bugs at whatever terms he has (flat fee, per issue fee, whatever). If they had refused – move on to the next company. What he’s doing now is surprisingly similar to extortion. “Boy, Apple, you have a mighty fine browser there. It’d be a shame if something bad happened to it. Care to give me a token of appreciation?”
Let me get this straight: you are implying we should all take our hours of work and donate them for free for the betterment of a corporation (Apple in this case)? You surely cannot believe that Charlie’s long hours of work are worth nothing and that he should just donate his time to Apple so they can make billions off his work.
Charlie deserves to get paid for his work and I stand by his decision to offer no more free bugs. Apple doesn’t give OSX away for free, so why should Charlie donate his work to Apple?
No
Let me put it another way: if somebody spends a significant number of hours finding a way into your home, without you asking him to do so, does that mean it is OK for him to sell that information and make money from it?
BTW, people who make phishing sites also spend hours of work; does that make it OK for them to make money from them?
Finding security holes (especially without being hired by the target party) is a gray area, because the use of the found breaches depends entirely on the morals of the person.
Greed for money usually leads to questionable moral choices.
Exactly. The moral choices are what differentiate a white hat from a black hat.
Nobody is doing that, though, and regardless of what Mr. Miller does, there will be people spending hours breaking into Apple and Microsoft systems. If I knew for a fact that people were spending hours trying to figure out how to break into my home, I would happily pay a grey hat to preemptively find those weaknesses. And I completely understand if a grey hat doesn’t want to give that info to me for free.
Mr. Miller works for a security company. Obviously they are not getting paid by Apple for their bugs. They are probably paid by banks etc., so Mr. Miller does some Apple research to let the banks know what their risks are in using Apple software. It’s like working as a plumber: I don’t see plumbers going around notifying us of potential leaks in our houses that could cause damage.
Hell, I wish I could be so lucky as to have an electrician come over, audit my house, demonstrate a fire hazard, then ask for money to fix it. If he can’t, he walks away broke. Does anyone go around inspecting car brakes for free?
You are comparing apples and oranges (literally, when talking about Apple). I don’t make billions of dollars from my home by claiming it to be the most secure house when it’s as easy to break into as knocking out a window. Apple makes billions promoting OSX as the most secure OS, and yet they have failed to lock it down properly. They need to hire guys like Charlie if they are ever to fully secure the OS and browser.
Charlie finds an exploit in a proprietary OS and browser after many hours of careful research. He is not obligated to give Apple that info for free when Apple sells OSX for $129 a copy to millions of users worldwide. Charlie works for a legit security company, and he used this exploit to display his talents so that in the future Apple would give his company business finding exploits. There is nothing fishy going on here and there is no selling to the highest bidder. It was advertisement, pure and simple, yet you keep reading his comments on the subject as implying that his exploits were for sale to the highest bidder. He never said that, nor implied it whatsoever.
True.
However, don’t forget that this “finding security holes in others’ products for money” is called a gray area for a reason.
I can understand that it’s not easy work. You get a lab, you get the product, you fuzz it along with whatever other methods you use. That all takes time, and that means long periods between pay cheques if you’re working purely contract/bounty. Charlie absolutely deserves to get paid. He should have presented the bug and reached a reasonable agreement for compensation a year ago, though. It’s not that he shouldn’t be paid; it’s that intentionally leaving users who can’t fix their own systems open to damage becomes very questionable.
Personally, I’d love to see the user base turn on MS and Apple, demanding higher product quality. I’d much rather see a better-designed product trump both those retail items. Until either of those outcomes, we need to do all we can to protect ourselves and our clients.
No, I think everyone agrees that we should be paid for our talents. It is not a matter of getting paid; it is how you get paid. Charlie’s pompous attitude and extortion-like mentality is pretty low.
What if paramedics worked like this? “Sorry sir, I can stop that bleeding gash in your head if you pay my fee.” NO MORE FREE FIRST AID!
If Charlie is so smart, he would take his talents to a proper company or contract with companies to find vulnerabilities, not hold on to the crap for a year to win a prize and try to extort the company…
I fail to see where in that ZDNet interview Charlie had a pompous attitude or an extortion-type mentality. He very clearly says that Apple hires and pays people to find these exploits and that the work he does has market value. That is a statement of fact, so I just don’t see how you find that extortionate or pompous.
Paramedics are a bad example, because they may patch you up without charging up front, but they are damn quick to bill you $400 to $1,000 just to put a bandage on you. You are still obligated to pay them, and if you refuse they can sue you and garnish your wages. FIRST AID IS NOT FREE!
On the last part, did you even bother reading the interview before commenting? I am directly quoting Charlie now.
http://blogs.zdnet.com/security/?p=2941
According to that post, Charlie works for a legitimate security firm and held on to the bug to display his talents and to promote his employer, to gain business from companies like Apple. In fact, according to Charlie, he lost money on this contest rather than gaining it. That makes your entire criticism of what he did a moot point.
So your doctor comes to your house and gives you free medical exams? If he doesn’t, he’s clearly evil. Letting you die from a preventable disease like that. Terrible!
I wonder if this won’t sooner or later provoke a DMCA violation lawsuit against him.
Exploits on OSX “just work”
That is precious.
Is this guy on Crack?
Address Space Randomization ain’t the Panacea this guy makes it out to be:
http://crypto.stanford.edu/~nagendra/papers/asrandom.ps
Address-space randomization is a technique used to fortify systems against buffer overflow attacks. The idea is to introduce artificial diversity by randomizing the memory location of certain system components. This mechanism is available for both Linux (via PaX ASLR) and OpenBSD. We study the effectiveness of address-space randomization and find that its utility on 32-bit architectures is limited by the number of bits available for address randomization. In particular, we demonstrate a derandomization attack that will convert any standard buffer-overflow exploit into an exploit that works against systems protected by address-space randomization. The resulting exploit is as effective as the original, albeit somewhat slower: on average 216 seconds to compromise Apache running on a Linux PaX ASLR system. The attack does not require running code on the stack.
We also explore various ways of strengthening address-space randomization and point out weaknesses in each. Surprisingly, increasing the frequency of re-randomizations adds at most 1 bit of security. Furthermore, compile-time randomization appears to be more effective than runtime randomization. We conclude that, on 32-bit architectures, the only benefit of PaX-like address-space randomization is a small slowdown in worm propagation speed. The cost of randomization is extra complexity in system support.
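The abstract’s core point is just bit-counting, and the 216-second figure drops out of it. A rough sketch of the arithmetic (the 16-bit entropy figure is the paper’s 32-bit PaX scenario; the guess rate is my own illustrative assumption, not a measurement):

```python
# Back-of-the-envelope for a brute-force derandomization attack:
# with n bits of address-space entropy, the attacker hits the right
# layout after about half the search space, on average.

def expected_guesses(entropy_bits: int) -> int:
    """Average number of tries before a guessed layout is correct."""
    return 2 ** (entropy_bits - 1)

def attack_time_seconds(entropy_bits: int, guesses_per_second: float) -> float:
    """Expected wall-clock time if each wrong guess just crashes and
    respawns the target (e.g. an Apache child) at the given rate."""
    return expected_guesses(entropy_bits) / guesses_per_second

if __name__ == "__main__":
    # ~16 bits is roughly what PaX ASLR offers on 32-bit x86, per the paper;
    # 64-bit systems can offer far more. 150 guesses/s is an assumed rate.
    for bits in (16, 28):
        t = attack_time_seconds(bits, 150)
        print(f"{bits} bits: {expected_guesses(bits):,} expected guesses, ~{t:,.0f} s")
```

At 16 bits this lands within spitting distance of the paper’s 216-second Apache figure; a dozen extra bits of entropy push the same attack out to more than a week.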
———–
Vista has its problems with ASLR as well:
In a nutshell, Whitehouse found that Microsoft’s implementation of ASLR isn’t 100 percent effective against automated malware attacks that rely on predicting the memory layouts of loaded programs.
Our research also shows that applications that leverage the Microsoft HeapAlloc() function are not afforded the same level of protection as those that leverage the ANSI C heap allocation API malloc(). As a result, third-party software that explicitly uses Microsoft’s API is potentially more vulnerable to exploitation than software that does not. Also apparent is that using CreateHeap() followed by HeapAlloc() improves the entropy slightly over using malloc() alone. Finally, results show fewer consecutive duplicates than expected in the PEB randomization. This result adds to the evidence that the source of entropy used within ASLR is poorly used.
————-
One loudmouth spends a year cracking a Mac,
and does nothing about the real-world Conficker infection.
————-
In the real world you’re still better off with a Mac, using VMware Fusion when you need to run Windows. In Mac Preferences, Security, Firewall, turn on “Allow Only Essential Services”, don’t browse this cracker’s web site, and maybe install Chrome.
—-
But if you have to go Vista, go 64-bit for security; except there won’t be the drivers you need.
I think he’s right to seek money. Exploiting bugs isn’t such an easy task. You say it screws people over when there are bugs, but hell, shouldn’t the company you bought the system from find and fix them? That’s what he’s saying: he is doing their job. Life is not flowers and trees and birds everywhere; he should get paid. At least he’s taking a “right” path in selling his services to the company instead of to the real bad guys…
And this was a contest; contests have prizes; he won. Grats?
So the anti-exploit features are not bad, but they do not matter a lot. If there is a vulnerability, it WILL be exploited, no matter how much effort it costs. If it is harder to exploit, the bug/exploit price will be higher, attracting more hackers. It is ostrich-like to believe a vulnerability exists but that nobody will know it because of obscurity.
I don’t say providing those features is bad, but they matter little. Fix security holes and provide updates as soon as possible. Make your applications run with least privilege.
Euh, did you read the same article as everyone else?
Specifically, he said that once the OS applies some security measure, an exploitable bug in an app becomes difficult to exploit, and exponentially more so the more measures there are.
So, no, you are wrong and Apple IS wrong.
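One way to read the “exponentially” claim is as a chain of independent hurdles: if each mitigation independently lets a given exploit attempt through with some probability, stacking them multiplies the odds down geometrically. A toy sketch (the per-mitigation probability is invented purely for illustration, and the model assumes the bypasses really are independent):

```python
# Toy model: n independent mitigations (DEP, ASLR, stack canaries, a
# sandbox...), each of which a single exploit attempt survives with
# probability p. Combined success is p**n: exponential decay in n.

def combined_success(p: float, n_mitigations: int) -> float:
    """Probability that one attempt slips past all n mitigations."""
    return p ** n_mitigations

if __name__ == "__main__":
    p = 0.1  # assumed per-mitigation survival probability
    for n in range(4):
        print(f"{n} mitigations: success probability {combined_success(p, n):g}")
```

Whether real mitigations behave independently is of course debatable; an info leak that defeats ASLR in one place often defeats it everywhere at once.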
Make exploits harder, yes. Exponentially? I don’t think so. After Windows adopted some anti-exploit features, exploitation became less straightforward and less convenient. But in the end there is some programmatic way to automate the exploitation procedure, as long as the anti-exploit features are themselves programs. So breaking an anti-exploit feature is a one-shot effort, not an exponential one. And saying that anti-exploit features are not bad is enough to make me NOT WRONG. They just don’t matter much.
It’s pretty costly to develop an exploit against a Vista flaw. From Immunity Inc:
http://www.immunitysec.com/downloads/ApologyofOdays.pdf
Page 37: From Bug to Reliable Exploit on Win2k – ~12 days
Page 38: SP2/2k3 – ~20 days
Page 39: Vista – ~40 days
If it takes that amount of time for an expert researcher who is known in the ‘grey’ community for coming up with exploits in difficult areas, then chances are good that the average pre-packaged vulnerability will be quite expensive, and a lot of potential purchasers will be discouraged.
Also if the learning curve for exploit writing is steep enough maybe people will stop looking so hard (who’s going to spend that much of their life looking for something when few people ever succeed?).
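Those day counts translate straight into a price floor for a finished exploit. A minimal sketch (the researcher day rate is a made-up assumption; only the day counts come from the Immunity slides quoted above):

```python
# Rough cost model: exploit development days (from the slides) times an
# assumed researcher day rate gives a lower bound on what a finished,
# reliable exploit has to sell for to break even.
DAY_RATE_USD = 1500  # assumption, not from the slides

DEV_DAYS = {"Win2k": 12, "XP SP2/2k3": 20, "Vista": 40}

for target, days in DEV_DAYS.items():
    print(f"{target}: ~{days} days -> ~${days * DAY_RATE_USD:,} to develop")
```

On those assumptions a reliable Vista exploit costs more than three times a Win2k one before it earns a cent, which is the “discouraged purchasers” argument in numbers.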
No, they won’t stop at Vista. As Miller mentioned, it is simply economics: when something is more difficult to do, it commands a better price on the market, attracting more people to do it. And those 40 days versus 20 days, the extra 20 days, mean very little. That extra time is not time given to Microsoft to provide a patch, because the attacker won’t report the bug to Microsoft when he or she starts exploiting it.
On the other hand, anti-exploit features actually increase the maintenance cost of a system. Core dump information gets scrambled, and debugging a crash becomes harder, too. The debugger then has to become more complex, and a more complex debugger is itself more buggy. And once a debugger matures, its algorithms and implementation will be shared with hackers looking to work around the anti-exploit feature.
I think you have a misunderstanding here. Anti-exploit technologies usually aim to make the program crash more readily when it is exposed to malicious data. If the crash happens closer to the point of failure, it becomes easier to understand the bug and to debug problems. None of the mitigation techniques we use increase the obfuscation of the code.
I think you misunderstand the “obscurity” in “security is not obtained by obscurity”. Here, obscurity has nothing to do with the obscurity of the source code or, in some cases, even the executable code. Here “obscurity” means a lack of transparency and straightforwardness.
I don’t know what you mean by “crash more readily”. Anti-exploit features do not prevent the crash, so they do not increase stability. Nor do they prevent a hacker from doing malicious things; they only slow him or her down. But as I said, time does not matter as much in exploitation as it does in other attack-defense games. That’s why I am skeptical about anti-exploit features.
In fact, in the exploitation world, there is a certain sort of bug that is by nature anti-exploitation: heap corruption. Unlike the stack, the heap is dynamic; its layout is almost random. So by nature a randomization feature is not much more obscure than heap corruption already is. Yet even heap corruption is attackable, and there are a number of techniques for doing so.
As long as Apple finds it profitable to advertise OS X as a really secure system, and people are still willing to believe the myth, Apple doesn’t have much incentive to make OS X live up to the claims. After all, security through obscurity has worked pretty well for them so far.
I just can’t understand why readers of this article don’t get it… Miller says: “The things that Windows do to make it harder [for an exploit to work], Macs don’t do. Hacking into Macs is so much easier. You don’t have to jump through hoops and deal with all the anti-exploit mitigations you’d find in Windows.”
FACTS:
I own a Mac, an Ubuntu box and a Windoze box…
Out of the three, my Mac is the easiest to break into.
Anyone sitting on a Mac like me: your system is vulnerable.
This is not life or death (Ford example) situation.
He is showcasing his talent for hire.
If Apple is serious about finding bugs they should pay him for his information.
Chrome is harder to crack
Beliefs and facts are very different… Don’t lose your ability to think critically because of your love for a stupid operating system or technology. Apple is in business to make money, i.e. take your money. When the warranty goes on your Mac, don’t expect Apple to fix it for free, and don’t expect someone out there in cyberspace to help Apple for FREE.
When you talk about a market for bugs and exploits, the detail is that the ONLY market for these is the criminal market. Period.
If there were no criminal element ready to leverage these defects for criminal purposes, then when you went to the developer and tried to sell the bug back to them, there would be no urgency to solve the problem.
Because the fact is, these bugs do not affect the overall user experience of the software; rather, they affect the security of the software and the underlying system. Because there is a criminal element out there willing to leverage these exploits, their exposure poses a real threat to the community and user base at large.
So, that brings us to the value of these exploits. As a purveyor of the exploit information, there is only one legitimate market for it, one perhaps semi-legitimate market, and an illegitimate one.
The only legitimate market is the vendor, so they can repair the defect, which arguably is the only “good” use of the information.
The “semi-legitimate” market may be State agencies, such as Intelligence or Law Enforcement agencies that might enjoy leveraging such flaws in order to further their covert operations. Obviously this market can be viewed through tinted glasses as to whether these are a “good” or “bad” use of this information.
Finally, the illegitimate market, which is the criminal one. This shouldn’t even be a consideration, but obviously it is.
However, only by holding the existence, and potential release, of the information over the criminal market does the information offer any “real” value. The “IE bug is worth 10 times the Safari bug” claim is indicative of this, because the ramifications of the bug “getting into the wrong hands” are so much more dire.
If releasing it to the criminal market is NOT an option, then it’s back to “security through obscurity”. Except that now the criminal market may be aware that SOME flaw exists, while not knowing what it is, or perhaps not even having the expertise to exploit it.
However, it can be argued that the more skilled the investigator needs to be, and the more difficult the exploit is to discover and leverage, the more “obscurity” there is to the bug. And it’s arguable that these “really hard” bugs found by “really talented” people are less likely to be exploited, due to their difficulty. That can actually LOWER the “value” of the knowledge, since the value reflects how likely the exploit is to be used “in the wild”.
It’s all about risk and mitigation. While there is the search for perfection and completely bug-free, safe software, there are also the basic economics of risk/reward and return on value.
The more obscure, and the more difficult an exploit is, the less value there is to the vendor because of the lower risk of actual exploit — assuming the knowledge remains secret with no threat of it being exposed to the criminal market.
But if you are using the possibility of the criminal market getting the information to inflate the “price”, that’s effectively blackmail. The criminal market is what truly values these exploits, because criminals profit the most from them.
So, it really comes down to the person with the information and their character as to how they value the exploit.
Well, at least I hope that they are, and that’s the impression I got.
Leopard brought lots of new security features, like sandboxing, mandatory access controls, address space randomization, application signing, and execution protection:
http://images.apple.com/macosx/pdf/MacOSX_Leopard_Security_TB.pdf
but it seems like they only laid some of the groundwork and didn’t really implement it:
http://www.laconicsecurity.com/aslr-leopard-versus-vista.html
There’s still a way to go, but they’re not just sitting on their lazy asses either.
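The ASLR comparison linked above is about exactly this kind of entropy. As a rough illustration (a toy simulation of my own, not tied to either OS’s actual implementation), here is why even partial address randomization blunts exploits that hard-code an address: with zero bits of entropy the attacker’s baked-in guess lands every time, while with 16 bits it almost never does:

```python
import random

def exploit_success_rate(entropy_bits, attempts=10_000, seed=0):
    """Simulate an attacker whose exploit hard-codes one guessed
    address slot. With 0 bits of randomization the guess always
    lands; with n bits the real base is one of 2**n slots."""
    rng = random.Random(seed)
    guess = 0  # the attacker's single hard-coded guess
    hits = sum(
        1 for _ in range(attempts)
        if rng.randrange(2 ** entropy_bits) == guess
    )
    return hits / attempts

print(exploit_success_rate(0))    # no randomization: succeeds every time -> 1.0
print(exploit_success_rate(16))   # 16 bits of entropy: almost never lands
```

The numbers are illustrative only; real ASLR implementations differ in which regions are randomized and with how many bits, which is precisely what the linked Leopard-versus-Vista comparison measures.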
The problem is that browsers, in concert with JavaScript, basically allow arbitrary code execution on your machine by potentially anyone on the planet. Call me skeptical, but making such a thing secure _and_ convenient at the same time seems like an intractable problem, and no amount of indirection is going to change that.
JavaScript is used as a DOM scripting language, though, so it’s not arbitrary code. To get arbitrary code to run, you need embed and object tags, plus some kind of browser bug to get it to execute the plugin outside of any sort of sandbox.
I should have said arbitrary input, but I don’t think you can say that JavaScript is strictly restricted to DOM manipulation either.
The point stands; the end result of all of this is endless turd polishing. We start with a turd; we end with a smoother turd, but it’s still a turd nevertheless.
Good thing he used his bug this year. Randomization and new security designs in 10.6 eliminate this “flaw.”
Wow. This article really made me think. I’m not very technical, though I am learning more and more (thanks to OSNews); just a typical end user. I always assumed my Mac was secure because, well, it’s a Mac. When I’m out in the Mac forums looking for advice on security, I always run across comments like “Mac isn’t Windows, so you won’t need this or that protection”. I just assumed everything was hunky-dory then, especially since I’m a Firefox guy to boot. I guess I was mistaken, or complacent. Well, back to the forums for more searching on how to beef up my MacBook. Here’s to hoping for a quick release of Chrome for OS X. (Or back to openSUSE :), I miss Amarok (pre-2.0) anyway.)
You don’t need the forums to properly secure OSX. Just make certain you are behind a hardware firewall (most routers have this built in), install Firefox with NoScript and Adblock Plus, and, if you can afford it, buy a decent anti-malware suite for your Mac. Then just practice safe surfing and email habits, and the likelihood of your being exploited is very small.
(This advice also applies to Windows, with the exceptions that you need to run as a standard user and turn on DEP and ASLR, and that you have several pretty good free anti-malware suites to choose from. I suggest Avira for a free one and Kaspersky for a paid one.)
Thanks, soonerProud. I will do all of those things.
Still, you probably are much more secure on the Mac than on Windows anyway. This is simply because Macs are targeted very seldom compared to Windows machines (not the most technically stylish way to stay safe, but very effective). So, really, just making sure you are patched up and taking care when surfing iffy sites tends to be plenty.
That is very bad advice; relying on security by obscurity and patching your machines is not enough. Good security practices require a layered approach on all OSes. The problem with relying on just those two things is that Apple has been notorious for being slow to patch flaws, and the game could change at any time; there are signs that is happening now.
With Apple approaching 10% US market share, and with the popularity of the iPhone in North America and parts of Europe and Asia, OSX is starting to be a lot less obscure. Trojans now exist for the Mac, and grey-hat hackers are now demonstrating how easy it is to hack one. Let’s not forget that mobile OSX is really popular to crack and unlock. With all the media attention on the ease of exploiting OSX, cyber criminals now have a new target for easy pickings to obtain private and banking info. Mac owners tend to be well off financially compared to most of their PC counterparts and are much more lackadaisical about security in general. People who rely on security by obscurity are about to get a huge wake-up call when thousands to millions of Mac owners have their personal information and identities stolen.
To sum this post up: the layered approach I suggested earlier is the only way to secure any PC connected to the net, regardless of OS. None of the suggestions I gave will interfere with the end-user experience, and they may actually enhance it in the larger scheme of things. Buying an anti-malware suite should not be an issue for someone who could scrape up the money to buy the Mac in the first place, especially when your identity and bank account are at stake.
A bit late, but still: the reality of the matter is that the Mac is by far safer than Windows, and I think it doesn’t hurt to take that away from this. Not that one should feel overly safe, certainly, but there are some rewards in a more heterogeneous technology landscape that should not be ignored.
Certainly no Mac user should feel the need to switch away from their platform on account of security. It is not really thanks to Apple (except in that they failed to command a very large portion of the market), but it is still a pretty safe place to be.
For that very reason I am also a bit wary of the suggestion to use Firefox, actually, since I would prefer that Safari/WebKit grabbed a bit more market share to even things out. Firefox is becoming a big target, with both the downside of pages catering to IE/Firefox and the obvious cracker attention.
But I won’t say that you are *incorrect* in your suggestions, just that some notes on the realities of the dangers may be missing.
NoScript should mitigate most of the security issues with Firefox when used properly, and it is easier to use than blacklisting sites and adding exceptions in Safari. I wouldn’t use either browser without more fine-grained control of scripting on OSX.
It’s good to see users taking an interest in making sure their computers are secure; lord knows if more users were like you, there wouldn’t be outbreaks of worms left, right, and centre.
As someone mentioned previously: use a good firewall, which most routers have already (I have an ASUS WL-500W, which is brilliant; my cable modem is hooked up to it, and in turn I am hooked up to the router), keep your software up to date (go to macupdate.com), and don’t visit dodgy sites. It’s like walking home late at night: do you walk through the safe areas, or do you walk through the dodgy alleyways thinking that because you have a baseball bat (security software) it will protect you from a thug?
…pissing contests. They’re hilarious. But instead of splashing acid all over the walls, let’s look at some FACTS, something many journalists these days do their best to avoid.
– Despite comments by Miller (and I’ll get to him in a minute) and people like him, OSX has not yet been broken in any meaningful way that didn’t involve a user action. Yes, users are the weakest link, and some of them are just above green slime on the intelligence meter (although thankfully not many of those use Macs; here come the flames!), but that doesn’t change this FACT.
– Microsoft are doing a MUCH MUCH better job at security on Windows than they have previously. XP SP2+ and Vista are quite secure if configured properly. The fact that Microsoft has had to build a lot of this stuff in is testimony to the fact that Windows was HORRIBLE at security for so long. The fundamental foundation of OSX and similar OSes is more secure. Does this mean Apple shouldn’t be adding the additional levels of security? Of course not, but see #1 above for the reason why it hasn’t had to be a priority for them.
– At no point does Miller, or any of these hackers, disclose exactly how long it took them to find the exploit and then build some code to take advantage of it. Regardless of the spin put on this by interviews or commentaries, the FACT remains that these guys worked on these things for some time; it didn’t just happen in a few seconds, minutes, or even hours.
– The fact that Chrome performed so well is maybe an indication that Google has the model right; maybe this is a model that needs to extend beyond just web browsers?
– Apple and Microsoft (and others too) currently pay people a LOT of money to look for these exploits, well into six-figure incomes in many cases. Miller is out to promote himself and his abilities so that he can sell himself to the highest bidder, nothing more, nothing less. Yes, he is good at this stuff, but that doesn’t make him any less a salesperson just trying to get the most for his service. Unfortunately he has the assistance of at least one “journalist” in this pursuit. Personally (and even Miller said it, sort of), I think this Nils guy/gal is worth more: exploits for multiple browsers on multiple platforms is very nice work, and I suspect someone will want to pay well for that ability. At least Nils wanted to remain anonymous, so it’s clear he/she isn’t JUST in it for the publicity. Unlike some, and the journalists(?) who give them the “air” time…
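On the Chrome point above: Chrome’s resilience is widely attributed to its process sandbox, where the renderer that parses untrusted web content has to ask a privileged broker process for anything dangerous. A toy sketch of that split (the names and the policy here are my own illustration, not Chrome’s actual code):

```python
# Toy sketch of a broker/renderer privilege split. ALLOWED_PATHS and
# broker_open are hypothetical stand-ins for a real sandbox policy.

ALLOWED_PATHS = {"/tmp/render-cache"}  # hypothetical broker policy

def broker_open(path):
    """Privileged broker: the only code allowed to touch the OS."""
    if path not in ALLOWED_PATHS:
        raise PermissionError(f"sandbox policy denies {path!r}")
    return f"<handle for {path}>"  # stand-in for a real file handle

def renderer(requests):
    """Sandboxed renderer: it can only ask the broker, never open
    files itself, so a compromised renderer stays contained."""
    results = []
    for path in requests:
        try:
            results.append(broker_open(path))
        except PermissionError as exc:
            results.append(str(exc))
    return results

print(renderer(["/tmp/render-cache", "/etc/passwd"]))
```

The point generalizes: even a fully compromised renderer can only do what the broker’s policy permits, which is why extending the model beyond browsers is an appealing idea.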
Ah yes, it’s the fault of the journalists that Apple was the first to fall, and that a talented and very experienced hacker says the platform isn’t very safe compared to Windows. Can’t counter with arguments? Shoot the messenger.
Yup, totally our fault. I’m sorry we have had the audacity to report on something negative about Apple. I hope His Steveness will be able to forgive me, because if the fantasy world you live in turns out to be the real thing, he’ll be the one up there.
They said they hacked Mac OS. OK, but which release of Mac OS?
Because Leopard should have sandboxing and randomization of memory (according to Apple).
I can’t find the information anywhere.
The contest rules are to use the most recent released version of the OS (in this case, Leopard), fully patched. There were also Apple representatives present during the contest.
There is a difference between having the technology in the operating system and the software actually using it. In the case of Safari, it exists in the operating system, but Apple isn’t taking advantage of it.
Something just struck me: he didn’t crack it in 10 seconds. When it says “he cracked it in 10 seconds”, it sounds like he sat down and within 10 seconds was able to find a vulnerability then and there and exploit it. The reality is that he could have been analysing Safari for six months before this even happened, so claiming it was cracked in 10 seconds misrepresents how he actually came up with the exploit in the first place.
It’s noteworthy that Apple propaganda sites like AppleInsider and MacRumors haven’t bothered to report Miller’s comments to their audiences yet.
I read a number of comments and found that, when talking about security, people are as blind in claiming one OS is secure as in claiming it is insecure.
One simple fact people ignore is that the hacker DID find bugs on ALL OSes, and CAN exploit the bugs on ALL OSes. When a hacker comes across an anti-exploit feature a second time, it may take much less time to work around it than it did the first time. And once the exploit is done, the actual attack takes the same 10 seconds on any OS.
Working around an anti-exploit feature is a technique that takes effort to learn, but once you master it, it does not necessarily take the same effort each subsequent time you exploit a bug or mount an attack.
When a hacker starts exploiting a bug, usually the whole world, including the OS vendor, does not know the bug exists. So a hacker exploiting an OS bug has plenty of time, unlike a codebreaker (because a password is valid only for a short period) or a thief (because the homeowner will return soon). While setting up obstructions is an effective tactic in the attack-defense game, it’s very different in the bug-exploiting world, where time matters much less.
I have always thought that anti-exploit features like randomization have their downsides, too: they complicate the debugger and the crash image. Well, if a debugger has a way to deal with it, a hacker has a way too, and that way can possibly be automated.
Leopard has library randomization, and it’s along the path to full randomization. But I won’t simply be sad about its lacking this feature, or cheer because it has that one. Stay calm about such things.
Here is a much better interview. There’s a lot of interesting stuff in there; e.g., he recommends Macs to regular users because they are safe, even though he thinks they are less secure.
http://www.tomshardware.com/reviews/pwn2own-mac-hack,2254.html
“Charlie: Yes, I took down the Mac in under a minute each time. However, this doesn’t show the fact that I spent many days doing research and writing the exploit before the day of the competition. It only looks Hollywood because you don’t see the hard work in the preparation. If you set me down in front of an application I’ve never seen before and told me I have 2 minutes to hack it, as is often the case in movies, I’d have no more luck than your grandma at accomplishing it. Well, maybe a little more of a chance, but not much!”