40 minutes and physical access. That’s all Russian company ElcomSoft is claiming to need in order to crack the 256-bit hardware encryption Apple uses to protect the data on iOS 4 devices. The result is full access to everything stored inside, including “historical information such as geolocation data, browsing history, call history, text messages and emails, usernames, [passwords, and even some] data deleted by the user”.
I think that Apple uses weak encryption schemes on purpose. Maybe they or the US govt will need the user data from time to time.
Probably. If it’s DES, 256-bit will still not be enough
Good enough is Good enough, until it isn’t. Whatever Apple was doing for encryption before was Good Enough. And now it isn’t. Now there’s motivation on Apple’s part for the next level of Good Enough.
yeah! 512bit DES! :p
Old news. This was already done by both Fraunhofer and Charlie Miller.
Really ? I honestly didn’t know about it…
Well, let’s consider this a draw. You have double-posted Ballmer’s speech, after all.
Group hugs.
What do you do if you really care about your data and you don’t want someone else to sneak in and take a peek at your data or your traffic? Besides giving up devices and computers entirely, staying offline, or writing your own encryption scheme?
Entities like the NSA have the capacity to decrypt some of the most complex algorithms, and OSes and devices have backdoors. Even in OpenBSD, a backdoor was discovered some time ago.
Everybody always alleges backdoors in major operating systems and other pieces of software, but nobody provides proof. I tend to believe that the likes of Windows and Solaris do not have backdoors built in, for four reasons:
1. Were it discovered by the public, it would seriously damage the reputation of the company making the software.
2. For the larger targets, any backdoor would have been discovered by now. It’s only a matter of time.
3. Again, I haven’t seen evidence ever produced; only accusations and a general assumption that this is true.
4. A few years ago, the US Congress tried to mandate certain types of software and encryption algorithms be made with backdoors. The software industry successfully lobbied against these requirements. It is not in their interests to build holes into their operating systems and encryption algorithms.
As for OpenBSD, there was no backdoor discovered. It was only a former developer accusing another former developer of having been paid to insert backdoors into the IP stack years ago. Further code review showed this not to be the case. Again, accusations without evidence.
EDIT: proofreading helps.
5) (for Windows) Why require a backdoor when the front door is wide open? Front-door access gives plausible deniability. No… that wasn’t the NSA/CIA/FBI/DIA, it was a hacker who wrote that virus that stole all your files and smashed your centrifuges.
Windows security is better than it was, but it’s not as if viruses are impossible now. Still a better idea for an intelligence agency.
Bill Shooter of Bul,
“5) (for Windows) Why require a backdoor when the front door is wide open? Front-door access gives plausible deniability. No… that wasn’t the NSA/CIA/FBI/DIA, it was a hacker who wrote that virus that stole all your files and smashed your centrifuges.”
I don’t know why your post was downvoted… but an educated guess says it’s likely that secret agencies (or even corporate spies) are using published and unpublished vulnerabilities. It’s rather irrelevant whether they’re intentional or not.
If they really want the information, they can always plant a keylogger. Or mount a high-res camera where they can record people logging in.
Or, if those are infeasible, recording the sounds of keystrokes by targeting windows with distant lasers is plausible (yet another vulnerability for owners of windows).
http://berkeley.edu/news/media/releases/2005/09/14_key.shtml
The source code for Windows is available to countries that want to analyse it and use it in high-security environments. So since it’s being used in countries other than the US, I bet there is no backdoor.
Mmm… they won’t know if what they see really is its source code until they are able to compile it, modify it, compile it again, test the resulting compiled system, etc.
I mean, I could say that I have the secret formula for Coca-Cola and give the recipe to a country. But until they follow the recipe and try the product, they won’t know whether the “recipe” was the real thing or not.
And what gives you the idea that the Russians, Chinese, Israelis and French didn’t do it?
“And what gives you the idea that the Russians, Chinese, Israelis and French didn’t do it?”
I don’t know whether they can compile it themselves, but I somewhat doubt Microsoft would allow them that privilege.
The thing is, the notion that anyone who has the source therefore knows whether Windows is secure seems a little implausible.
Vulnerabilities can creep into innocent-looking code. That would always offer plausible deniability.
When a security update comes in, countries in possession of the code could indeed locate the vulnerabilities in the source, but where is the evidence that it was accidental or deliberate? It’s not like MS would label a backdoor as “NSAKey” or something.
They kind of never had the option of not allowing it when bidding for government contracts.
I did not say “they didn’t do it”. You are talking as if I said that. I said that if someone says “hey, this is the source code of Windows for your country”, you don’t know whether it’s true unless you are able to compile it, modify it, compile it again, test the resulting compiled system, etc.
You said “The source code for Windows is available to countries”. You don’t know that until you know they are able to compile it, etc.
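To make the compile-and-verify point concrete, here is a minimal sketch (in Python, with purely hypothetical file paths) of checking a binary you rebuilt from the provided source against the binary that actually ships. Note that even a hash mismatch wouldn’t prove bad faith, since real-world builds are rarely byte-for-byte reproducible – which is exactly why “we gave them the source” settles so little on its own.

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the component as shipped vs. the one compiled locally
# from the source code that was handed over.
shipped = sha256_of("C:/Windows/System32/ntoskrnl.exe")
rebuilt = sha256_of("./build/ntoskrnl.exe")
print("match" if shipped == rebuilt else "mismatch")
```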
No, there wasn’t.
Yes, there was: http://www.readwriteweb.com/enterprise/2010/12/did-the-fbi-build-ba…
No, there wasn’t. There were accusations without any proof. Can you provide solid evidence for the existence of those backdoors (like someone who actually found them in the code)? Or are you just one of the thousands of users who believe every bit of gossip they read on the internet?
No, there wasn’t. Seriously, a backdoor in open source code, hidden in plain sight for years? I mean, not just a bug, a backdoor. Go ahead, the code is there; dig out the proof that no one who has audited the code has found.
If you believe this stuff you might as well consider TMZ a reliable news source.
The hardware encryption was “cracked” because the key is stored on the device…
Making it not so much encryption as obfuscation.
If the device is able to boot and operate without the user having to input a key into it, then the key for any encryption is clearly stored on the device just waiting for someone to work out how it can be obtained.
If you want encryption that actually works, you need to ensure the key is never stored with the device – and that means forcing the user to enter it every time the device boots.
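For illustration, here is a minimal sketch of that idea in Python – not how iOS actually implements it, and the names and parameters are purely illustrative. The key is stretched from a passphrase the user types at boot with a key-derivation function (PBKDF2 here), so the only thing stored on the device is a non-secret salt; without the passphrase, there is no key to extract.

```python
import hashlib
import os

def derive_key(passphrase, salt):
    """Stretch a user passphrase into a 256-bit key with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                               salt, 100_000, dklen=32)

salt = os.urandom(16)  # stored on the device; harmless without the passphrase
# The passphrase is entered at boot and the key exists only in RAM afterwards.
key = derive_key("correct horse battery staple", salt)
print(key.hex())
```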
Nothing is secure because you have the human factor in it.
People are idiots: they think they’re being secure by using a complicated password, and then they use that password for everything. Not smart.
Even things like LastPass keep your stuff secure, then they get hacked and the attackers get some info on accounts.
It happens every day; it’s how they deal with it afterwards that’s the key issue.
Just think. At least we don’t have phones made by Sony.