This lengthy and highly technical primer provides a gentle yet thorough introduction to elliptical key cryptography (ECC), said to be ideal for resource-constrained systems because it provides more security-per-bit than other types of asymmetric cryptography. The paper is from Certicom, which markets Security Builder toolkits targeting various popular desktop, server, and embedded OSes.
Haven’t read the article yet, but shouldn’t that be elliptic curve cryptography?
ECC is public key cryptography, not asymmetric key
“Public key” and “asymmetric” are synonyms. “Symmetric” algorithms rely on a secret key shared between principals, i.e. they both have the same key, hence “symmetric”.
“Asymmetric” algorithms rely on a pair of keys, a public key held by one principal and a private key held by another, i.e. they have different keys, hence “asymmetric”.
It’s actually a pretty good article, and well worth reading. Anyone looking for an introduction to this stuff should also check out this article:
http://developer.netscape.com/docs/manuals/security/pkin/contents.h…
It would be nice to have a pdf version of the article. I’m sure such a version wouldn’t require 20 pages as shown by Mozilla print preview.
A gentle, yet thorough article. I am missing something though. The author makes a good point about why one might prefer ECC over the Diffie-Hellman or RSA methods: as key sizes increase, the effort required to break the encryption increases faster for ECC. So as computing power grows, the amount by which you have to increase key sizes to keep up is smaller than with the other methods. But he also mentions AES (Advanced Encryption Standard), and fails to say how AES stacks up against ECC in this respect.
AFAIK, AES (Rijndael) is really advanced in that it offers very strong encryption with relatively light processing (suitable for smart cards and such). It’s not for nothing that this method was selected to protect US government data.
Public key crypto is based on ‘hard problems’: problems that can be computed, but only with great difficulty. It’s sort of weird that only a couple of such ‘hard problems’ form the basis of so many encryption methods. A common one is finding the prime factors of very large integers. I suspect that if someone discovered a revolutionary, fast way to solve it, half the world’s crypto systems would be rendered useless. Other ‘hard problems’ include the travelling salesman problem and the ‘knapsack’ problem (optimally packing different-sized items into a fixed space).
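To make the asymmetry of the factoring problem concrete, here’s a toy illustration (my own, not from the article): multiplying two primes is a single cheap operation, while recovering them from the product takes a brute-force search. The primes below are tiny values picked for the demo; real RSA moduli are hundreds of digits, far beyond trial division.

```python
def trial_factor(n):
    """Find the smallest prime factor of n by brute force."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n has no smaller factor, so it is prime

p, q = 104723, 104729          # two smallish primes
n = p * q                      # the easy direction: one multiplication
assert trial_factor(n) == p    # the hard direction: ~100,000 trial divisions
```

Even at this toy scale the gap is visible: doubling the size of the primes roughly squares the work the trial-division loop has to do, while the multiplication stays essentially free.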
Pretty much any symmetric cipher will be faster than an asymmetric one, that’s why public key algorithms are almost never used for encryption of actual files.
The problem with symmetric encryption, though, is that you’ve got to distribute a secret key to everyone with whom you want to exchange messages. Distributing the keys in a secure manner can be quite difficult, especially when it’s for one-offs like HTTPS transmissions.
As a result, systems like HTTPS will quite often do this:
1. Create a secret key randomly, and use a symmetric cipher like AES to encrypt the message
2. Encrypt the secret key (which is comparatively small) with the recipient’s public key. Because what’s being encrypted is small, the speed of the public key cipher isn’t an issue
3. Deliver both the encrypted message and the encrypted key to the recipient.
The recipient will then use their private key to decrypt the secret key, and use that to decrypt the message. HTTPS (technically TLS) uses this public key system to distribute secret keys for encryption of transmission during the initial handshake.
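The three steps above can be sketched in code. This is a deliberately toy version (my own construction, not how TLS actually does it): textbook RSA with tiny primes stands in for the public key cipher, and a SHA-256-derived XOR keystream stands in for AES. Real systems use vetted libraries, proper padding (e.g. OAEP), and authenticated ciphers.

```python
import hashlib, secrets

# Toy RSA key pair: n = p*q, e is the public exponent, d the private one.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))          # modular inverse of e

def xor_stream(key_int, data):
    """Stand-in 'symmetric cipher': XOR with a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(str((key_int, counter)).encode()).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. Create a random session key and encrypt the message symmetrically.
session_key = secrets.randbelow(n - 2) + 2
ciphertext = xor_stream(session_key, b"attack at dawn")
# 2. Encrypt the (small) session key with the recipient's PUBLIC key.
wrapped_key = pow(session_key, e, n)
# 3. Deliver ciphertext + wrapped_key; recipient unwraps with the PRIVATE key.
recovered_key = pow(wrapped_key, d, n)
plaintext = xor_stream(recovered_key, ciphertext)
assert plaintext == b"attack at dawn"
```

The design point is exactly the one made above: the slow asymmetric operation only ever touches the short session key, while the bulk data goes through the fast symmetric cipher.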
Other areas where asymmetric ciphers are useful include verifying authenticity and integrity. The principal sending a message can create a signature by creating a hash (a checksum, basically) of the file and encrypting that using their PRIVATE key. When the recipient receives the message, they perform the hash/checksum again, decrypt the signature using the sender’s public key, and see if they match. If they do, it verifies both the origin and the integrity of the file. Again, because the checksum is small, the speed of the asymmetric cipher used isn’t such an issue.
Just in case I lost people there, stuff encrypted using a public key can be decrypted using its private key and vice versa
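The sign/verify flow just described can be sketched the same way, again with toy, textbook RSA and tiny primes (illustrative only; real signatures use large keys and padding schemes such as RSA-PSS, and the `toy_hash` helper below is a made-up reduction so the digest fits under the toy modulus).

```python
import hashlib

p, q = 61, 53
n, e = p * q, 17                     # public key (n, e)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def toy_hash(message):
    """Checksum, shrunk so it fits under the toy modulus n."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message, priv):
    # 'Encrypt' the hash with the PRIVATE key.
    return pow(toy_hash(message), priv, n)

def verify(message, signature, pub):
    # 'Decrypt' the signature with the public key and compare hashes.
    return pow(signature, pub, n) == toy_hash(message)

sig = sign(b"hello", d)
assert verify(b"hello", sig, e)               # origin + integrity check out
assert not verify(b"hello", (sig + 1) % n, e)  # a forged signature fails
```

Note that this is the "vice versa" direction from the envelope example: there the public key encrypted and the private key decrypted; here the private key produces the signature and the public key checks it.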
I’m pretty sure the US Government standardised on RSA for its Public Key Infrastructure. It’s all about using the right tools for the right job. And asymmetric ciphers are getting a lot of use these days.
And yes, as soon as someone discovers a solution to these hard problems the whole pack of cards falls down. However, while there’s a legion of mathematicians looking to make their name, that won’t happen soon, and the combination of symmetric (AES, Blowfish) and asymmetric (RSA, ElGamal, ECC) ciphers, when used properly, provides a robust security framework.
Stupid question maybe, but if ECC scales up faster than RSA, why not use it for all applications, not just ones where power consumption or computation are limited?
In other words, if a 1024-bit ECC key is much more difficult to break than a 1024-bit RSA key, why not use ECC instead?
Because all the others have been thoroughly tested and most weaknesses are now known. ECC hasn’t got as much testing yet and people are afraid there may be some exploitable hole that’ll pop up in a year or two.
From what I’ve read in the article though, that’s unlikely. However crypto people tend to be ultra-conservative about these things.
I wish the paper had been more mathematically rigorous, since it claims to be a highly technical paper. No scientific references are given.
Certicom has patents on ECC; that’s why they want you to use it and why most people avoid it. RSA is trusted and free.
This article contains a lot of good information, but I wish the writer had chosen a single target audience. The author repeats the really simple “sound bite” sales gimmicks many times, and then suddenly throws in terms you don’t know about until second-year cryptography courses without explaining them. Then it’s back to simple terms and overexplanation again, and then back to unexplained detail.
Also, is it just me, or is there an odd definition of “vertical line” at play in the appendices? Maybe my eyes are going, but in both the diagrams explaining the arithmetic of the fields, the step that says “draw a vertical line, and where it intersects the curve…” doesn’t seem to match up to the diagram.
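For what it’s worth, the chord-and-tangent arithmetic those diagrams describe can be written out directly. This sketch works over a small prime field using the curve y² = x³ + x + 1 mod 23 — a common textbook example, not one taken from the article. The “vertical line” step from the diagrams corresponds to the final reflection, i.e. negating the y-coordinate of the third intersection point.

```python
P_MOD, A, B = 23, 1, 1   # field modulus and curve coefficients

def on_curve(pt):
    x, y = pt
    return (y * y - (x ** 3 + A * x + B)) % P_MOD == 0

def ec_add(p1, p2):
    """Add two affine points (neither the point at infinity)."""
    (x1, y1), (x2, y2) = p1, p2
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)          # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD   # the 'vertical line' reflection
    return (x3, y3)

P = (0, 1)
assert on_curve(P)
assert ec_add(P, P) == (6, 19)           # 2P, and it lands back on the curve
assert on_curve(ec_add(P, ec_add(P, P)))  # 3P is on the curve too
```

Over a finite field there’s no picture to squint at, which is arguably why the diagrams in the appendices are confusing: the geometric “vertical line” only literally looks vertical in the real-number version of the curve.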
Kev
It’s sort of weird that only a couple of such ‘hard problems’ form the basis of so many encryption methods. A common one is finding the prime factors of very large integers. I suspect that if someone discovered a revolutionary, fast way to solve it, half the world’s crypto systems would be rendered useless. Other ‘hard problems’ include the travelling salesman problem and the ‘knapsack’ problem (optimally packing different-sized items into a fixed space).
I’m not sure about prime factorisation, but all of the NP-complete problems (including many graph problems, like the travelling salesman problem) ARE, in effect, just one problem: a fast algorithm for any one of them would yield a fast algorithm for all of them. This is the main worry with the most common asymmetric-key cryptography schemes: they would all be compromised at once if the underlying problem turned out to be easily computable.
It has been conjectured, though, that the NP-complete problems really are intractable, i.e. that P ≠ NP and they ARE impossible to solve efficiently.