



Make your software behave: Tried and true encryption  
The only path to secure software is to use established, rock-solid cryptographic algorithms
In this second of three articles on cryptography, Gary and John discuss which tried and tested encryption technologies you should consider using, and why. In our last installment, we introduced the ideas behind cryptography and explained why it is useful when developing security-critical code. The number one lesson from last time is never to roll your own cryptographic algorithms. The best bet is to borrow liberally from the good ideas that cryptography specialists have invented and tested over the years. One of the most common decisions facing a developer when choosing a cryptographic solution is whether to use a symmetric key algorithm, like DES, or public key cryptography, like RSA or ECC. In this column, we provide a quick introduction to both kinds of algorithms and arm you with some information about common tradeoffs involved in choosing one or the other. (Remember that although we apply and analyze software that uses cryptography all the time, we're not cryptographers. When in doubt, seek more information.)

Symmetric cryptography

In a symmetric cipher, the same key is used both to encrypt and to decrypt a plaintext message. The message and the key are provided as input to the encryption algorithm, producing ciphertext that can safely be transmitted over an insecure medium (like, say, the Internet). On the other side, the decryption algorithm (which is necessarily closely related to the encryption algorithm) takes the ciphertext and the same secret key as its inputs and produces the original message.
A high-level overview of symmetric algorithms is shown below in Figure 1.

Since both parties in a symmetric cipher communication must possess the same key, and the key must remain secret, serious arrangements need to be made in order to distribute the secret key securely. If the key does not remain secret, the algorithm loses all of its power. One way to distribute a secret key is to copy it to a floppy disk and deliver it to the person with whom you wish to communicate securely. The vulnerability inherent in this method of key distribution is the biggest disadvantage of symmetric algorithms: lose the disk, and all bets are off. Diffie-Hellman is one of several well-known protocols that exist for distributing a symmetric key over an insecure medium (see Resources). However, when using these protocols, you must be aware of the requirement for good authentication. Exchanging keys with a remote server securely is certainly possible, but there may be no confirmation (unless you require it) that you are sending a key to the correct entity. Perhaps counterintuitively, the most common way keys are exchanged for symmetric ciphers is through use of public key cryptography (discussed below).

Types of symmetric algorithms

In the easiest to understand block cipher, each block of data is encrypted separately. This type of cipher is said to work in Electronic Code Book (ECB) mode. Be forewarned that this obvious approach presents some security risks. For example, suppose that every block of 64 bits in a data file is encrypted separately. Every time the 64-bit plaintext string "security" (assume 8 bits per character) gets encrypted, it will encrypt to the exact same ciphertext string. If an attacker sees the plaintext for a sample message, and the message happens to include the word "security" perfectly aligned on a 64-bit boundary, every message with "security" so aligned will immediately be apparent to the attacker. Information such as the way a specific word is encoded can help an attacker immensely, depending on the circumstances.
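To make the leak concrete, here is a minimal sketch in Python. The keyed-XOR "block cipher" is our own toy stand-in with no security value at all; what matters is the ECB structure around it, which encrypts every block independently:

```python
# Toy illustration of the ECB leak: identical plaintext blocks yield
# identical ciphertext blocks. The keyed XOR below is a stand-in for a
# real block cipher and provides no actual security.

BLOCK = 8  # 64-bit blocks, as in the example above

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    """'Encrypt' one 8-byte block by XORing it with an 8-byte key."""
    return bytes(b ^ k for b, k in zip(block, key))

def ecb_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt each block independently: Electronic Code Book mode."""
    assert len(plaintext) % BLOCK == 0
    return b"".join(toy_block_encrypt(plaintext[i:i + BLOCK], key)
                    for i in range(0, len(plaintext), BLOCK))

key = b"8bytekey"
# "security" appears twice, each time aligned on an 8-byte boundary.
message = b"security" + b"12345678" + b"security"
ciphertext = ecb_encrypt(message, key)

# Blocks 0 and 2 match: an eavesdropper sees the repetition without
# ever learning the key.
print(ciphertext[0:8] == ciphertext[16:24])  # True
```

Any real block cipher in ECB mode has the same property; only the per-block transformation is stronger.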
In one attack, a bad guy can modify encrypted data without any knowledge of the key used to encrypt the information by inserting previously recorded blocks into a new message. To delve a bit more deeply, consider a money payment system where deposits are sent in a strictly encrypted block-cipher format. In our example, the first 128 bits represent the account number to deposit the money into. The rest of the message encodes the amount of the deposit, the time at which the message was sent, and perhaps some other information. If an attacker knows that the first 128 bits represent the account number, and if the attacker happens also to know that a target account number encrypts to a particular string, the attacker can modify messages in transit to divert a deposit into the target account. The attack works by replacing the real account number in its encrypted format with the attacker's own account number (also in encrypted form). Oops!

Stream ciphers don't suffer from this sort of "replacement" problem precisely because of the way they are designed. All hope is not lost for block ciphers, though. Block ciphers can be modified to mitigate the risk we outlined above. There are several different strategies that avoid the problem. For example, in Cipher Block Chaining (CBC) mode, blocks are still encrypted one at a time, but the plaintext for each block is XORed with the ciphertext of the previous block before being encrypted. This is a very common approach to block ciphers. In fact, CBC mode is the default mode for many block ciphers. Another mode, called counter mode, uses an arbitrary but reproducible sequence of numbers as an additional input to the cryptographic algorithm. The particular sequence doesn't matter very much. Usually a pseudorandom sequence seeded with the clock time is more than sufficient. The counter is then mixed with the plaintext before encrypting. In any such approach, the counter value cannot repeat frequently, or the advantage disappears.
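A minimal CBC sketch in Python (again using a toy keyed-XOR stand-in for the block cipher, plus a caller-supplied initialization vector) shows the effect: two identical plaintext blocks no longer produce identical ciphertext:

```python
# CBC sketch: each plaintext block is XORed with the previous
# ciphertext block before encryption, so repeated plaintext blocks no
# longer produce repeated ciphertext blocks. The keyed XOR "cipher"
# is a toy stand-in with no security value.

BLOCK = 8

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    return xor_bytes(block, key)

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0 and len(iv) == BLOCK
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        block = toy_block_encrypt(xor_bytes(plaintext[i:i + BLOCK], prev), key)
        out.append(block)
        prev = block  # chain: this ciphertext feeds the next block
    return b"".join(out)

key = b"8bytekey"
iv = b"initvec0"
ciphertext = cbc_encrypt(b"security" + b"security", key, iv)

# Unlike ECB, the two identical plaintext blocks encrypt differently.
print(ciphertext[0:8] != ciphertext[8:16])  # True
```

Decryption simply reverses the steps, XORing each decrypted block with the previous ciphertext block (or the IV for the first block).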
Using CBC or counter mode in a block cipher helps mitigate the risks incurred when the same plaintext block appears in multiple places in a message (or across multiple messages) by ensuring that the corresponding ciphertext blocks will be different. In most cases, the modes we have discussed are built right into an algorithm implementation, and it's possible to choose which mode to use in your particular application. We recommend against using ECB mode, but beyond that general advice, picking a mode depends on circumstances. Each mode tends to have its own security implications. CBC presents a reasonable solution, except that attackers can add gibberish to the end of an encrypted message, or introduce it into the middle of a message. You can avoid a problem like this with two preventative measures. First, make sure you know where your messages end. Encode that information at the start of your message, or make it implicit in your protocol. Make sure the receiving end checks the length information before proceeding. Second, add some sort of checksum to the data, so that you can detect cases in which the middle of a message has been modified. (Cryptographic checksums are perfect for this purpose, and we'll discuss them in a future topic, "cryptographic hashing algorithms".) These precautions help mitigate message tampering problems.
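Both precautions can be sketched with Python's standard hmac and hashlib modules. The framing format below (a 4-byte length prefix plus a trailing keyed checksum, with an invented shared key) is our own illustration, not a standard protocol:

```python
# Sketch of the two precautions above: an explicit length field so the
# receiver knows where the message ends, and a keyed checksum (HMAC)
# so modification anywhere in the message is detected. The framing
# format is invented for illustration only.
import hmac
import hashlib
import struct

MAC_KEY = b"a-separate-integrity-key"   # hypothetical shared key
TAG_LEN = 32                            # SHA-256 output size

def frame(message: bytes) -> bytes:
    """Prefix the length, then append a keyed checksum over everything."""
    body = struct.pack(">I", len(message)) + message
    tag = hmac.new(MAC_KEY, body, hashlib.sha256).digest()
    return body + tag

def unframe(data: bytes) -> bytes:
    """Verify the checksum and the length field before trusting the data."""
    body, tag = data[:-TAG_LEN], data[-TAG_LEN:]
    expected = hmac.new(MAC_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was modified in transit")
    (length,) = struct.unpack(">I", body[:4])
    if length != len(body) - 4:
        raise ValueError("length field does not match the message")
    return body[4:]

packet = frame(b"deposit $100 to account 42")
assert unframe(packet) == b"deposit $100 to account 42"

# Flipping a single byte in the middle is now caught.
tampered = packet[:10] + b"X" + packet[11:]
try:
    unframe(tampered)
except ValueError as err:
    print(err)  # message was modified in transit
```

In a real protocol the framed packet would then be encrypted; the point here is only the length-plus-checksum discipline.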
Security of symmetric algorithms

Block size can also be a factor. 64-bit blocks may not be secure enough, but 128-bit blocks should be more than adequate. The research community has done lots of work on developing secure symmetric ciphers. But demonstrating the security of a cipher is still an extremely hard problem. No practical cryptographic algorithm is completely secure. One perfect encryption algorithm exists, called a "one-time pad." With this algorithm, an n-bit plaintext message is XORed with an n-bit random number (resulting in an n-bit ciphertext). In this case, if a 64-bit message is intercepted, and if the key is truly random, all 2^{64} possible bit strings are equally likely. There is no information in the ciphertext that reveals any of the structure of the plaintext, except its original length. In a one-time pad, you can never reuse the same key, or the security of the scheme goes away. Also, as with any symmetric algorithm, there is a hefty key distribution problem. One-time pads are not considered practical due to the large key sizes that are necessary. Imagine trying to encrypt data over a network with the capacity for gigabits per second using a one-time pad! Then again, disk space is becoming cheaper and cheaper. One-time pads may become more popular as bandwidth and storage prices continue to drop.

With any practical algorithm, the ciphertext always reveals information about the original plaintext that can be located without possessing the key. An attacker's challenge is to recognize the leaking information. An important goal of any cryptographic algorithm is to make cryptanalysis extremely difficult. Other important goals include speed and memory minimization. Unfortunately, making cryptanalysis difficult is not easy to do. It is hard, though not impossible, for good cryptographers to design an algorithm resilient to all known forms of attack. It is far more difficult to design against types of attacks that are completely unknown.
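The one-time pad described above is simple enough to sketch in a few lines of Python (os.urandom stands in for a true random source here; a real pad must be truly random, exactly as long as the message, and never reused):

```python
# One-time pad sketch: XOR the message with a random key of equal
# length. With a truly random, never-reused key, the ciphertext leaks
# nothing about the plaintext except its length.
import os

def otp_encrypt(message: bytes, pad: bytes) -> bytes:
    assert len(pad) == len(message), "pad must be as long as the message"
    return bytes(m ^ p for m, p in zip(message, pad))

# Decryption is the very same XOR operation.
otp_decrypt = otp_encrypt

message = b"attack at dawn"
pad = os.urandom(len(message))  # the key: as large as all traffic sent

ciphertext = otp_encrypt(message, pad)
assert otp_decrypt(ciphertext, pad) == message

# Reusing `pad` for a second message breaks the scheme: XORing the two
# ciphertexts together cancels the pad out entirely.
```

The impracticality is visible in the code: the pad must be as long as everything you ever send, which is exactly the key distribution problem mentioned above.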
Many people believe, but no one knows for sure, that the National Security Agency (NSA) has developed sophisticated attacks against general block ciphers that it has not shared with the rest of the world. Also, there is no telling what sorts of attacks against any algorithm will be discovered in the coming years. The best that cryptanalysts can do is analyze ciphers relative to known attacks, and judge them that way. When it comes to key length, 128 bits is generally considered more than adequate for messages that need to be protected for a typical human lifespan (assuming no other attacks can be mounted against the algorithm except a brute-force attack, of course). Even 112 bits is considered adequate. To be safe, you may wish to consider 256 bits, which is believed to be secure enough that a computer made of all the matter in the universe, computing for the entire lifetime of the universe, would have an infinitesimal probability of finding a key by brute force. On the opposite end of the spectrum, 64 bits is considered too small a key for high-security applications. According to Schneier, in 1995 someone willing to spend $100 billion could break a 64-bit key in under a minute. Reasonable computational resources can now break such a key in well under a year. And 40 bits is considered only marginally better than no security at all.

Data Encryption Standard and Advanced Encryption Standard
Many modern ciphers have been patterned after DES, but few have stood up as well to cryptanalysis as DES has. The main problem with DES is its very short key length, which is completely inadequate in this day and age. It is possible to adapt DES with its short key length to be more secure, but this can't be done arbitrarily. One (bad) idea that many people try involves applying DES twice, something known as double encryption. In such a scheme, a message is encrypted once using one key, then encrypted again (ciphertext to modified ciphertext) using a second key. A very subtle form of attack turns out to render this kind of double encryption not much better than single encryption. In fact, with certain types of ciphers, multiple encryption is probably no better than single encryption (a class called closed ciphers). You should avoid closed ciphers because they are more prone to attack than most. Although double encryption isn't very effective, it turns out that triple encryption is about as effective as one might naively expect double encryption to be. For example, 56-bit DES, when triple-encrypted, yields 112 bits of strength, which is believed to be more than adequate for any application. Triple-encrypted DES, or simply triple DES (often seen written as 3DES), is a popular modern symmetric block algorithm. Triple DES is not a panacea, though. One problem with triple DES is its speed, or lack thereof. Partially because of the speed issue, the U.S. Commerce Department's National Institute of Standards and Technology has initiated a competition for the Advanced Encryption Standard (AES). So far, NIST has selected five algorithms as finalists for the standard. The winner will be announced before the end of 2000. One risk of the AES competition is that all the algorithms are relatively new. No amount of scrutiny in the short lifetime of the candidate algorithms can compare to the intense scrutiny that has been placed on DES over the years.
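The closed-cipher pitfall mentioned above is easy to demonstrate with a toy cipher of our own. A plain XOR cipher is closed: encrypting twice, under k1 and then k2, is identical to encrypting once under k1 XOR k2, so double encryption buys nothing at all. (DES itself has been shown not to be closed, which is why triple DES gains strength.)

```python
# Demonstration that a simple XOR cipher is "closed": composing two
# encryptions is equivalent to one encryption under a combined key,
# so multiple encryption adds no strength.

def xor_encrypt(message: bytes, key: bytes) -> bytes:
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(message))

message = b"wire $1000 to account 42"
k1 = b"firstkey"
k2 = b"otherkey"

double = xor_encrypt(xor_encrypt(message, k1), k2)

# The combined key k1 XOR k2 does the same job in a single pass...
combined = bytes(a ^ b for a, b in zip(k1, k2))
single = xor_encrypt(message, combined)

print(double == single)  # True: a brute-force attacker still only has
                         # to find one 8-byte key, not two.
```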
Nonetheless, some people still believe there is a slight risk that the NSA placed a hidden "trap door" in DES that makes it easy for the agency to attack. These people are likely to embrace a standard in which the NSA had no hand (although the NSA has very strong ties to NIST). And, in truth, it is very likely that the AES winner will prove at least as good an algorithm (for use over the next 50 years) as DES has been since its introduction, despite its relatively short lifetime. Although triple DES has some performance issues, for the time being it is a highly recommended solution. A big advantage of this algorithm is that it is free for any use. You can download several good implementations of DES from the Internet. The AES winner will also be free for any use, with at least one free reference implementation. Many of the five candidates are already free. The AES candidates are all much faster than DES for most applications.
Plenty of commercial symmetric algorithms are also available. Since these algorithms are proprietary, they tend not to be as well-analyzed, at least publicly. But some proprietary algorithms offer excellent security, and run quite efficiently. Nonetheless, we see no overwhelming reason to recommend anything other than the standard for most applications: triple DES for now, and the AES winner once it has been announced. One problem with a symmetric key solution is the requirement that each pair of communicating agents needs a unique key. This presents a key management nightmare in a situation with lots of users. Every unique pair of communicating entities needs a unique key! As a way around this problem, some designers turn to key derivation algorithms. The idea is to use a master key to derive a unique key for each communicating pair. Most often, key derivation uses some unique user identification information (such as a user serial number) to transform the master key. The inherent risk in this scheme is obvious. If the master key is compromised, all bets are off. In fact, if even one derived key can be cracked, this may provide enough information to attack the master key. Regardless of this risk, many systems rely on symmetric algorithms with key derivation to control cryptographic costs. In a related practice, designers often make use of session keys instead of using the same symmetric key for all encrypted communication between two agents. As with the derived keys above, the idea is to use a master key for each communicating pair (perhaps itself derived from a global master key) to derive a session key. In case a session key is compromised, the system as a whole can continue to be useful. Once again, the idea of session key derivation from a master communicating-pair key presents risks to the entire system.

Public key cryptography

In a public key system, a message is encrypted by the sender using the public key of the receiver.
Barring a weakness in the algorithm, and assuming the algorithm is implemented properly, the only person who should be able to decrypt a message encrypted in this way is the person who possesses the associated private key. This system is analogous to a mailbox into which everyone can place mail. In this analogy, no one can easily retrieve mail from the box unless they possess the carefully guarded secret key that opens the box.
A graphical overview of public key cryptography, illustrating what happens when Alice sends a message to Bob, is shown below in Figure 2.

The Achilles' heel of public key algorithms is that encryption and decryption of messages tend to be incredibly slow relative to symmetric key algorithms. In general, software implementations of public key algorithms tend to be around 100 times slower than DES, and 1,000 times slower when each is implemented in hardware. For this reason, it's not practical to encrypt large messages in a timely manner using public key cryptography. Fortunately, encrypting small messages does fit within acceptable bounds. As a result, people tend to mix symmetric and public key algorithms together in practice. In such a mix, most communication is carried out with a relatively fast symmetric algorithm. But the high-risk part, exchanging the symmetric key, makes use of a public key algorithm. As we alluded to above, this is a great way to avoid the key distribution problem. It also addresses the key derivation risks cited at the end of the symmetric key section (at least for platforms with reasonable computational resources). Given a solid way to distribute keys in a reasonable time, there is little reason to turn to key derivation and take on the associated risks. Let's consider an example of a mixed approach in which Alice wishes to initiate communication with Bob. Alice and Bob will communicate mostly using a symmetric cipher such as triple DES. However, first they need to exchange a secret key to last the lifetime of their session before they can use the cipher. The secret key they need to share is called the session key. Hopefully, Alice will generate this key using a secure source of randomness, and not a poorly designed key derivation algorithm. (See our series of articles on random number generation, here on developerWorks, for more information on randomness.) To get the session key to Bob in a secure manner, Alice employs public key cryptography. First, she looks up Bob's public key, and uses it to encrypt the session key.
She sends the resulting ciphertext to Bob. Bob uses his private key to decrypt the ciphertext, giving him a copy of the session key. All subsequent communication is done using triple DES and the session key (a shared secret).

The RSA algorithm

In RSA, the public modulus n is the product of two large secret primes, p and q. The security of the RSA algorithm is believed to be equivalent to the difficulty of factoring n to recover p and q. Factoring large numbers is believed to be really difficult to do, although this generally accepted claim has never been proven conclusively. Nonetheless, RSA has stood up well to public scrutiny for almost twenty years, so people have gained faith in it. Many people intuitively believe that there is only a small number of very large primes, and go on to reason that there are likely to be many problems with RSA's system. Why couldn't an attacker simply create a database of all possible keys? Each possible key could be tried in a brute-force attack. The bad news is that this sort of attack is possible. But the good news is that there are far more prime numbers than most people suspect. There are about 10^{151} primes of length up to 512 bits, which means there are enough primes of up to 512 bits to assign every atom in the universe 10^{74} prime numbers without ever repeating one of those primes. In RSA, security mostly depends on how difficult the product of two prime numbers is to factor. Recall that in a symmetric cryptosystem we said that 256-bit keys are believed to be large enough to provide security for any application through the end of time. With RSA and other public key cryptosystems, the same heuristic does not apply. In fact, it is very difficult to say what a good public key length will be in 50 years, never mind any farther down the road. Obviously, a 256-bit modulus represents fewer possible RSA keys than 2^{256}, since clearly not every number is prime. So, the size of the space is considerably smaller for a brute-force attack.
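The structure of RSA is worth seeing once with real numbers. Here is textbook RSA with toy primes, a sketch of the mathematics only (real RSA needs enormous primes and message padding; the modular inverse via pow requires Python 3.8 or later):

```python
# Textbook RSA with toy numbers (p = 61, q = 53) -- far too small for
# any real use, but the structure is the real thing.

p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120; easy to compute ONLY if you can factor n
e = 17                       # public exponent, chosen coprime to phi
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi)

def encrypt(m: int) -> int:
    return pow(m, e, n)      # anyone who knows (e, n) can do this

def decrypt(c: int) -> int:
    return pow(c, d, n)      # requires d, i.e. requires the factors of n

session_key = 65             # e.g. a small symmetric session key
ciphertext = encrypt(session_key)
assert decrypt(ciphertext) == session_key

# An attacker who can factor n = 3233 back into 61 * 53 can recompute
# phi and d -- which is why real RSA moduli must be enormous.
```

This is also exactly the shape of the Alice-and-Bob exchange above: the value being encrypted is a small session key, not the bulk traffic.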
In fact, it turns out that factoring a 256-bit number directly does not take an unreasonable amount of resources. In the end, comparing public key length to symmetric key length directly is like comparing apples and oranges. One major concern is our inability to predict what kinds of advances researchers will make in factoring technology. Many years ago, it was believed no one would ever have the resources necessary to factor a 125-bit number. Now anyone willing to spend just a few months and a few million dollars can factor a 512-bit number. As a good security skeptic, you should use no less than a 2048-bit key for the foreseeable future. That is, use 2048 bits if your data needs to remain secure for long periods of time (such as two or more years). 1024-bit keys are appropriate for other uses for the time being. Just keep in mind that a good chance exists for even 1024-bit keys to be easily factored in just a few years. If you're reading this in 2005, you should check to make sure that 2048-bit keys are still considered sufficient for practical uses, because they may not be! What's the drawback of using a very long key? With public key cryptography, the longer the key, the longer it takes to generate it. While you might be well served with a huge (say, 100,000-bit) key, you are really not likely to want to wait around long enough to generate the key. Currently, generating a 2048-bit key takes several minutes on most machines. Even that smallish size pushes the limits of what an average user is willing to wait. Some nontechnical concerns are associated with RSA. Chief among these is that RSA is currently covered by a U.S. patent. Use of RSA requires a license. Thankfully, the patent situation is about to change. The patent on the RSA algorithm expires on September 20, 2000, and cannot be renewed. By the time you read this, the algorithm may be in the public domain. In the meantime, the ElGamal algorithm, which is unquestionably in the public domain, is a good alternative.
There was once some doubt as to whether this algorithm was covered by a U.S. patent, but the patent in question entered the public domain in 1997. ElGamal is based on the discrete logarithm problem, which is at least as hard a problem as factoring, so this algorithm should provide security similar to RSA's. Currently, we believe these two algorithms (RSA and ElGamal) are the best available choices for high-secrecy traffic. No other algorithms, including Elliptic Curve Cryptography, have been studied as extensively from a mathematical perspective as RSA and ElGamal.

Attacks against public keys

Another type of attack that is fairly easy to launch against most public key cryptosystems is a "man-in-the-middle" attack. Once again consider a situation in which Bob and Alice wish to communicate. Imagine that Pete is able to intercept the exchange of public keys. Pete sends Alice his own key, but misrepresents it as Bob's key. He also sends Bob his own key, misrepresenting it as Alice's key. Pete now intercepts all traffic between Alice and Bob. If Alice sends a message to Bob in the compromised system, she's actually encrypting the message using Pete's public key, even though she thinks she is using Bob's. Pete gets the message, decrypts it, and stores it, so he can read it later. He then encrypts the message using Bob's public key, and sends it on to Bob. Bob gets the message, and is able to decode it, unaware that it really came from Pete, and not Alice. The primary problem here is that Alice has no way of ensuring that the key she obtained actually belongs to Bob. There are several ways to get around this problem, but they tend to involve the presence of a Public Key Infrastructure (PKI). In a generic PKI, a trusted third party keeps track of everybody's key. The public key for that trusted third party is widely distributed, and is thus easy to verify. In the future, a trusted third-party key like this will probably come embedded in your software.
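The man-in-the-middle attack is easy to act out with toy numbers. Below, a deliberately tiny unauthenticated Diffie-Hellman exchange (parameters far too small for real use, secrets fixed for reproducibility) shows Pete ending up with a shared key with each victim while Alice and Bob never actually share one:

```python
# Man-in-the-middle against an unauthenticated key exchange, acted out
# with toy Diffie-Hellman parameters. Pete intercepts the public
# values in transit and substitutes his own.

p, g = 23, 5                 # toy public parameters

a = 6                        # Alice's secret exponent
b = 15                       # Bob's secret exponent
pete_a, pete_b = 13, 9       # Pete's secrets, one per victim

A = pow(g, a, p)             # Alice sends this... but Pete grabs it
B = pow(g, b, p)             # Bob sends this... Pete grabs it too
fake_for_bob = pow(g, pete_a, p)     # Pete's value, passed off as Alice's
fake_for_alice = pow(g, pete_b, p)   # Pete's value, passed off as Bob's

# Alice "agrees" on a key -- with Pete, not Bob. Likewise for Bob.
alice_key = pow(fake_for_alice, a, p)
bob_key = pow(fake_for_bob, b, p)
pete_key_with_alice = pow(A, pete_b, p)
pete_key_with_bob = pow(B, pete_a, p)

assert alice_key == pete_key_with_alice   # Pete can read Alice's traffic
assert bob_key == pete_key_with_bob       # ...and re-encrypt it for Bob
assert alice_key != bob_key               # the victims never share a key
```

Nothing in the exchange itself tells Alice whose value she received, which is exactly the authentication gap a PKI tries to fill.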
The PKI approach avoids the man-in-the-middle attack since a skeptical user can contact the third party (often called a certificate authority) using cryptographic protocols, and request that it tell him whose key was received. This can be verified with reference to the widely known public information about the authority. The PKI strategy works fairly well using a complex protocol. However, there are some drawbacks. First, how can anyone be sure that they can trust the so-called trusted authority? Second, what if someone is able to defraud the authority, say, by hacking their site? The second risk is a biggie. One of the largest certificate authorities is a company called VeriSign (see Resources). VeriSign helps spread trust around by performing background checks on people before deciding to trust them and issuing a public key identity. (Strangely, in this model, a person who wants to be trusted pays for this trust!) The problem is that the identity checking in the early days was on the lax side. Soon after VeriSign introduced their services, some people began registering false identities. For example, someone masqueraded on the Net as Bill Gates. Perhaps the best way to circumvent these problems is to exchange keys in person, or through any medium that you trust completely. Sometimes the phone is sufficient if you know the party well by voice. Beyond the issue of lax identity checking, there is a user issue. The problem is that, in practice, people often don't bother to check the results given by the trusted third party. This problem is extremely widespread, especially in the use of "secure" Web sites.

Is your order secure?

This problem permeates practical implementations of public key cryptography. For example, many uses of Netscape's Secure Socket Layer (SSL), a public key-based library used in Netscape's and many other applications, suffer from the "not checking who you're connected to" problem.
People get a warm fuzzy because they're using SSL to encrypt data, but they don't realize that the validation problem still exists. Unless an application has a reliable method of distributing public keys, such as personal delivery, it is never a good idea to blindly trust that you have a correct public key. Don't go for the passive Netscape solution, where data from the trusted authority is only available by clicking on the icon. Put the information right in the user's view. Sure, it's another dialog box, and a hassle for the user. But hopefully the user will understand the importance of checking the dialog box, and won't just get rid of it without reading it. Even if that happens, you've at least made a good effort on your end. We firmly believe any man-in-the-middle attacks made against Netscape are more the responsibility of Netscape than the end user, since Netscape doesn't make a good effort to get users to validate the party with whom they're communicating. If there were an "in your face" dialog, then the responsibility would shift to the user, in our opinion. Dismissing the dialog box without reading it is something the user does at his own risk.
