Make your software behave: Tried and true encryption
Contents:
Symmetric cryptography
Types of symmetric algorithms
Security of symmetric algorithms
Data Encryption Standard and Advanced Encryption Standard
Public key cryptography
The RSA algorithm
Attacks against public keys
Is your order secure?
Resources
About the authors
The only path to secure software is to use established, rock-solid cryptographic algorithms

Gary McGraw, Vice president of corporate technology, Reliable Software Technologies
John Viega, Senior Research Associate, Reliable Software Technologies

06 Jun 2000

In this second of three articles on cryptography, Gary and John discuss which tried and tested encryption technologies you should consider using -- and why.

In our last installment, we introduced the ideas behind cryptography and explained why it is useful when developing security-critical code. The number one lesson from last time is never to roll your own cryptographic algorithms. The best bet is to borrow liberally from the good ideas that cryptography specialists have invented and tested over the years.

One of the most common decisions facing a developer when choosing a cryptographic solution is whether to use a symmetric key algorithm, like DES, or public key cryptography, like RSA or ECC. In this column, we provide a quick introduction to both kinds of algorithms and arm you with some information about common tradeoffs involved in choosing one or the other. (Remember that although we apply and analyze software that uses cryptography all the time, we're not cryptographers. When in doubt, seek more information.)

Symmetric cryptography
Symmetric algorithms for cryptography are primarily intended for data confidentiality, and as a side effect, data integrity. They use a single key shared by two communicating parties in their computation. The shared key must remain secret to ensure the confidentiality of the encrypted text.

In a symmetric cipher, the same key is used both to encrypt and to decrypt a plaintext message. The message and the key are provided as input to the encryption algorithm, producing ciphertext that can safely be transmitted over an insecure medium (like, say, the Internet). On the other side, the decryption algorithm (which is necessarily closely related to the encryption algorithm) takes the ciphertext and the same secret key as its inputs and produces the original message.
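
To make the shape of the interface concrete, here is a tiny Python sketch. The repeating-XOR "cipher" below is a deliberately insecure stand-in (an assumption made purely for illustration); it only shows that one shared key drives both encryption and decryption.

    # Toy illustration of the symmetric property: the SAME shared key both
    # encrypts and decrypts. (A repeating-XOR "cipher" like this is NOT secure;
    # it only demonstrates the shape of the interface.)

    def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
        return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

    def toy_decrypt(key: bytes, ciphertext: bytes) -> bytes:
        # Decryption is the same operation, driven by the same secret key.
        return toy_encrypt(key, ciphertext)

    shared_key = b"secret!!"                      # must be known to both parties
    ciphertext = toy_encrypt(shared_key, b"attack at dawn")
    assert toy_decrypt(shared_key, ciphertext) == b"attack at dawn"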

A high-level overview of symmetric algorithms is shown below in Figure 1.

Figure 1: A high-level overview of symmetric algorithms

Since both parties in a symmetric cipher communication must possess the same key, and the key must remain secret, serious arrangements need to be made in order to distribute the secret key securely. If the key does not remain secret, the algorithm loses all of its power. One way to distribute a secret key is to copy it to a floppy disk, and deliver it to the person with whom you wish to communicate securely. The vulnerability inherent in this method of key distribution is the biggest disadvantage of symmetric algorithms -- lose the disk, and all bets are off.

Diffie-Hellman is one of several well-known protocols that exist for distributing a symmetric key over an insecure medium (see Resources). However, when using these protocols, you must be aware of the requirement for good authentication. Exchanging keys with a remote server securely is certainly possible, but there may be no confirmation (unless you require it) that you are sending a key to the correct entity. Perhaps counter-intuitively, the most common way keys are exchanged for symmetric ciphers is through use of public key cryptography (discussed below).
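
As a sketch of how such a key-agreement protocol works, here is Diffie-Hellman with toy-sized numbers in Python. The particular prime, the base, and the use of the secrets module are illustrative assumptions; real deployments need much larger parameters and, as noted above, authentication of the exchange.

    import secrets

    p = 4294967291        # a public prime (toy-sized; real use needs 1024+ bits)
    g = 5                 # a public base

    a = secrets.randbelow(p - 2) + 1    # Alice's private value, kept secret
    b = secrets.randbelow(p - 2) + 1    # Bob's private value, kept secret

    A = pow(g, a, p)      # Alice sends A over the insecure channel
    B = pow(g, b, p)      # Bob sends B over the insecure channel

    # Both sides now compute the same shared secret without ever transmitting it.
    assert pow(B, a, p) == pow(A, b, p)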

Types of symmetric algorithms
Two main categories of symmetric algorithms are block ciphers and stream ciphers. Block ciphers break up a message into constant-sized blocks (most often 64 or 128 bits). The output is usually the same size as the plaintext. If the length of a plaintext message isn't a whole-number multiple of the block size, the message is usually padded to an appropriate size by appending dummy data. The encryption implementation itself is most often responsible for any padding operation. In contrast to block ciphers, stream ciphers encrypt a single bit at a time. Stream ciphers tend to be much faster than block ciphers, but most of the well-known and well-studied symmetric algorithms in common use are block ciphers.
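
To make the padding step concrete, here is a minimal Python sketch of one common scheme (PKCS#7-style padding, which is an assumption here, not something prescribed by the article): each added byte records how many bytes of padding were appended, so the receiver can strip it unambiguously.

    BLOCK = 8     # 64-bit blocks

    def pad(msg: bytes) -> bytes:
        n = BLOCK - (len(msg) % BLOCK)        # always add between 1 and BLOCK bytes
        return msg + bytes([n]) * n

    def unpad(padded: bytes) -> bytes:
        return padded[:-padded[-1]]

    padded = pad(b"hello")                    # 5 bytes of plaintext -> 8 bytes
    assert len(padded) % BLOCK == 0
    assert unpad(padded) == b"hello"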

In the easiest mode of operation to understand, each block of data is encrypted separately; a block cipher used this way is said to work in Electronic Code Book (ECB) mode. Be forewarned that this obvious approach presents some security risks. For example, suppose that every block of 64 bits in a data file is encrypted separately. Every time the 64-bit plaintext string "security" (assume 8 bits per character) gets encrypted, it will encrypt to the exact same ciphertext string. If an attacker sees the plaintext for a sample message, and the message happens to include the word "security" perfectly aligned on a 64-bit boundary, every other message with "security" so aligned will immediately be apparent to the attacker.
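
The pattern leak is easy to demonstrate. The sketch below uses a keyed hash truncated to 8 bytes as a stand-in for a real 64-bit block cipher (an assumption made purely for illustration; a real cipher such as DES would show the same effect): under ECB, identical plaintext blocks always produce identical ciphertext blocks.

    import hashlib

    KEY = b"0123456789abcdef"

    def fake_block_encrypt(block: bytes) -> bytes:
        # Keyed hash truncated to 64 bits -- a stand-in, not a real cipher.
        return hashlib.sha256(KEY + block).digest()[:8]

    message = b"security" * 3                 # the same 64-bit block, three times
    blocks = [message[i:i + 8] for i in range(0, len(message), 8)]
    ciphertext_blocks = [fake_block_encrypt(b) for b in blocks]

    # Under ECB, the repetition in the plaintext is visible in the ciphertext.
    assert ciphertext_blocks[0] == ciphertext_blocks[1] == ciphertext_blocks[2]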

Information such as the way a specific word is encoded can help an attacker immensely, depending on the circumstances. In one attack, a bad guy can modify encrypted data without any knowledge of the key used to encrypt the information, simply by inserting previously recorded blocks into a new message. To delve a bit more deeply, consider a money payment system in which deposit messages are encrypted with a block cipher, one block at a time. In our example, the first 128 bits represent the account number into which the money is deposited. The rest of the message encodes the amount of the deposit, the time at which the message was sent, and perhaps some other information. If an attacker knows that the first 128 bits represent the account number, and if the attacker happens also to know that a target account number encrypts to a particular string, the attacker can modify messages in transit to divert a deposit into the target account. The attack works by replacing the real account number in its encrypted form with the attacker's own account number (also in encrypted form). Oops!

Stream ciphers don't suffer from this sort of "replacement" problem precisely because of the way they are designed. All hope is not lost for block ciphers, though. Block ciphers can be modified to mitigate the risk we outlined above, and there are several different strategies that avoid the problem. For example, in Cipher Block Chaining (CBC) mode, blocks are still encrypted one at a time, but the plaintext for each block is XORed with the ciphertext of the previous block before being encrypted. This is a very common approach to block ciphers; in fact, CBC mode is the default mode for many block ciphers. Another mode, called counter mode, uses an arbitrary but reproducible sequence of numbers as an additional input to the cryptographic algorithm. The particular sequence doesn't matter very much; usually a pseudo-random sequence seeded with the clock time is more than sufficient. The counter is then mixed with the plaintext before encrypting. In any such approach, the counter value must not repeat frequently, or the advantage disappears. Using CBC or counter mode in a block cipher helps mitigate the risks incurred when the same plaintext block appears in multiple places in a message (or across multiple messages) by ensuring that the corresponding ciphertext blocks will be different.
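
A minimal sketch of the CBC idea, reusing the same stand-in block "cipher" as the ECB example above (again an illustrative assumption, not a real cipher): each plaintext block is XORed with the previous ciphertext block, or with a random initialization vector for the first block, before being encrypted, so the three identical "security" blocks no longer encrypt identically.

    import hashlib, os

    KEY = b"0123456789abcdef"

    def fake_block_encrypt(block: bytes) -> bytes:
        return hashlib.sha256(KEY + block).digest()[:8]   # stand-in block cipher

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(blocks):
        iv = os.urandom(8)                    # random initialization vector
        prev, out = iv, []
        for block in blocks:
            prev = fake_block_encrypt(xor(block, prev))   # chain in previous output
            out.append(prev)
        return iv, out

    iv, ct = cbc_encrypt([b"security", b"security", b"security"])
    assert ct[0] != ct[1] and ct[1] != ct[2]  # identical plaintext blocks now differ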

In most cases, the modes we have discussed are built right into an algorithm implementation, and it's possible to choose which mode to use in your particular application. We recommend against using ECB mode, but beyond that general advice, picking a mode depends on circumstances. Each mode tends to have its own security implications. CBC presents a reasonable solution, except that attackers can add gibberish to the end of an encrypted message, or introduce it into the middle of a message. You can avoid a problem like this with two preventative measures. First, make sure you know where your messages end. Encode that information at the start of your message, or make it implicit in your protocol. Make sure the receiving end checks the length information before proceeding. Second, add some sort of checksum to the data, so that you can detect cases in which the middle of a message has been modified. (Cryptographic checksums are perfect for this purpose, and we'll discuss them in a future topic, "cryptographic hashing algorithms".) These precautions help mitigate message tampering problems.
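
Here is one way the two precautions might look in practice, sketched with Python's standard library: an explicit length prefix plus a keyed checksum. The choice of HMAC-SHA256 and the wire layout are assumptions for illustration; the article's later discussion of cryptographic hashing covers this ground properly.

    import hashlib, hmac, struct

    MAC_KEY = b"a separate key used only for integrity checking"

    def protect(msg: bytes) -> bytes:
        body = struct.pack(">I", len(msg)) + msg          # explicit 4-byte length prefix
        return body + hmac.new(MAC_KEY, body, hashlib.sha256).digest()

    def verify(wire: bytes) -> bytes:
        body, tag = wire[:-32], wire[-32:]
        if not hmac.compare_digest(tag, hmac.new(MAC_KEY, body, hashlib.sha256).digest()):
            raise ValueError("message was modified in transit")
        (length,) = struct.unpack(">I", body[:4])
        return body[4:4 + length]

    assert verify(protect(b"deposit $100 into account 42")) == b"deposit $100 into account 42"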

Applied Cryptography

According to cryptographic expert Bruce Schneier, really long messages might still make picking out patterns possible, but a risky message would have to be at least 34 gigabytes, in the case of a 64-bit block, before this would even start to become an issue. Spending some time to do research to determine which mode is best for your particular application is a good idea, especially since each mode has different efficiency and fault-tolerance considerations. An excellent discussion of the pros and cons of each common block cipher mode is available in Chapter 9 of Bruce Schneier's Applied Cryptography (see Resources).

Security of symmetric algorithms
If we discount the problem of keeping secret keys secret, the security of symmetric block ciphers depends on two major factors. The first and most important factor is the quality of the algorithm. The second factor, of far lesser importance, is the length of the key used.

Block size can also be a factor. 64-bit blocks may not be secure enough, but 128-bit blocks should be more than adequate.

The research community has done lots of work on developing secure symmetric ciphers. But demonstrating the security of a cipher is still an extremely hard problem. No practical cryptographic algorithm is completely secure.

One perfect encryption algorithm exists, called a "one-time pad." With this algorithm, an n-bit plaintext message is XORed with an n-bit random number (resulting in an n-bit ciphertext). In this case, if a 64-bit message is intercepted, and if the key is truly random, all 2^64 possible bit strings are equally likely. There is no information in the ciphertext that reveals any of the structure of the plaintext, except its original length. In a one-time pad, you can never reuse the same key, or the security of the scheme goes away. Also, as with any symmetric algorithm, there is a hefty key distribution problem. One-time pads are not considered practical enough due to the large key sizes that are necessary. Imagine trying to encrypt data over a network with the capacity for gigabits per second using a one-time pad! Then again, disk space is becoming cheaper and cheaper. One-time pads may become more popular as bandwidth and storage prices continue to drop.
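
The mechanics are nothing more than an XOR with a key as long as the message, as in this sketch. Here os.urandom stands in for a true random source, which is an assumption; a real one-time pad requires truly random, never-reused key material.

    import os

    message = b"attack at dawn"
    pad_key = os.urandom(len(message))        # key exactly as long as the message
    ciphertext = bytes(m ^ k for m, k in zip(message, pad_key))
    recovered = bytes(c ^ k for c, k in zip(ciphertext, pad_key))
    assert recovered == message               # XORing with the same pad undoes it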

With any practical cipher, by contrast, the ciphertext reveals some information about the original plaintext that can be extracted without possessing the key. An attacker's challenge is to recognize the leaking information.

An important goal of any cryptographic algorithm is to make cryptanalysis extremely difficult. Other important goals include speed and memory minimization. Unfortunately, making cryptanalysis difficult is not easy to do. It is possible, but far from easy, for good cryptographers to design an algorithm resilient to all known forms of attack. Moreover, it is far more difficult to design against types of attacks that are completely unknown. Many people believe, but no one knows for sure, that the National Security Agency (NSA) has developed sophisticated attacks against general block ciphers that it has not shared with the rest of the world. Also, there is no telling what sorts of attacks against any algorithm will be discovered in the coming years. The best that cryptanalysts can do is analyze ciphers relative to known attacks, and judge them that way.

When it comes to key length, 128 bits is generally considered more than adequate for messages that need to be protected for a typical human lifespan (assuming no other attacks can be mounted against the algorithm except a brute-force attack, of course). Even 112 bits is considered adequate. To be safe, you may wish to consider 256 bits, which is believed to be secure enough that a computer made of all the matter in the universe, computing for the entire lifetime of the universe, would have an infinitesimal probability of finding a key by brute force. On the opposite end of the spectrum, 64 bits is considered too small a key for high-security applications. According to Schneier, in 1995 someone willing to spend $100 billion could break a 64-bit key in under a minute. Reasonable computational resources can now break such a key in well under a year. And 40 bits is considered only marginally better than no security at all.
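
The back-of-the-envelope arithmetic behind claims like these is straightforward. The search rate below (one trillion keys per second) is purely an assumed attacker capability for illustration, not a figure from the article:

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365
    keys_per_second = 10 ** 12        # assumed attacker speed, for illustration only

    for bits in (40, 56, 64, 128):
        # On average, a brute-force search covers half the key space before succeeding.
        years = (2 ** (bits - 1)) / keys_per_second / SECONDS_PER_YEAR
        print(f"{bits:3d}-bit key: roughly {years:.2e} years on average")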

Data Encryption Standard and Advanced Encryption Standard
Symmetric algorithms are in widespread use, and have been studied extensively by scientists. There are literally hundreds of such algorithms, a few of them good, some of them bad. The most commonly used symmetric algorithm is the Data Encryption Standard (DES), a block cipher that uses 56-bit keys. It was created by IBM (and partners), under the guidance of the NSA, and has been a U.S. government standard for many years.

Key size

The key is actually 64 bits, but eight of the key bits are parity bits. Since the parity bits are a function of other bits of the key, they provide no added cryptographic security, meaning DES keys are effectively 56 bits.

Many modern ciphers have been patterned after DES, but few have stood up as well to cryptanalysis as DES has. The main problem with DES is the very short key length, which is completely inadequate in this day and age.

It is possible to adapt DES, with its short key length, to be more secure, but this can't be done arbitrarily. One (bad) idea that many people try involves applying DES twice -- something known as double encryption. In such a scheme, a message is encrypted once using one key, then encrypted again (ciphertext to modified ciphertext) using a second key. A very subtle form of attack turns out to render this kind of double encryption not much better than single encryption. In fact, with certain types of ciphers (a class called closed ciphers), multiple encryption is provably no better than single encryption. You should avoid closed ciphers because they are more prone to attack than most.
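
A tiny sketch makes the closed-cipher point concrete. For a simple XOR cipher (used here only as an illustrative assumption), encrypting twice with two different keys is exactly the same as encrypting once with a single combined key, so double encryption buys nothing. DES itself has been shown not to be closed in this way, which is why triple DES does add strength.

    def xor_encrypt(key: bytes, msg: bytes) -> bytes:
        return bytes(m ^ key[i % len(key)] for i, m in enumerate(msg))

    k1, k2 = b"12345678", b"abcdefgh"
    combined = bytes(a ^ b for a, b in zip(k1, k2))     # a single equivalent key

    msg = b"move the funds at noon"
    # Encrypting twice is identical to encrypting once: no added strength.
    assert xor_encrypt(k2, xor_encrypt(k1, msg)) == xor_encrypt(combined, msg)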

Although double encryption isn't very effective, it turns out that triple encryption is about as effective as one might naively expect double encryption to be. For example, 56-bit DES, when triple-encrypted, yields 112 bits of strength, which is believed to be more than adequate for any application. Triple-encrypted DES, or simply triple DES (often seen written as 3DES) is a popular modern symmetric block algorithm.

Triple DES is not a panacea, though. One problem with triple DES is its speed, or lack thereof. Partially because of the speed issue, the U.S. Commerce Department's National Institute of Standards and Technology has initiated a competition for the Advanced Encryption Standard (AES). So far, NIST has selected five algorithms as finalists for the standard. The winner will be announced before the end of 2000.

One risk of the AES competition is that all the algorithms are relatively new. No amount of scrutiny in the short lifetime of the candidate algorithms can compare to the intense scrutiny that has been placed on DES over the years. Nonetheless, some people still believe there is a slight risk that the NSA placed a hidden "trap door" in DES that makes it easy for the agency to attack. These people are likely to embrace a standard in which the NSA had no hand (however, the NSA has very strong ties to NIST). And, in truth, it is very likely that the AES winner will prove at least as good an algorithm over the next 50 years as DES has been since its introduction, despite its relatively short period of public scrutiny so far.

Although triple DES has some performance issues, for the time being it is a highly recommended solution. A big advantage to this algorithm is that it is free for any use. You can download several good implementations of DES from the Internet. The AES winner will also be free for any use, with at least one free reference implementation. Many of the five candidates are already free. The AES candidates are all much faster than DES for most applications.

Triple DES

Although DES is technically the standard, triple DES is considered by most experts to be the de facto interim standard until the AES competition is complete.

Plenty of commercial symmetric algorithms are also available. Since these algorithms are proprietary, they tend not to be as well-analyzed, at least publicly. But some proprietary algorithms offer excellent security, and run quite efficiently. Nonetheless, we see no overwhelming reason to recommend anything other than the standard for most applications -- triple DES for now, and the AES winner once it has been announced.

One problem with a symmetric key solution is the requirement that each pair of communicating agents needs a unique key. This presents a key management nightmare in a situation with lots of users. Every unique pair of communicating entities needs a unique key! As a way around this problem, some designers turn to key derivation algorithms. The idea is to use a master key to derive a unique key for each communicating pair. Most often, key derivation uses some unique user identification information (such as user serial number) to transform the master key. The inherent risk in this scheme is obvious. If the master key is compromised, all bets are off. In fact, if even one derived key can be cracked, this may provide enough information to attack the master key. Regardless of this risk, many systems rely on symmetric algorithms with key derivation to control cryptographic costs.
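
Here is a sketch of what key derivation might look like, using HMAC-SHA256 from Python's standard library as the derivation function; the function choice and the identifier scheme are assumptions for illustration, not a prescription from the article.

    import hashlib, hmac

    MASTER_KEY = b"highly protected master secret"

    def derive_pair_key(user_a: str, user_b: str) -> bytes:
        # Sort the identifiers so either party derives the same key.
        label = "|".join(sorted([user_a, user_b])).encode()
        return hmac.new(MASTER_KEY, label, hashlib.sha256).digest()[:16]   # 128-bit key

    assert derive_pair_key("alice", "bob") == derive_pair_key("bob", "alice")
    # Anyone holding MASTER_KEY can reproduce every derived key -- the risk noted above.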

In a related practice, designers often make use of session keys instead of using the same symmetric key for all encrypted communication between two agents. As with the derived keys above, the idea is to use a master key for each communicating pair (perhaps itself derived from a global master key) to derive a short-lived session key. That way, if a session key is compromised, only the traffic protected by that key is exposed, and the system as a whole can continue to be useful. Once again, though, deriving session keys from a master communicating-pair key presents risks to the entire system.

Public key cryptography
As we mentioned in the previous section, one of the biggest problems with symmetric cryptography is the key distribution problem. The difficulty is figuring out how to exchange a secret key securely with a remote party over an insecure medium. Public key cryptography circumvents the key exchange problem completely. In a public key system, a pair of keys, instead of copies of one single key, is used for cryptographic operations. One of the two keys is made available publicly, and is called the public key. This key can be given away freely, put on a Web page, broadcast, and so on. A public key is used to encrypt messages. The second key, unlike the first, must remain secret. It is called the private key and it is used to decrypt messages. The security of the private key is as important as key security in a symmetric algorithm. The essential difference between the algorithmic approaches is that in public key cryptography, the private key never needs to be shared with anyone, thus alleviating the key distribution problem.

In a public key system, a message is encrypted by the sender using the public key of the receiver. Barring a weakness in the algorithm, and assuming the algorithm is implemented properly, the only person who should be able to decrypt a message encrypted in this way is the person who possesses the associated private key. This system is analogous to a mailbox into which everyone can place mail. In this analogy, no one can easily retrieve mail from the box unless they possess the carefully guarded secret key that opens the box.

A graphical overview of public key cryptography, illustrating what happens when Alice sends a message to Bob, is shown below in Figure 2.

Figure 2: A high-level overview of public key cryptography

The Achilles' heel of public key algorithms is that encryption and decryption of messages tends to be incredibly slow relative to symmetric key algorithms. In general, software implementations of public key algorithms tend to be around 100 times slower than DES, and 1,000 times slower when each is implemented in hardware. For this reason, it's not practical to encrypt large messages in a timely manner using public key cryptography.

Fortunately, encrypting small messages does seem to fit within acceptable bounds. As a result, people tend to mix the symmetric and public key algorithms together in practice. In such a mix, most communication is carried out with a relatively fast symmetric algorithm. But the high-risk part, exchanging the symmetric key, makes use of a public key algorithm. As we alluded to above, this is a great way to avoid the key distribution problem. It also addresses the key derivation risks cited at the end of the symmetric key section (at least for platforms with reasonable computational resources). Given a solid way to distribute keys in a reasonable time, there is little reason to turn to key derivation and take on the associated risks.

Let's consider an example of a mixed approach in which Alice wishes to initiate communication with Bob. Alice and Bob will communicate mostly using a symmetric cipher such as triple DES. However, first they need to exchange a secret key to last the lifetime of their session before they can use the cipher. The secret key they need to share is called the session key. Hopefully, Alice will generate this key using a secure source of randomness, and not a poorly designed key derivation algorithm. (See our series of articles on random number generation, here on developerWorks, for more information on randomness.) To get the session key to Bob in a secure manner, Alice employs public key cryptography. First, she looks up Bob's public key, and uses it to encrypt the session key. She sends the resulting ciphertext to Bob. Bob uses his private key to decrypt the ciphertext, giving him a copy of the session key. All subsequent communication is done using triple DES and the session key (a shared secret).

The RSA algorithm
The most famous public key algorithm is the RSA algorithm. The general idea behind RSA starts with picking two large prime numbers, p and q. These numbers remain secret, but you can publish their product n, along with some additional information. The published information is used to encrypt a message. As a result of the mathematics involved, only someone who knows p and q can decrypt the message. Usually p and q are exceptionally large primes, making their product n a number with hundreds of digits.
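
The mechanics can be illustrated with toy-sized textbook numbers (p = 61 and q = 53 are an illustrative assumption; real keys use primes hundreds of digits long):

    p, q = 61, 53                     # toy primes; real keys use primes hundreds of digits long
    n = p * q                         # published modulus: 3233
    phi = (p - 1) * (q - 1)           # kept secret; computable only from p and q
    e = 17                            # published encryption exponent
    d = pow(e, -1, phi)               # private decryption exponent (Python 3.8+)

    message = 42                              # a message encoded as a number smaller than n
    ciphertext = pow(message, e, n)           # anyone can encrypt with the public (e, n)
    assert pow(ciphertext, d, n) == message   # only the holder of d (i.e., of p and q) can decrypt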

The security of the RSA algorithm is believed to be equivalent to the difficulty of factoring n to recover p and q. Factoring large numbers is believed to be really difficult to do, although this generally accepted claim has never been proven conclusively. Nonetheless, RSA has stood up well to public scrutiny for more than twenty years, so people have gained faith in it.

Many people intuitively believe that there is a small number of very large primes, and go on to reason that there are likely to be many problems with RSA's system. Why couldn't an attacker simply create a database of all possible keys? Each possible key could be tried in a brute force attack. The bad news is that this sort of attack is possible. But the good news is that there are far more prime numbers than most people may suspect. There are about 10^151 primes of length up to 512 bits, which means there are enough primes of up to 512 bits to assign every atom in the universe 10^74 prime numbers without ever repeating one of those primes.
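
For readers who want to see where a figure like 10^151 comes from, the prime number theorem estimates the number of primes below x as roughly x divided by ln(x); a couple of lines of Python reproduce the order of magnitude:

    import math

    # pi(x), the number of primes below x, is roughly x / ln(x).
    # For x = 2**512 this gives about 10**151.6 primes, matching the figure above.
    exponent = 512 * math.log10(2) - math.log10(512 * math.log(2))
    print(f"primes below 2^512: roughly 10^{exponent:.1f}")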

In RSA, security is mostly dependent on how difficult the composite of two prime numbers is to factor. Recall that in a symmetric cryptosystem we said that 256-bit keys are believed to be large enough to provide security for any application through the end of time. With RSA and other public key cryptosystems, the same heuristic does not apply. In fact, it is very difficult to say what a good public key system length will be in 50 years, never mind any farther down the road. Obviously, a 256-bit number represents fewer possible RSA keys than 2^256, since clearly not every number is prime. So, the size of the space is considerably smaller for a brute-force attack. In fact, it turns out that factoring a 256-bit number directly does not take an unreasonable amount of resources. In the end, comparing public key length to symmetric key length directly is like comparing apples and oranges.

One major concern is our inability to predict what kinds of advances researchers will make in factoring technology. Many years ago, it was believed no one would ever have the resources necessary to factor a 125-digit number. Now anyone willing to spend just a few months and a few million dollars can factor a 512-bit number. As a good security skeptic, you should use no less than a 2048-bit key for the foreseeable future. That is, use 2048 bits if your data needs to remain secure for long periods of time (such as two or more years). 1024-bit keys are appropriate for other uses for the time being. Just keep in mind that a good chance exists for even 1024-bit keys to be easily factored in just a few years. If you're reading this in 2005, you should check to make sure that 2048-bit keys are still considered sufficient for practical uses, because they may not be!

What's the drawback of using a very long key? With public key cryptography, the longer the key, the longer it takes to generate. While you might be well served by a huge (say, 100,000-bit) key, you are really not likely to want to wait around long enough to generate one. Currently, generating a 2048-bit key takes several minutes on most machines. Even that smallish size pushes the limits of what an average user is willing to wait for.

Some non-technical concerns are also associated with RSA. Chief among these is that RSA is currently covered by a U.S. patent, so use of RSA requires a license. Thankfully, the patent situation is about to change. The patent on the RSA algorithm expires on September 20, 2000, and cannot be renewed. By the time you read this, the algorithm may be in the public domain.

In the meantime, the ElGamal algorithm, which is unquestionably in the public domain, is a good alternative. There was once some doubt as to whether this algorithm was covered by a U.S. patent, but the patent in question expired in 1997. ElGamal is based on the discrete logarithm problem, which is believed to be at least as hard a problem as factoring, so this algorithm should provide security similar to RSA's. Currently, we believe these two algorithms (RSA and ElGamal) are the best available choices for high-secrecy traffic. No other algorithms, including Elliptic Curve Cryptography, have been studied as extensively from a mathematical perspective as RSA and ElGamal.

Attacks against public keys
Public key cryptosystems are susceptible to chosen-plaintext attacks, especially when there is only a small number of possible messages that can be sent (as constrained by a program design, for example). Symmetric algorithms tend to be much more resilient to such attacks. The good news is, in a mixed approach such as the one we described above, if you are only using public key cryptography to encrypt a session key, then you have little to worry about.

Another type of attack that is fairly easy to launch against most public key cryptosystems is a "man-in-the-middle" attack. Once again consider a situation in which Bob and Alice wish to communicate. Imagine that Pete is able to intercept the exchange of public keys. Pete sends Alice his own key, but misrepresents it as Bob's key. He also sends Bob his own key, misrepresenting it as Alice's key. Pete now intercepts all traffic between Alice and Bob. If Alice sends a message to Bob in the compromised system, she's actually encrypting the message using Pete's public key, even though she thinks she is using Bob's. Pete gets the message, decrypts it, and stores it, so he can read it later. He then encrypts the message using Bob's public key, and sends it on to Bob. Bob gets the message, and is able to decode it, unaware that it really came from Pete, and not Alice.

The primary problem here is that Alice has no way of ensuring that the key she obtained actually belongs to Bob. There are several ways to get around this problem, but they tend to involve the presence of a Public Key Infrastructure (PKI). In a generic PKI, a trusted third party keeps track of everybody's key. The public key for that trusted third party is distributed as widely as possible, and is thus easy to verify. In the future, a trusted third-party key like this will probably come embedded in your software.

The PKI approach avoids the man-in-the-middle attack because a skeptical user can contact the third party (often called a certificate authority) using cryptographic protocols and ask it whose key was actually received. The authority's answer can in turn be verified against the widely known public information about the authority.

The PKI strategy works fairly well using a complex protocol. However, there are some drawbacks. First, how can anyone be sure that they can trust the so-called trusted authority? Second, what if someone is able to defraud the authority, say, by hacking their site? The second risk is a biggie.

One of the largest certificate authorities is a company called VeriSign (see Resources). VeriSign helps spread trust around by performing background checks on people before deciding to trust them and issuing a public key identity. (Strangely, in this model, a person who wants to be trusted pays for this trust!) The problem is that the identity checking in the early days was on the lax side. Soon after VeriSign introduced its services, some people began registering false identities. For example, someone masqueraded on the Net as Bill Gates. Perhaps the best way to circumvent these problems is to exchange keys in person, or through any medium that you trust completely. Sometimes the phone is sufficient, if you know the other party well by voice.

Beyond the issue of lax identity checking, verification is also a user issue. The problem is that, in practice, people often don't bother to check the results given by the trusted third party. This problem is extremely widespread, especially in the use of "secure" Web sites.

Is your order secure?
Here's a common scenario. Say you want to buy some books from Amazon.com. When you go to check out using Netscape, you'll notice the little lock icon in the left-hand corner of the browser switches from "unlocked" to "locked," indicating that you are talking to Amazon.com over an encrypted channel. Or are you? Not necessarily. The lock just indicates that you're talking to someone over an encrypted channel. You can find out who the trusted third party claims to be by clicking on the lock, then clicking on "View Certificate." If you're not subject to a man-in-the-middle attack (like Web spoofing), you will see Amazon.com's information displayed in the certificate window. But if a man-in-the-middle attack is taking place, or if some second party is completely spoofing Amazon.com's look and feel, and you're not really talking to Amazon.com at all, you will see different information. By actually bothering to check, you can be pretty sure whether or not you're talking to the party you intended. Unfortunately, in practice, most people never check this information! They just see the lock, and feel good about it. If you don't check the certificate, you can't have any assurance as to the security of its originator.

This problem permeates practical implementations of public key cryptography. For example, many uses of Netscape's Secure Socket Layer (SSL), a public key-based protocol used in Netscape's browser and many other applications, suffer from the "not checking who you're connected to" problem. People get a warm fuzzy because they're using SSL to encrypt data, but they don't realize that the validation problem still exists.

Unless an application has a reliable method of distributing public keys, such as personal delivery, it is never a good idea to blindly trust that you have a correct public key. Don't go for the passive Netscape solution where data from the trusted authority is only available by clicking on the icon. Put the information right in the user's view. Sure, it's another dialog box, and a hassle for the user. But hopefully the user will understand the importance of checking the dialog box, and won't just get rid of it without reading it. Even if that happens, you've at least made a good effort on your end. We firmly believe any man-in-the-middle attacks made against Netscape are more the responsibility of Netscape than the end user, since Netscape doesn't make a good effort to get users to validate the party with whom they're communicating. If there were an "in your face" dialog, then the responsibility would shift to the user, in our opinion. Dismissing the dialog box without reading it is something the user does at his own risk.

Resources

About the authors
Gary McGraw is the vice president of corporate technology at Reliable Software Technologies in Dulles, VA. Working with Consulting Services and Research, he helps set technology research and development direction. McGraw began his career at Reliable Software Technologies as a Research Scientist, and he continues to pursue research in Software Engineering and computer security. He holds a dual Ph.D. in Cognitive Science and Computer Science from Indiana University and a B.A. in Philosophy from the University of Virginia. He has written more than 40 peer-reviewed articles for technical publications, consults with major e-commerce vendors including Visa and the Federal Reserve, and has served as principal investigator on grants from Air Force Research Labs, DARPA, National Science Foundation, and NIST's Advanced Technology Program.
McGraw is a noted authority on mobile code security and co-authored both "Java Security: Hostile Applets, Holes, & Antidotes" (Wiley, 1996) and "Securing Java: Getting down to business with mobile code" (Wiley, 1999) with Professor Ed Felten of Princeton. Along with RST co-founder and Chief Scientist Dr. Jeffrey Voas, McGraw wrote "Software Fault Injection: Inoculating Programs Against Errors" (Wiley, 1998). McGraw regularly contributes to popular trade publications and is often quoted in national press articles.


John Viega is a Senior Research Associate, Software Security Group co-founder, and Senior Consultant at Reliable Software Technologies. He is the Principal Investigator on a DARPA-sponsored grant for developing security extensions for standard programming languages. John has authored over 30 technical publications in the areas of software security and testing. He is responsible for finding several well-publicized security vulnerabilities in major network and e-commerce products, including a recent break in Netscape's security. He is also a prominent member of the open-source software community, having written Mailman, the GNU Mailing List Manager, and, more recently, ITS4, a tool for finding security vulnerabilities in C and C++ code. Viega holds an M.S. in Computer Science from the University of Virginia.


