This article is an excerpt from the new book IBM WebSphere: Deployment and Advanced Configuration, by Roland Barcia, Bill Hines, Tom Alcott, and Keys Botzum, to be published by IBM Press in August 2004. The text has been edited and formatted for publication as a standalone article.
In this article, we will cover several aspects of security. We will briefly discuss why security is important and then detail the WebSphere Application Server security architecture. We will cover some subtle points of WebSphere Application Server security and then move on to detailed discussions of how to harden a WebSphere Application Server environment to be secure. Finally, we will provide some tips for troubleshooting security problems. Given the limited space available, much of this material will be high level and will not delve into the details. Wherever possible, we will point the reader to appropriate references for detailed information.
Hopefully, most readers realize that security is a key aspect of enterprise systems. Nonetheless, we will briefly justify security in order to introduce some common ways of thinking about it.
The fundamental purpose of security is to keep the bad people out of your systems. More precisely, security is the process of applying various techniques to prevent unauthorized parties, known as intruders, from gaining unauthorized access to things.
There are many types of intruders out there: foreign spy agencies, corporations in competition with you, hackers, and even your own employees. Each of these intruders has different motivations, different skills and knowledge, different access, and different levels of need. For example:
- An employee may have a grudge against the company, and while employees have tremendous levels of internal access and system knowledge, they probably have limited resources and hacking skill.
- An external hacker is probably an expert in security attacks, but may not have any particular grudge against you.
- A foreign spy agency, depending on your business, may have a great deal of interest in you and possess tremendous resources.
Intruders may be after your systems for one of two reasons: to gain access to information that they should not have, or to alter the behavior of a system in some way. In the latter case, by changing the system behavior, they may seek to perform transactions that benefit them, or they may wish to simply cause your system to fail in some interesting way in an effort to harm your organization.
As you can see, there are many different types of intruders and motivations and, as we will discuss later, many different types of attacks. As you plan your security, you need to be aware of this.
We also want to emphasize that security should not be seen as simply a gate that keeps the "outsiders" out. That is far too simplistic a view. Many organizations today focus their security efforts entirely on people outside of the organization in the mistaken belief that only outsiders are a danger. This is simply not the case. People within your organization are also very likely to attack your systems. Several recent studies have indicated that perhaps as many as half of all break-ins are caused by or involve employees or contractors within the organization. It is crucial that your security efforts protect your systems from all potential intruders. This is why this article is so long. Security consists of more than just some firewalls at the edge of your network protecting you from the "outside." It is a difficult and complex set of actions and procedures that strive to strengthen your systems as much as appropriate.
Limits and reality
It is important to realize that there is no such thing as a perfectly secure system. Your goal is to protect the system as well as you can within the constraints of the business. When thinking about security, you ideally should:
- Analyze the various points of attack.
- Consider the risk of an attack at each point.
- Determine the potential for damage from a successful attack that results in a security breach.
- Estimate the cost of preventing each attack.
When estimating the damage of a security breach, never forget that a breach can cause the system's users to lose faith in it. Thus, the "cost of a security breach" may include very high indirect costs (such as loss of investor confidence).
Once you have performed the steps listed above, you can then determine appropriate tradeoffs of risk versus cost. Essentially, the goal is to make the cost to the intruder of breaking into your system exceed the value of what is gained, while at the same time ensuring that the business can bear the costs of running the secure system. (This can be a problem when some hackers break into systems simply for the fun of it. What you can hope is that by creating a reasonably secure environment, intruders will move on to easier targets.) Ultimately, the level of security required is a business -- not a technical -- decision. However, as technicians, we must help all parties understand the value and importance of security.
Security is a large topic, and it is impossible to completely cover all aspects of security in this article. This article is not intended to be an introduction to security or a tutorial on how to secure WebSphere Application Server-based systems. Rather, it is a high-level overview or checklist of the core technical issues as they relate to WebSphere Application Server that need to be considered. The information in this article should be used in conjunction with a much larger effort of creating a secure enterprise.
Readers interested in learning more should refer to the Resources section. In particular, Enterprise Application Security provides a high-level, if somewhat dated, overview of the basics of application security.
Since this is a technical article, we are focused on technical solutions to securing systems. In fact, we are focused primarily on the WebSphere Application Server piece of the security puzzle. Nonetheless, you should be aware that it is often easier to compromise systems using social engineering techniques. That is, by tricking the human beings that work for your organization, attackers are able to gain access to systems and information to which they should not have access. (See What is social engineering? for more.)
Perhaps the one conclusion of relevance to this article's discussion that we can learn from social engineering attack techniques is the fact that by using social engineering, your attackers may be coming from within your network. This again serves to emphasize the earlier point that security that is focused solely on keeping the intruders out of the network is foolishly insufficient. This is why the discussion here will focus on security at multiple levels. Each level tries to thwart different types of attacks and also provides more barriers to attackers.
WebSphere Application Server security architecture
We assume the reader is familiar with J2EE security and the basics of security. If you are not, refer to the J2EE specification for details on how security is specified in a J2EE application (as well as Enterprise Application Security). Here, we are concerned with how WebSphere Application Server implements security. We will not delve into low-level details, as they are generally irrelevant, but it is helpful to understand at a high level how the WebSphere Application Server security infrastructure works. This will aid in defining a secure infrastructure and in troubleshooting.
As with any secure system, WebSphere Application Server provides functions for authentication, authorization, and data protection. WebSphere Application Server provides for three forms of authentication:
- userid and password
- client certificates
- identity assertion
WebSphere Application Server implements the required J2EE authorization methods and also provides for plugging in Tivoli Access Manager as an external authorization engine. In most cases, WebSphere Application Server uses SSL for data protection when transmitting information over a network connection. We will now discuss each form of authentication in greater detail.
There are two authentication cases to consider:
- Web client authentication
- EJB™ client authentication.
We will not be discussing Web services authentication. Other than a brief mention of the JAAS client authentication APIs (for use in EJB clients), we will not be discussing JAAS either. This is because, while WebSphere Application Server V5 does partially support JAAS and custom login modules, JAAS can only supplement the WebSphere Application Server authentication process. A JAAS custom login module cannot alter or replace the existing WebSphere Application Server authentication tokens; it can add custom attributes to a Subject, but the WebSphere Application Server login module must still execute to achieve proper authentication. Figure 1 shows the basic architecture of the WebSphere Application Server authentication infrastructure.
Figure 1. WebSphere Application Server authentication architecture
The most common way for Web clients to authenticate is by providing a userid and password (as HTTP Basic Auth or Form-based). WebSphere Application Server takes this information, looks up the user's unique ID (for example, a DN for LDAP) in the registry, and then verifies the password against the registry. In the case of LDAP, an ldap_bind is performed.
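For readers who want a concrete picture, the sketch below shows what an ldap_bind-style password check looks like through Java's standard JNDI LDAP provider. The host URL and DN are illustrative placeholders, not values taken from any WebSphere Application Server configuration:

```java
import java.util.Hashtable;
import javax.naming.Context;
// javax.naming.directory.InitialDirContext performs the actual bind

class LdapBindSketch {
    // Build the JNDI environment for a simple (password) bind.
    // The provider URL below is a placeholder for your LDAP host.
    static Hashtable<String, String> bindEnv(String userDn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple"); // password-based ldap_bind
        env.put(Context.SECURITY_PRINCIPAL, userDn);        // the user's full DN
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }
    // Usage (attempts a real network bind, so it is shown as a comment):
    // new javax.naming.directory.InitialDirContext(bindEnv(dn, pw));
}
```

The bind itself happens when `InitialDirContext` is constructed; a bad password surfaces as a naming `AuthenticationException`.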
Web clients can also authenticate using client certificates. As with any SSL system, client certificate authentication is done at the termination of the SSL connection. Thus, the Web server is responsible for performing the client certificate authentication, rather than WebSphere Application Server. Once the certificate authentication is complete, the WebSphere Application Server Web server plug-in forwards the client certificate information to the WebSphere application server, and the application server then extracts information from the certificate and looks up the user in the registry. The information used for the lookup is customizable, and can be totally customized if a custom registry is developed.
Trust Association Interceptors
Web clients can also authenticate by using a Trust Association Interceptor (TAI). Essentially, by using a TAI, WebSphere Application Server allows an external component to authenticate the user and then assert the identity (identity assertion) to the WebSphere Application Server Web container. You can custom develop a TAI or use one of several that are already commercially available. These TAIs are typically used in conjunction with a Web authentication proxy server, such as IBM's Tivoli Access Manager or Netegrity®'s SiteMinder®. These products authenticate the user and then simply inform WebSphere Application Server as to the end-user's identity. Typically this is done by the proxy server sending the user's ID and some additional verifiable information to the application server. The TAI extracts this information and then returns the user's ID to WebSphere Application Server. WebSphere Application Server then queries the registry as it normally would but does not validate the user's password. (If the userid is not found in the registry, authentication will, of course, fail.) This provides a powerful mechanism for allowing WebSphere Application Server to participate in a Single Sign On domain.
In any case, after the Web authentication is complete (in the TAI or normal Web authentication case), WebSphere Application Server creates a JAAS Subject containing the user's authentication information and an LTPA token. For Web clients, WebSphere Application Server also creates an LTPA cookie to send to the browser. This cookie is essentially a string representation of the LTPA token.
EJB client authentication
EJB clients can authenticate using passwords or certificates. In the case of password-based authentication, the client run time is responsible for obtaining the userid and password and sending them to the server where they are verified against the registry. In any case, if the authentication is determined to be valid, a CSIv2 session is established that contains an LTPA token and is used for future requests. As with Web client authentication, a JAAS Subject is created as well.
By default, the WebSphere Application Server client run time prompts for a userid and password using a graphical dialog box, if one is needed. This behavior can be controlled by editing sas.client.props. You can even specify a userid and password in that file. However, it is best for clients to use the JAAS Login APIs to authenticate after obtaining the userid and password in some appropriate way under application control.
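As a sketch of obtaining credentials "under application control," a minimal JAAS CallbackHandler might look like the following. The class name is ours; only the javax.security.auth callback types are standard:

```java
import java.io.IOException;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;

// Supplies a userid and password to a JAAS LoginContext without any GUI prompt.
class SimpleCallbackHandler implements CallbackHandler {
    private final String user;
    private final char[] password;

    SimpleCallbackHandler(String user, char[] password) {
        this.user = user;
        this.password = password;
    }

    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        for (Callback cb : callbacks) {
            if (cb instanceof NameCallback) {
                ((NameCallback) cb).setName(user);
            } else if (cb instanceof PasswordCallback) {
                ((PasswordCallback) cb).setPassword(password);
            } else {
                throw new UnsupportedCallbackException(cb);
            }
        }
    }
}
```

A client would then pass an instance of this handler to a `javax.security.auth.login.LoginContext`; the login configuration alias to use is WebSphere-specific, so check your version's documentation for the exact name.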
Application code, as well as WebSphere Application Server itself, may also authenticate from within the process, essentially creating an authenticated Subject on-the-fly. To do this, the standard JAAS login APIs are used. When this is done, the same approach is used as in other scenarios: the provided userid and password are validated against the registry, and if validation is successful, a JAAS Subject and LTPA token are created. Though it may not be obvious, this implies that when WebSphere Application Server servers authenticate themselves, they use the same registry as is used for user-level authentication.
JAAS Subjects and LTPA token
Once the authentication process is completed, WebSphere Application Server creates a JAAS Subject to represent the current authenticated user. This Subject contains the WebSphere Application Server credentials needed by the WebSphere Application Server run time to authorize user access. As you might expect, the information it contains comes from the registry. Subjects are cached in a memory table by each application server.
In addition to the JAAS Subject, WebSphere Application Server creates an LTPA token that is essentially a key into the Subject table. For security reasons, the LTPA token has a finite lifetime and is digitally signed and encrypted. Once the token expires, the user will have to reauthenticate. The token itself contains only the user's unique identifier and a timestamp; thus, the token itself contains little information of value. This overall approach provides a high degree of security.
Since the LTPA token uniquely identifies a user, if an LTPA token is sent (via a Web request or IIOP request) to an application server that does not have a cached copy of the user's Subject, the target WebSphere application server can re-create the Subject by querying the registry. Once this is done, the Subject will be held in the cache to ensure high performance for future requests. This method ensures Single Sign On at both the Web layer and at the EJB layer.
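To make the signed-and-expiring token idea concrete, here is a deliberately simplified, hypothetical sketch using an HMAC. Real LTPA tokens are additionally encrypted and use WebSphere Application Server's own key material and wire format, so treat this only as an illustration of the concept:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative only: a token carrying a unique ID and an expiry timestamp,
// protected by an HMAC signature. This is NOT the real LTPA format.
class TokenSketch {
    private final SecretKeySpec key;

    TokenSketch(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Token layout: uniqueId % expiry % signature
    String issue(String uniqueId, long expiresAtMillis) throws Exception {
        String body = uniqueId + "%" + expiresAtMillis;
        return body + "%" + sign(body);
    }

    // Valid only if the signature matches and the expiry is in the future.
    boolean verify(String token, long nowMillis) throws Exception {
        int cut = token.lastIndexOf('%');
        if (cut < 0) return false;
        String body = token.substring(0, cut);
        String sig = token.substring(cut + 1);
        long expiry = Long.parseLong(body.substring(body.lastIndexOf('%') + 1));
        return sig.equals(sign(body)) && nowMillis < expiry;
    }

    private String sign(String body) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return Base64.getEncoder().encodeToString(mac.doFinal(body.getBytes(StandardCharsets.UTF_8)));
    }
}
```

Note how tampering with the user ID or waiting past the expiry invalidates the token, which is exactly why an expired or altered LTPA token forces reauthentication.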
We have mentioned registries several times. WebSphere Application Server supports three types of registries:
- Operating system

When using an operating system registry, WebSphere Application Server uses native operating system commands to verify user information against the local machine. Generally speaking, operating system registries cannot be used in a multi-node WebSphere Application Server ND environment because each machine has its own registry. (With Windows® hosts and a common domain registry, a single WebSphere Application Server cell can span multiple hosts when using an operating system registry.)

- LDAP

An LDAP registry is by far the preferred approach and is fully supported in a multi-node WebSphere Application Server ND environment. When WebSphere Application Server is configured to use LDAP, it uses the standard LDAP V3 protocols to communicate with the LDAP directory and verify user information.

- Custom

WebSphere Application Server also supports a custom registry. In the event that you cannot use one of the supported registries, you are free to write your own registry by implementing the UserRegistry interface (in Java™).
In any case, when WebSphere Application Server uses a registry, it performs the following operations as part of the authentication process. Understand that these operations are always performed regardless of other considerations:
- Obtain full user unique identifier
Security information is tracked based on this internal registry identifier rather than the short username that humans generally provide.
- Verify password
This is used if password authentication is being used.
- Obtain user group information
This is used later as part of authorization. (In WebSphere Application Server V5.1.1, a new interface known as TAI++ will be published that will make it possible to avoid this step with TAIs. Basically, the TAI can assert to WebSphere Application Server the group information.)
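The three operations above can be pictured as a tiny registry abstraction. The interface and the in-memory data below are purely illustrative; this is not the real WebSphere Application Server UserRegistry SPI:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// A hypothetical registry illustrating the three operations performed
// during authentication: unique-ID lookup, password check, group lookup.
interface Registry {
    String getUniqueId(String shortName);        // e.g. map "jdoe" to a full DN
    boolean checkPassword(String shortName, String password);
    List<String> getGroups(String shortName);    // used later for authorization
}

class InMemoryRegistry implements Registry {
    private final Map<String, String> uniqueIds =
            Map.of("jdoe", "uid=jdoe,ou=people,o=example");
    private final Map<String, String> passwords = Map.of("jdoe", "secret");
    private final Map<String, List<String>> groups =
            Map.of("jdoe", Arrays.asList("cn=admins,o=example"));

    public String getUniqueId(String shortName) { return uniqueIds.get(shortName); }
    public boolean checkPassword(String shortName, String password) {
        return password.equals(passwords.get(shortName));
    }
    public List<String> getGroups(String shortName) { return groups.get(shortName); }
}
```

The password check is skipped in the TAI identity-assertion path described earlier, but the unique-ID and group lookups always happen.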
Be aware that a single WebSphere Application Server cell can have only one registry. This means that all users, including the WebSphere Application Server administrators and the Security Server ID, must be in this one registry. This also means that all group information for those users must be in this same registry. If you require that users be in one of several registries (perhaps you have multiple LDAP servers), you will have to write a custom registry.
In compliance with the specification, WebSphere Application Server implements the J2EE-required authorization model. J2EE applications can use the standard J2EE APIs as well as deployment descriptors to specify authorization information.
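As a reminder of what that declarative model looks like, a web.xml fragment such as the following restricts a set of URLs to callers in a given role (the role name and URL pattern here are examples only); programmatically, a servlet can make the equivalent check with `request.isUserInRole("manager")`:

```xml
<!-- Illustrative fragment: role name and URL pattern are examples only -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Protected area</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>manager</role-name>
  </auth-constraint>
</security-constraint>
<security-role>
  <role-name>manager</role-name>
</security-role>
```

At deployment time, the manager role is mapped to actual users or groups from the configured registry.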
It is also possible to externalize WebSphere Application Server's authorization to Tivoli Access Manager and a few other authorization vendors, although we will not cover that here.
Advanced considerations for security configuration
We assume the reader is familiar with the basics of WebSphere Application Server security configuration. If this is not the case, refer to the WebSphere Security Handbook or the WebSphere Application Server Information Center (see Resources). Here, we will discuss a few of the more advanced and perhaps more obscure aspects of configuring WebSphere Application Server security.
SSL, keystores, and iKeyman
SSL is a key component of the WebSphere Application Server security architecture that is used extensively for securing communication. SSL is used to protect HTTP traffic, IIOP traffic, LDAP traffic, and internal SOAP traffic. SSL requires the use of public/private key pairs, and in the case of WebSphere Application Server, these keys are stored in keystores. In order to better understand how to properly configure SSL, we are going to briefly digress into a high-level discussion of SSL and public keys. This discussion is intentionally superficial and discusses only the key points you need in order to properly configure SSL in WebSphere Application Server.
Public Key Cryptography is fundamentally based upon a public/private key pair. These two keys are related cryptographically. The important point is that the keys are asymmetric -- information encrypted with one key can be decrypted using the other key. The private key is, well, private. That is, when you are issued a "certificate," which sometimes includes a private key as part of the certificate creation process, you must carefully protect that private key. If you create your own public/private key pair prior to requesting that the CA sign the public key, thus creating a certificate, you still must protect that private key. Possessing the private key is proof of identity. The public key is the part of the key pair that can be shared with others.
If there were a secure way to distribute public keys to trusted parties, that would be enough. However, Public Key Cryptography takes things a step further and introduces the idea of signed public keys. A signed public key has a digital signature (quite analogous to a human signature) that states that the signer vouches for the public key. The signer is assuring that the party that possesses the private key corresponding to the signed public key is the party identified by the key. These signed public keys are called certificates. Well-known signers are called Certificate Authorities. It is also possible to sign a public key using itself; these are known as self-signed certificates. Self-signed certificates are no less secure than certificates signed by a certificate authority; they are just harder to manage, as we will see in a moment. Figure 2 shows the basic process of creating a certificate and distributing it.
Figure 2. Server certificate creation and distribution
While looking at Figure 2, note that the client must possess the certificate that signed the generated public key. This is the crucial part of trust. Since the client trusts the CA (because it has the CA certificate), it trusts certificates that the CA has signed. It is worth noting that if you were to use self-signed certificates, you would need to manually distribute the self-signed certificate to each client rather than relying on a well-known public CA certificate. This is no less secure, but if you have many clients, it is much harder to manage.
For a client to authenticate a server using certificates, the server must possess its own private key and corresponding certificate, and the client must possess the signing certificate that corresponds to the server's certificate. In order for a server to authenticate a client (often called client certificate authentication), the reverse is true: the client must possess a private key and corresponding certificate, while the server must possess the corresponding signing certificate. That is really all there is to it. Since SSL uses certificate authentication, each side of the SSL connection must possess the appropriate keys in a keystore file. Whenever you configure SSL keystores, think about the fundamental rules governing which party needs which keys. Usually, that will tell you what you need.
As we have already seen, WebSphere Application Server manages keys in keystore files. There are two types of key files: keystores and truststores. A truststore is nothing more than a keystore that, by convention, contains only public information. Thus, you should place CA certificates and other signing certificates in a truststore and private information in the keystore. However, it really makes no difference to WebSphere Application Server.
Unfortunately, there is a catch to this simple system. Most of WebSphere Application Server uses the new Java-defined keystore format known as Java Key Stores (JKS). (WebSphere Application Server SSL configurations support three modern key database formats: JKS, JCEKS, and PKCS12.) The IBM HTTP Server and the WebSphere Application Server Web server plug-in use an older key format known as the KDB format (or more correctly the CMS format). The two formats are similar in function but are incompatible in format. Thus you must be careful not to mix them up.
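The Java-side formats can all be handled through the standard `java.security.KeyStore` API, as the sketch below shows for the JKS and JCEKS types; the KDB/CMS format used by IBM HTTP Server and the plug-in cannot be read with the standard JDK providers, which is exactly why the two toolchains must not be mixed up:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

// Creates and reloads keystores in the Java-defined formats.
// "JKS" and "JCEKS" are standard JDK keystore types; "CMS"/KDB is not.
class KeystoreFormats {
    static byte[] emptyStore(String type, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance(type);
        ks.load(null, null);                 // initialize an empty keystore
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, password);             // serialize in the requested format
        return out.toByteArray();
    }

    static KeyStore reload(String type, byte[] bytes, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance(type);
        ks.load(new ByteArrayInputStream(bytes), password);
        return ks;
    }
}
```

In practice you would manage these files with iKeyman rather than code, but the API view makes the format distinction concrete.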
IBM provides a tool known as iKeyman for managing keystores. It is a very simple tool that creates keystores, generates self-signed certificates, imports and exports keys, and can generate certificate requests for a CA. This is the tool you should use when managing keystores. As of WebSphere Application Server V5.1, a single version of iKeyman supports all of the needed keystore formats. If you are using an older version of WebSphere Application Server, the iKeyman included in the WebSphere Application Server bin directory does not support the KDB format. To create KDB files, you will have to run the older iKeyman, which should be available under the GSK5 install directory that is installed implicitly when you install WebSphere Application Server. (On Windows, the GSK is in c:\program files\ibm\gsk5. On Solaris, it is in /opt/ibm/gsk5. Other platforms are similar.)
Web-based administrative interface and certificate
When security is enabled and you connect to WebSphere Application Server using the Web administrative console, your Web browser will likely warn you that the certificate is not trusted and that the hostname does not match the hostname in the certificate. You should see messages like those illustrated in Figures 3 and 4 (shown here using the Mozilla Web browser).
Figure 3. Mozilla informing the user that the certificate is not trusted
Figure 4. Mozilla informing the user that the certificate is specifying a hostname different from the one it was expecting
These messages are warnings indicating possible security problems related to the certificates that WebSphere Application Server is using. When you update the default keyring (see Hardening security), you can take steps to prevent these warnings. First, generate a certificate where the Subject name is the same as the hostname of the machine running the administrative Web application (the one server in WebSphere Application Server Base, or the deployment manager in other editions). That will take care of the second browser error message. The first message can be prevented by either buying a certificate from a well-known CA for the WebSphere Application Server's administrative console (this is probably a waste of money) or simply by accepting the certificate permanently. Keep in mind that if you ever see this message again, that is quite possibly a sign of a security breach. Thus, your system administration staff should be trained to recognize that this warning should only occur when the certificate is updated. If it occurs at other times, it is a red flag warning that the system has been compromised.
Advanced LDAP considerations
When configuring WebSphere Application Server to use LDAP, there are a number of issues to consider. First and foremost, recognize that WebSphere Application Server fully supports "standard" LDAP access. This means that in general, WebSphere Application Server can be configured to work with any LDAP server and any reasonable configuration that follows generally accepted LDAP techniques. WebSphere Application Server makes very limited use of LDAP, as mentioned earlier, and is very flexible even in that limited use. By using the LDAP configuration page and the LDAP Advanced Settings page, you can control a great deal of WebSphere Application Server's behavior. Figure 5 shows the main LDAP configuration page.
Figure 5. Main LDAP configuration page
Here are some items of interest regarding LDAP with WebSphere Application Server:
- Host: This is the single host name that WebSphere Application Server will use when connecting to LDAP. If the connection to LDAP should fail, WebSphere Application Server will reconnect to the same host. This means that WebSphere Application Server does not support replicated LDAP directories unless this replication is done transparently, possibly using a load balancer such as Network Dispatcher.
- Base Distinguished Name: This is the start of the LDAP search tree, generally something like "ou=software, o=ibm." WebSphere Application Server will find users and groups by searching under this single root. WebSphere Application Server does not support users and groups being in completely separate trees or LDAP servers.
- SSL Enabled: By specifying this and an SSL configuration that contains the signing key for the LDAP directory's certificate, WebSphere Application Server will use SSL when contacting LDAP. Since WebSphere Application Server sends passwords to LDAP as part of the ldap_bind, using SSL is very important.
After you have configured the basic settings, including specifying the LDAP directory type, you are typically done. However, if your LDAP directory is not one tested by IBM, or if you have configured your directory in some slightly unusual way, you can control how WebSphere Application Server queries your directory. To do this, click on the Advanced LDAP Settings link. Figure 6 shows what you will see.
Figure 6. Advanced LDAP settings
In Figure 6, what you see are the filters used by WebSphere Application Server when querying LDAP. These filters give you the flexibility to control WebSphere Application Server's queries. Let us take a moment to discuss these settings:
- User Filter: The first filter is used when a user types in a userid and WebSphere Application Server tries to find the user entry in LDAP. You will notice that in this example, users are identified by the uid field and are of type ePerson. You can change either to meet your needs; in some registries, you might change these values to match your directory schema.
- Group Filter: This works much like the user filter.
- User Id Map: This states that when WebSphere Application Server is looking at an LDAP entry for a user, the field that uniquely identifies the user is the uid field. Again, you can change this as needed.
- Group Id Map: This works much like the userid map.
- Group Member Id Map: This specifies how WebSphere Application Server will determine group memberships. This is a list of semicolon-separated pairs that are interpreted as either attribute:member or objectclass:attribute pairs. WebSphere Application Server can search for group membership in two ways: by searching the groups themselves to determine whether the user's DN is listed in the specified attribute on the group, or by looking for a special attribute on the user object that contains that user's memberships (this is more efficient). WebSphere Application Server considers the Id Map to be the first case if the objectclass value is the same as the objectclass specified in the group filter. Otherwise, the latter case is assumed. This example shows the latter case, where the attribute specified to the left of member is clearly not the group objectclass. Thus, the specified attribute is assumed to be on the user entry object in LDAP, and it holds the user's group memberships. In this way, WebSphere Application Server gets user group membership information from the LDAP directory by examining a single attribute. The directory can create and manage that attribute in any way it sees fit. In the figure, the value ibm-allGroups:member means that when trying to determine group membership for a user, look at the ibm-allGroups attribute on the user entry.
- Certificate Map Mode: If users are authenticating using certificates, this tells WebSphere Application Server how to match the information in the certificate to LDAP in order to find the corresponding user. The default, and most secure, method is to use an exact match on the DN in the certificate into LDAP. If that is not possible, you can specify custom attributes to match against by specifying them in the certificate filter field. (See the WebSphere Application Server Information Center for more.)
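To illustrate how these filters are applied, the sketch below expands a user filter of the form shown in Figure 6 by substituting the typed-in value for the %v token (the helper class is ours; only the filter syntax comes from the article):

```java
// Illustrative only: how a search filter such as the default user filter
// is expanded before being sent to LDAP. "%v" stands for the value the
// user typed in at the login prompt.
class LdapFilter {
    static String expand(String filterPattern, String value) {
        return filterPattern.replace("%v", value);
    }
}
```

For example, expanding `(&(uid=%v)(objectclass=ePerson))` with the userid `jdoe` produces the LDAP search filter used to locate that user's entry.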
There is no option for controlling how WebSphere Application Server validates passwords. That is because WebSphere Application Server does not query LDAP for passwords. Instead, it uses the LDAP-defined standard method known as ldap_bind. If you have configured your LDAP directory in some highly non-standard way without passwords, this obviously will not work. Keep in mind that WebSphere Application Server has no control over, or interest in, how user passwords are stored or managed in the LDAP directory. That is entirely under the control of the LDAP administrator. (For more about LDAP, see Understanding and Deploying LDAP Directory Services in Resources.)
Now that we have discussed the basics of WebSphere Application Server security, it is time to turn our attention to the real purpose of security: creating a secure environment.
The J2EE 1.3 specification and WebSphere Application Server provide a powerful infrastructure for implementing secure systems. Unfortunately, many people are not aware of all of the issues surrounding creating a secure WebSphere Application Server-based system. There are many degrees of freedom and many different sources for this information. This tends to lead people to overlook WebSphere Application Server security issues and to deploy systems that are not particularly secure. This section will attempt to summarize the key issues of greatest importance.
Security hardening is the act of configuring WebSphere Application Server, developing applications, and configuring various other related components in a way to maximize security -- in essence, to prevent or block various forms of attack. To do this effectively, it is important to consider the forms of attack.
There are four basic approaches to attacking a J2EE-based system:
- Network-based attacks
These attacks rely on low-level access to network packets and attempt to harm the system by altering this traffic or discovering information from these packets.
- Machine-based attacks
In this case, the intruder has access to a machine on which WebSphere Application Server is running. Here, our goal is to limit the ability to damage the WebSphere Application Server configuration or to see things that should not be seen.
- Application-based external attacks
In this scenario, an intruder uses application-level protocols to access the application, perhaps via a Web browser or EJB client, and uses this access to try to circumvent normal application usage and do inappropriate things. The key is that the attack occurs using the J2EE-defined APIs and protocols. The intruder is not necessarily outside the company, but rather is executing code from outside of the application itself.
- Application-based internal attacks
In this case, we are concerned with the danger of a rogue application. In this scenario, multiple applications share the same WebSphere Application Server infrastructure, and we do not completely trust each application. While perfect security is unachievable here, there are techniques that can limit how much damage each application can do.
We will not be considering one other form of technical attack: Denial of Service (DoS) attacks. While very important, this is beyond the scope of this article. Preventing DoS attacks requires very different techniques.
Total System View: the details matter
Before delving into the specific point-by-point recommendations, we want to take a moment to outline the fundamental techniques for creating secure systems. The fundamental view is to look at every system boundary or point of sharing and examine what actors have access to those boundaries or shared components. That is, given that this boundary exists (we presume reasonable trust within a subsystem), what can an intruder do to break this boundary? Or, given that something is shared, can intruders share something inappropriately? Most boundaries are obvious and physical: network connections, process-to-process communication, file systems, operating system interfaces, and so on, but some boundaries are more subtle. For example, if one application uses J2C resources within WebSphere Application Server, you must consider the possibility that some other application might try to access those same resources. This occurs because there is a system boundary between the first application and WebSphere Application Server, and a second application and WebSphere Application Server. Perhaps both applications can access this common infrastructure (in fact, they can). This is a possible case of inappropriate sharing.
The way we prevent these various forms of attack is to apply a number of well-known techniques. For lower-level network-based attacks, we apply encryption and network filtering. We essentially deny the intruder the ability to see or access things they should not see. We also rely on operating systems to provide mechanisms to protect operating system resources from abuse. For example, we would not want ordinary user-level code to be able to gain access to the system bus and directly read internal communication. We also leverage the fact that most modern operating systems possess fairly robust protections for system APIs. (In a WebSphere Application Server environment, these operating system protections are of limited value since they are based on process identity, which is a very coarse-grained concept when one considers application servers servicing requests from thousands of users at once.) At a high level, we apply authentication and authorization rigorously. Every API, every method, every resource potentially needs to require some form of authorization. That is, access to these things must be restricted based upon need. And, of course, authorization is of little value without robust authentication. Authentication is concerned with knowing the identity of the caller. We add the word robust because authentication that can be easily forged is of little value.
Where appropriate authentication and authorization are not available, we frankly have to resort to clever design and procedures to prevent potential problems. This is how we protect J2C resources. Since WebSphere Application Server does not provide for authorization of access to J2C resources, we instead apply other techniques to limit (based on configuration) the ability of applications to inappropriately reference J2C resources.
As you might imagine, examining all of the system boundaries and shared components is a difficult task; in fact, securing a system forces you to think deeply about its complexity. Perhaps the hardest truth about security is that creating a secure system works against abstraction. That is, one of the core principles of good abstraction is the hiding of concerns from higher-level components, which is a highly desirable and good thing. Unfortunately, intruders are not kind to us. They do not care about our abstractions or our good designs. Their goal is to break our systems any way they can, and in doing so, they will look for holes in our wonderful designs. Thus, in order to validate a system's security, you have to think about it at every level of abstraction: at the highest architectural level, but also at the lowest level of detail. Rigorous reviews of everything are required.
The smallest mistake can undermine the integrity of an entire system. This is best exemplified by the technique of taking control of C/C++-based systems by using buffer overrun techniques. Essentially, an intruder passes in a string that is too large for some existing buffer. The extra information then overlays a part of the running program and causes the run time to execute instructions that it should not be executing. With care, one could cause a program to do almost anything. As a security architect, to even identify this attack requires a deep understanding of how the C/C++ run time manages memory and executes programs. (For details, see Buffer Overflows -- What Are They and What Can I Do About Them?.) You also have to review every line of code to find this particular hole, assuming you understood that it existed. Today, we know about the attack, yet it continues to be successful because individual programmers make very small, bad decisions that compromise entire systems. Thankfully, this particular attack appears to not be feasible in Java. However, do not think for an instant that there are not other small errors out there that can lead to compromise. Think hard about security; it is hard.
We will now identify the various known steps one should take to protect the WebSphere Application Server infrastructure and applications from these four forms of attack. Ideally, we would organize the information into four buckets, one for each form of attack. Unfortunately, attacks do not neatly divide along those lines. Several different techniques of protection help with multiple forms of attack, and sometimes a single attack may leverage multiple forms of intrusion in order to achieve the end goal. For example, in the simplest case, network sniffing can be used to obtain passwords, and those passwords can then be used to mount an application-level attack. Instead, we will organize the hardening techniques into a logical structure based on when the activity occurs, or the role of the person concerned with these issues:
- Infrastructure configuration

Actions that can be taken to configure the WebSphere Application Server infrastructure for maximum security. These are typically done once, when the infrastructure is built out, and involve only the system administrators.
- Application configuration
Actions that can be taken by application developers and administrators and are visible during the deployment process. Essentially, these are application design and implementation decisions that are visible to the WebSphere Application Server administrator and are verifiable (possibly with some difficulty) as part of the deployment process. This section will have a large number of techniques, further reinforcing the point that security is not a bolt-on; security is the responsibility of every person involved in the application design, development, and deployment.
- Application design and implementation
Actions that are taken by developers and designers during development that are crucial to security but may be difficult to detect as part of the deployment process.
Within each section, we order the various techniques by priority. To help you tie these techniques back to the classes of attack just presented, we will use the following graphic for each technique:
The four squares will be shaded as appropriate to represent the type of attack this technique helps to prevent (Network-based, Machine-based, External application-based, Internal application-based). Keep in mind that internal applications can always take advantage of "external" methods of attack. Thus, we do not explicitly list "I" when "E" is already present. We do, however, list "I" when the vulnerability is uniquely exploitable by internal applications.
Infrastructure-based preventative measures
When securing the infrastructure, we focus first on encrypting the WebSphere Application Server traffic among the various components, as well as ensuring that the WebSphere Application Server administrative function is secured. Before going into details, it is useful to review a standard WebSphere Application Server topology and see all of the network links and protocols. As someone concerned about security, you need to know about all of these links and focus on securing them. These links represent the coarsest-grained system boundaries we mentioned earlier. Refer to Figure 7.
Figure 7. Network link picture
The letters on the links in Figure 7 indicate the protocols used across those communication links. For each protocol, we list the usage and also provide some information on firewalls:
- H = HTTP traffic
- Usage: Browser to Web server, Web server to app server, and admin Web client
- Firewall friendly
- W = WebSphere Application Server internal
- Usage: Admin clients and WebSphere Application Server internal server admin traffic. WebSphere Application Server internal communication uses one of several protocols:
- RMI/IIOP or SOAP/HTTP: Admin client protocol is configurable.
- File transfer service (dmgr to node agent): Uses HTTP(S).
- DRS (memory to memory replication): Uses private protocol.
- Firewall friendly when using SOAP/HTTP
- I = RMI/IIOP communication
- Usage: EJB clients (standalone and Web container)
- Generally firewall hostile because of dynamic ports and embedded IP addresses (which can interfere with firewalls that perform Network Address Translation)
- M = MQ protocol
- Usage: MQ clients (true clients and application servers)
- Protocol: Proprietary
- Firewall feasible (there are a number of ports to consider). Refer to WebSphere MQ SupportPac MA86.
- L = LDAP communication
- Usage: WebSphere Application Server verification of user information in registry
- Protocol: TCP stream formatted as defined in LDAP RFC
- Firewall friendly
- J = JDBC database communication via vendor JDBC drivers
- Usage: Application JDBC access and WebSphere Application Server session database access
- Protocol: Network protocol is proprietary to each database.
- Firewall aspects depend on database (generally firewall friendly)
- S = SOAP
- Usage: SOAP clients
- Protocol: Generally SOAP/HTTP
- Firewall friendly when SOAP/HTTP
In the remainder of this section, we will discuss the steps required to secure the infrastructure.
1. Use HTTPS from the browser
If your site performs any authentication or has any activities that should be protected, use HTTPS from the browser to the Web server. If HTTPS is not used, information such as passwords, user activities, WebSphere Application Server session IDs, and LTPA security tokens can potentially be seen by intruders. (While these tokens can be safely transmitted over an unencrypted channel, for maximum security, it is best that they be protected.)
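Two concrete measures follow from this recommendation: answer plain-HTTP requests with a redirect to HTTPS, and mark session cookies so browsers will only transmit them over TLS. The sketch below is illustrative logic only (the function and cookie names are invented for the example), not a WebSphere Application Server API.

```python
def https_redirect(url):
    """If a request arrives over http://, answer with a redirect to the
    https:// equivalent instead of serving the page."""
    if url.startswith("http://"):
        return "https://" + url[len("http://"):]
    return None  # already secure; no redirect needed

def secure_cookie(name, value):
    """Emit a session cookie that browsers will only send over TLS
    and that page scripts cannot read."""
    return f"{name}={value}; Secure; HttpOnly; Path=/"

print(https_redirect("http://shop.example.com/cart"))
print(secure_cookie("JSESSIONID", "0000abcd"))
```

The Secure attribute is what keeps a session ID issued over HTTPS from later leaking over an unencrypted request to the same host.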
2. Put the Web server in the DMZ without WebSphere Application Server
One of the key principles of a DMZ (demilitarized zone) is to put as little function as possible in it to reduce the risks associated with an intruder breaking through the outer firewall. (In a typical DMZ configuration, there is an outer firewall, the DMZ network containing as little as possible, and an inner firewall protecting the production network.) Thus, it is normal to place the Web server in the DMZ and the WebSphere application servers inside the inner firewall. This is ideal, as the Web server machine then can have a very simple configuration and require very little software. Additionally, the only port that must be opened on the inner firewall is the HTTP(S) port for the target application servers. These steps make the DMZ a very hostile place for an attacker. If you place WebSphere Application Server on a machine in the DMZ, far more software must be installed on those machines (for example, a JDK, X libraries, etc.), and more ports must be opened on the inner firewall so WebSphere Application Server can access the production network. This largely undermines the value of the DMZ.
3. Separate your production network from your intranet
Most organizations today understand the value of a DMZ that separates the outsiders on the Internet from an intranet. However, far too many organizations fail to realize that many intruders are on the inside. For a large corporation, there are literally thousands of people, many of whom are not employees, who have access to the internal network. All of these people are possible intruders and, since they are on the inside, they have better access to the network: it can often be as simple as plugging a laptop into a network connection. Just as you protect yourself against the large untrusted Internet, you should also protect your production systems from a large and untrustworthy intranet. Separate your production networks from your internal network using firewalls. These firewalls, while likely more permissive than the Internet-facing firewalls, can still block numerous forms of attack. By applying this and the previous steps, you should end up with a firewall topology like the one shown in Figure 8.
Figure 8. Recommended firewall configuration
4. Enable global security
By default, WebSphere Application Server uses no security. This means that all network links are insecure and that any user with access to the deployment manager (HTTP to the Web admin console, or SOAP/IIOP to the JMX management ports) can use the WebSphere Application Server administrative tools to perform any administrative operation, up to and including removing existing servers. Needless to say, this presents a great security risk.
Therefore, at a minimum, you should enable WebSphere Application Server security in a production environment to prevent these trivial forms of attack. Once WebSphere Application Server security is enabled globally, the WebSphere Application Server internal links between the deployment manager and the application servers and the traffic from the administrative clients (Web and command line) to the deployment manager are encrypted and authenticated (refer to Figure 7). Among other things, this means that administrators will be required to authenticate when running the administrative tools.
Be aware that enabling global security does not encrypt all network links, but rather a number of key internal links. We will discuss securing additional network links later. We will also discuss securing applications by leveraging WebSphere Application Server security now that it has been enabled.
5. Change the default key file
As stated earlier, enabling WebSphere Application Server security causes most internal traffic to use SSL to protect it from various forms of network attack. However, in order to establish an SSL connection, the server must possess a certificate and corresponding private key. To simplify the initial installation process, WebSphere Application Server is delivered with a sample key file containing a sample private key. This "private" key is included in every copy of WebSphere Application Server sold; as such, it is not very private. The name of the key file, DummyServerKeyFile, makes this clear.
To protect your environment, you should create your own private key and certificate for WebSphere Application Server communication. All of this is done using the iKeyman tool. Refer to the IBM WebSphere Security Handbook and the WebSphere Application Server Information Center in Resources for details on how to do this.
While we are on certificates, we will digress briefly on a crucial point: certificates expire. Should the WebSphere Application Server certificates expire, WebSphere Application Server will stop working. No communication will be possible. Thus, when you create new certificates as we recommend, make sure that you mark the expiration date on a calendar. If you use the default keys (against our advice), remember that they expire as well. You must actively plan for certificate expiration and obtain or generate new certificates prior to their expiration.
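Beyond marking a calendar, expiration checks are easy to automate. The sketch below uses the Python standard library's `ssl.cert_time_to_seconds` to parse a certificate's notAfter timestamp (in the format returned by `ssl.getpeercert()`) and flags anything inside a warning window; the 30-day window and the sample date are illustrative.

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Given a certificate's notAfter string (e.g. 'Jun  1 12:00:00 2025
    GMT'), return the number of whole days until it expires."""
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                     tz=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# Flag certificates that expire within a 30-day warning window.
remaining = days_until_expiry("Jun  1 12:00:00 2025 GMT",
                              now=datetime(2025, 5, 12, tzinfo=timezone.utc))
print(remaining, "days left;", "RENEW NOW" if remaining < 30 else "ok")
```

Running such a check regularly against every keystore in the cell turns a sudden outage into a routine renewal task.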
As Figure 7 shows, a typical WebSphere Application Server configuration has a number of network links. It is important to protect traffic on each of those links as much as possible to stop intruders. As should be apparent, various pieces of confidential information may be transmitted by an application, and it should be protected. We have already discussed the securing of many of the links by enabling WebSphere Application Server security, as well as securing the Web browser to Web server link. We will now discuss the remaining links.
6. Use SSL for Web server to WebSphere Application Server HTTP link
The WebSphere Application Server Web server plug-in forwards requests from the Web server to the target WebSphere application server. With WebSphere Application Server V5.0, if the traffic to the Web server is over HTTPS, then the plug-in will automatically use HTTPS when forwarding the request to an application server, thus protecting its confidentiality.
Further, with some care, you can configure the WebSphere application server (which contains a small embedded HTTP listener) to only accept requests from known Web servers. This prevents various sneak attacks that bypass any security that might be in front of or in the Web server. To do this, you configure the application server Web container SSL configuration to use client authentication. Once you have ensured that client authentication is in use, you need to ensure that only trusted Web servers can contact the Web container by limiting the parties that have access to the appropriate keys:
- Using iKeyman, create two keyrings, one for the Web container and one for the Web server plug-in.
- Delete all of the existing signing certificates from each keyring. At this point, neither keyring can be used to validate any certificates. This is intentional.
- In each keyring, create a self-signed certificate and export just the certificate (not the private key).
- Import the certificate that was exported from the other keyring into each keyring. Now each keyring contains only a single signing certificate. This means that each keyring can be used to verify exactly one certificate: the self-signed certificate created for the peer.
- Install the newly created keyrings into the Web container and Web server plug-in.
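The trust model that results from this procedure can be sketched as follows: each side's keyring holds exactly one trusted signer, so it accepts exactly one peer certificate and rejects everything else, including certificates signed by well-known CAs. The certificate names below are illustrative.

```python
def make_keyring(trusted_cert):
    """A keyring stripped down to a single trusted signing certificate."""
    return {trusted_cert}

def accepts(keyring, presented_cert):
    """Self-signed case: the presented certificate validates only if it
    is itself the trusted signer in the keyring."""
    return presented_cert in keyring

web_container_ring = make_keyring("plugin-selfsigned-cert")
plugin_ring = make_keyring("webcontainer-selfsigned-cert")

print(accepts(web_container_ring, "plugin-selfsigned-cert"))  # True
print(accepts(web_container_ring, "verisign-signed-cert"))    # False
```

This is why deleting the default signers in the second step matters: a keyring that still trusts public CAs would accept any certificate those CAs have signed, not just the intended peer.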
7. Encrypt WebSphere Application Server to LDAP link
When WebSphere Application Server is using an LDAP registry, WebSphere Application Server verifies a user's password using the standard ldap_bind. This requires that WebSphere Application Server send the user's password to the LDAP server. If that request is not protected, a hacker could use a network sniffer to steal the passwords of users authenticating to WebSphere Application Server. Most LDAP directories support LDAP over SSL, and WebSphere Application Server can be configured to use this. If you use a custom registry, you will obviously want to secure this traffic using whatever mechanism is available.
8. Protect WebSphere Application Server to database link
Just as with any other network link, confidential information may be written to or read from the database. Although most databases support some form of authentication, not all support encrypting JDBC traffic (refer to your database vendor's documentation) between the client (in this case, WebSphere Application Server applications) and the database. You must recognize this weakness and take appropriate steps. Some form of network-level encryption, such as a Virtual Private Network (VPN), perhaps using IP Security Protocol (IPSEC), is the most obvious solution, although there are other reasonable choices. If you can place your database near WebSphere Application Server (in the network sense), various forms of firewalls and simple routing tricks can greatly limit access to the network traffic going to the database. The key here is to identify this risk and then address it as appropriate.
9. Encrypt Distributed Replication Service (DRS) network links
Do not forget to turn on DRS encryption as a separate manual step. DRS traffic is not encrypted by default, even when global security is enabled.
10. Configure and use Trust Association Interceptors
TAIs are often used to enable WebSphere Application Server to recognize existing authentication information from a Web SSO proxy server, such as Tivoli Access Manager (TAM). Generally, this is fine. However, be careful when developing, selecting, and configuring TAIs. A TAI extends the WebSphere Application Server trust domain. WebSphere Application Server is now trusting the TAI and whatever the TAI trusts. If the TAI is improperly developed or configured, it is possible to completely compromise the security of WebSphere Application Server. For example, IBM provides a secure TAI for integrating TAM with WebSphere Application Server. When configured properly, this is a highly secure setup. However, there is a property known as com.ibm.Websphere.security.WebSEAL.mutualSSL that indicates to the TAM TAI that the link from the Web server to the application server is securely authenticated, as described earlier. When used properly, that's fine. However, if you set this property to true but do not ensure the Web server to application server link is secure, then you have opened WebSphere Application Server up to trivial forms of attack, since the TAI does not validate the connection.
If you custom-develop a TAI, be sure that the TAI carefully validates the parameters passed in the request and that the validation is done in a secure way. We have seen TAIs that perform foolish things such as verifying the IP address in the HTTP headers, which is useless since HTTP headers can be forged.
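The difference between a forgeable check and a verifiable one can be sketched with Python's standard hmac module. Here the proxy is assumed to share a secret with the application server and to sign the asserted user; the signature header name and the secret are invented for the example (iv-user is the WebSEAL convention for the asserted-user header).

```python
import hmac
import hashlib

SHARED_SECRET = b"configured-out-of-band"  # illustrative; never hard-code

def bad_tai(headers):
    """Forgeable: trusts a header any client can set."""
    return headers.get("iv-user")

def good_tai(headers):
    """Accepts the asserted user only if its HMAC signature verifies,
    which requires knowledge of the shared secret."""
    user = headers.get("iv-user")
    sig = headers.get("iv-user-sig")
    if not user or not sig:
        return None
    expected = hmac.new(SHARED_SECRET, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(expected, sig) else None

forged = {"iv-user": "admin"}
print(bad_tai(forged))    # 'admin' -- the attacker wins
print(good_tai(forged))   # None -- rejected
```

Mutually authenticated SSL between the proxy and the application server achieves the same end by different means: it guarantees the headers could only have come from the trusted proxy in the first place.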
11. Create separate administrative user IDs
When WebSphere Application Server security is configured, a single security ID is initially configured as the Security Server ID. This ID is effectively the equivalent of root in WebSphere Application Server and can perform any WebSphere Application Server administrative operation. Because of the importance of this ID, it is best to not widely share the password.
As with most systems, WebSphere Application Server does allow multiple principals to act as administrators. Simply use the WebSphere Application Server administrative application and go to the System Administration/Console Users (or Groups) section to specify additional users (or groups) that should be granted administrative authority. When you do this, each individual person can authenticate as himself or herself when administering WebSphere Application Server. (All administrators have the same authority throughout the cell. WebSphere Application Server does not support instance-based administrative authorization.) As of WebSphere Application Server V5.0.2, all administrative actions that result in changes to the configuration of WebSphere Application Server are audited by the deployment manager, including the identity of the principal that made the change. Obviously, these audit records are more useful if each administrator has a separate identity. Audit records are treated as serious messages and sent, by default, to SystemOut.log from the deployment manager.
Giving individual administrators their own separate administrative access can be particularly handy in an environment where central administrators administer multiple WebSphere Application Server cells. You can configure all of these WebSphere Application Server cells to share a common registry, and thus the administrators can use the same ID and password to administer each cell, while each cell has its own local "root" ID and password.
12. Take advantage of administrative roles
WebSphere Application Server V5 provides four administrative roles: Monitor, Operator, Configurator, and Administrator.
These roles make it possible to give individuals (and automated systems) access appropriate to their level of need. The most interesting role is the Monitor role. By giving a user or system this access level, you are giving only the ability to monitor the system state. The state cannot be changed, nor can the configuration be altered. Take advantage of those roles whenever possible. For example, if you develop monitoring scripts that check for system health and have to store the userid and password locally with the script, use an ID with the monitor role. Even if the ID is compromised, little serious harm can result.
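The principle at work is least privilege: map each role to the operations it may perform, and give automation the weakest role that still suffices. The role names below match WebSphere Application Server V5; the operation names are illustrative.

```python
ROLE_PERMISSIONS = {
    "monitor":       {"read_state"},
    "operator":      {"read_state", "start_stop"},
    "configurator":  {"read_state", "change_config"},
    "administrator": {"read_state", "start_stop", "change_config",
                      "manage_security"},
}

def can(role, operation):
    """True if the given administrative role permits the operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())

# A health-check script with a locally stored password should use Monitor:
print(can("monitor", "read_state"))     # True
print(can("monitor", "change_config"))  # False
```

If the monitoring ID is ever compromised, the attacker gains visibility but no ability to alter the cell.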
13. Do not run samples in production
WebSphere Application Server ships with several excellent examples to demonstrate various parts of WebSphere Application Server. These samples are not intended for use in a production environment. Do not run them there, as they create significant security risks. In particular, the showCfg and snoop servlets can provide an outsider with tremendous amounts of information about your system. This is precisely the type of information you do not want to give a potential intruder. This is easily addressed by not running server1 (which contains the samples) in production. If you are using WebSphere Application Server Base, you will actually want to remove the examples from server1.
14. Enable Java 2 security
At this point, we have done a pretty good job of protecting the WebSphere Application Server infrastructure from external attacks. We are now encrypting network traffic and thus preventing snooping and traffic alteration. We are also authorizing all administrative traffic, which will prevent external intruders from damaging the infrastructure. However, the infrastructure is still fairly vulnerable to attack from applications within the cell. As discussed earlier, all application servers in WebSphere Application Server V5 contain the WebSphere Application Server administrative infrastructure and, therefore, the APIs for performing most administrative operations. An application programmer that learns the APIs can thus write an application that can call any of these APIs and potentially cause serious problems.
WebSphere Application Server V5 includes support for Java 2 security as provided by the standard JDK. IBM has enhanced the Java 2 support to enforce the J2EE specifications, as well as to protect the WebSphere Application Server internal APIs from unauthorized access. Simply by enabling Java 2 security, these rules are automatically enforced. With Java 2 security enabled, substantial additional protections are added to the run time to prevent illegal application access. We will discuss Java 2 security restrictions more a bit later.
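With Java 2 security on, an application runs with only the permissions it has been granted. As a hedged sketch, an application-scoped policy file might grant something like the following; the specific permissions, host name, and file placement here are purely illustrative, and should be checked against the Information Center before use.

```
// Illustrative was.policy fragment (placed in the EAR's META-INF):
// the application receives only these permissions, not AllPermission.
grant codeBase "file:${application}" {
  permission java.util.PropertyPermission "*", "read";
  permission java.net.SocketPermission "dbhost.example.com", "connect,resolve";
};
```

Anything the application attempts beyond such grants, including calls into protected internal APIs, results in a security exception rather than silent success.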
Be aware that Java 2 security is not a panacea. Neither WebSphere Application Server nor any J2EE application server to our knowledge is built as a hardened, compartmentalized security system. Thus, while Java 2 security can greatly strengthen the security of WebSphere Application Server, do not assume that this will provide complete isolation and protection of applications. If you cannot trust the code you are running on the WebSphere application servers, you should be very cautious.
Concerns about "rogue" programmers are better addressed via secure configuration management systems that track every code change and rigorous code inspections to validate that code development meets your security guidelines.
15. Choose an appropriate WebSphere Application Server process identity
The WebSphere Application Server processes run on an operating system and must therefore run under some operating system identity. There are three ways to run WebSphere Application Server with respect to operating system identities:
- Run everything as root.
- Run everything as a single user identity, such as "was."
- Run the node agents as root and individual application servers under their own identities.
IBM tests for and fully supports the first two approaches. The third approach may seem tempting, since you can then leverage operating system permissions, but it is not very effective in practice for the following reasons:
- It is very difficult to configure, and there are no documented procedures. Many WebSphere Application Server processes need read access to numerous files and write access to the log and transaction directories.
- By running the node agent as root, you effectively give the WebSphere Application Server administrator root authority.
- The primary value of this approach is to control file system access. This can be achieved just as well using Java 2 permissions.
- This approach creates the false impression that applications are isolated from each other. They are not. The WebSphere Application Server internal security model is based on J2EE and Java 2 security and is unaffected by operating system permissions. Thus, if you choose this approach to protect yourself from "rogue" applications, your approach is misguided.
The first approach is obviously undesirable, since as a general best practice you should avoid running any process as root. This leaves the second approach, which is fully supported, easy to implement, and provides good security when used in conjunction with Java 2 security. We therefore recommend that approach.
Obviously, once you have chosen a WebSphere Application Server process identity, you should limit file system access to WebSphere Application Server's files by leveraging operating system file permissions. WebSphere Application Server, like any complex system, uses and maintains a great deal of sensitive information. In general, no one should have read or write access to most of the WebSphere Application Server information. (Do not take this too far. We have seen far too many cases where during development, developers are not allowed to even see the WebSphere Application Server log files. Such paranoia is unwarranted. During development, maximum security is not productive. During production, you should lock down WebSphere Application Server as much as possible. During development, be more lenient.) In particular, the WebSphere Application Server configuration files (<root>/config) contain configuration information as well as passwords.
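Checks like this are easy to automate. The sketch below flags any file that is readable or writable by "other" using the Python standard library; in a real cell you would point it at the configuration tree rather than the temporary file used here for demonstration.

```python
import os
import stat
import tempfile

def world_accessible(path):
    """True if the file is readable or writable by users outside the
    owner and group -- usually wrong for configuration files."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IROTH | stat.S_IWOTH))

# Demonstrate with a temporary file instead of a real config tree.
with tempfile.NamedTemporaryFile(delete=False) as f:
    temp = f.name
os.chmod(temp, 0o640)
print(world_accessible(temp))  # False: owner/group only
os.chmod(temp, 0o644)
print(world_accessible(temp))  # True: world-readable
os.unlink(temp)
```

Run under the WebSphere Application Server process identity chosen above, a periodic scan like this catches permissions that drift open over time.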
16. Protect private keys
WebSphere Application Server maintains several sets of private keys. The two most important examples include the primary keystore for internal communication and the keystore used for communication between the Web server and the application server. These private keys should be kept private and not shared. Since they are stored on computer file systems, those file systems must be carefully protected, as mentioned above. However, also be careful to avoid incidental sharing. For example, do not use the same keys in production as in other environments. Many people will have access to development and test machines and their private keys. Guard the production keys carefully.
17. Never set Web server doc root to WAR
WAR files contain application code and lots of sensitive information. Only some of that information is Web-servable content, and so it is inappropriate to set the Web server document root to the WAR root. If you do this, the Web server will serve up all the content of the WAR without interpretation. This will result in code, raw JSPs, and more being served up to end users.
18. Certificates are not a panacea
When using client certificate authentication, realize that the Web server is now part of your trust domain. Compromise of the Web server will compromise WebSphere Application Server security completely. Further, when you are using client certificate authentication, since WebSphere Application Server now completely trusts the Web server, you should configure authenticated HTTPS between the Web server and the application server. Otherwise it is possible to spoof users by bypassing the Web server.
19. Keep up to date with patches and fixes
As with any complex product, IBM occasionally finds and fixes security bugs in WebSphere Application Server, IBM HTTP Server, and other products. It is crucial that you keep up to date on these fixes. At a minimum, you should make an effort to use recent PTF levels that usually include all of the recent security fixes. In addition, it is advisable that you subscribe to support bulletins for the products you use. Those bulletins often contain notices for recently discovered security bugs and the fixes. You can be certain that potential intruders learn of those security holes quickly. The sooner you act, the better.
Application-based preventative measures: Configuration
So far, we have focused on the basic steps that a WebSphere Application Server architect and the administration team can take to ensure that they create a secure WebSphere Application Server infrastructure. That is obviously an important step, but it is not sufficient. Now that the infrastructure has been configured to be secure, we must examine things that applications need to do in order to be secure. Obviously, applications will need to take advantage of the infrastructure provided by WebSphere Application Server, but there are also numerous other actions that application developers need to take. Many of those issues are detailed next.
20. Carefully verify that every Servlet alias is secure
WebSphere Application Server secures Servlets by URL. Each URL that is to be secured must be specified in the web.xml file describing the application. If a Servlet has more than one alias (that is, multiple URLs access the same Servlet class) or there are many Servlets, it is easy to accidentally forget to secure an alias. Be cautious. Since WebSphere Application Server secures URLs, not the underlying classes, if just one Servlet URL is insecure, an intruder might be able to bypass your security. To alleviate this, use wildcards whenever possible to secure Servlets. If that is not appropriate, carefully double-check the web.xml file before deployment.
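As a sketch, a single wildcard constraint in web.xml can cover every alias under a common prefix; the URL pattern and role name below are hypothetical:

```xml
<!-- Hypothetical web.xml fragment: one wildcard pattern protects
     every alias under /app/, so a forgotten second alias for the
     same Servlet class is still covered. -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>AllAppContent</web-resource-name>
    <url-pattern>/app/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>appuser</role-name>
  </auth-constraint>
</security-constraint>
```

A wildcard constraint like this is much easier to audit than a long list of per-alias patterns.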
The alias problem is further aggravated by the feature known as "serve servlets by classname", which brings us to our next recommendation.
21. Do not serve Servlets by classname
Servlets can be served by classname or via a normal URL alias. Normally, applications choose the latter; that is, developers define a precise mapping from each URL to each Servlet class in the web.xml file, either by hand or by using one of the various WebSphere Application Server development tools.
However, WebSphere Application Server also lets you serve Servlets by classname. Instead of defining a mapping for each Servlet, a single generic URL (such as /servlet) serves all Servlets. The component of the path after the base is assumed by WebSphere Application Server to be the classname for the Servlet. For example, /servlet/com.ibm.sample.MyServlet refers to the Servlet class com.ibm.sample.MyServlet. Serving Servlets by classname is accomplished by setting the serveServletsByClassnameEnabled property to true in the ibm-web-ext.xmi file, or by using the ASTK and checking serve servlets by classname in the WAR editor. Do not enable this WebSphere Application Server feature. It makes it possible for anyone who knows the name of any Servlet to invoke it directly. Even if your Servlet URLs are secured, an attacker may be able to bypass the normal WebSphere Application Server URL-based security. Further, depending on the classloader structure, an attacker may be able to invoke Servlets outside of your Web application.
22. Do not place sensitive information in WAR root
WAR files contain Web-servable content. The WebSphere Application Server Web container will serve HTML and JSP files found in the root of the WAR file. This is fine as long as you place only servable content in the root. Thus, you should never place content that should not be shown to end users in the root of the WAR. For example, do not put property files, class files, or other important information there. If you must place information in the WAR, place it within the WEB-INF directory, as allowed for in the Servlet specification. Information there is never served by the Web container.
23. Consider disabling file serving and directory browsing
You can further limit the risk of inappropriate serving of content by disabling file serving and directory browsing. Obviously, if the WAR contains servable static content, file serving will have to be enabled.
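For reference, all three of these behaviors can be controlled in the Web module's extension descriptor. The fragment below is a sketch in the WebSphere 5.x ibm-web-ext.xmi style (namespace declarations and other settings omitted); verify the attribute names against your release:

```xml
<!-- Sketch of ibm-web-ext.xmi settings (WebSphere 5.x style).
     Leave file serving disabled unless the WAR must serve
     static content. -->
<webappext:WebAppExtension xmi:id="WebAppExtension_1"
    fileServingEnabled="false"
    directoryBrowsingEnabled="false"
    serveServletsByClassnameEnabled="false"/>
```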
24. Use container managed aliases on J2EE resources
Any J2EE application that runs within the cell can access any J2EE resource. This is because the resources have JNDI names that can be looked up by any application; there is no authorization on resource access. Thus, if application A defines an enterprise database as a data source, application B in the same cell may be able to access that database.
When an application tries to access a resource by calling getConnection() on the resource factory (such as a data source or a JMS connection factory), WebSphere Application Server will automatically provide authentication information to the underlying resource if it is available. The decision of what authentication information to provide depends upon the authentication mode and the available J2C authentication aliases. The details are quite complex but, in brief, any application can look up any resource in the JNDI namespace. When this is done, the authentication mode of "application" is used implicitly. This in turn means that WebSphere Application Server will use a component authentication alias if one is available. Thus, any resource defined with a component alias is accessible to any application in the cell.
On the other hand, if only a container alias is defined on a resource, a rogue application will not be able to access it: rogue applications can only steal access to resources via global JNDI lookups, which always use component aliases.
If you choose to use this approach, define all resources with container managed aliases, then require that applications use local references to access the resource and specify container managed authentication on the reference as part of the development process. Some pictures may help to make this clear. Figure 9 shows the WebSphere Studio reference editor in which we are specifying container managed authentication on a database reference.
Figure 9. Database resource reference using container managed authentication
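The same setting can be expressed directly in the deployment descriptor. The fragment below is a hypothetical web.xml resource reference (the reference name is invented); with res-auth set to Container, code that looks up java:comp/env/jdbc/AccountDB gets a connection using the container-managed alias rather than credentials of its own choosing:

```xml
<resource-ref>
  <res-ref-name>jdbc/AccountDB</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <!-- Container: the container-managed authentication alias is used.
       Application would let the component supply credentials itself. -->
  <res-auth>Container</res-auth>
</resource-ref>
```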
25. Do not define a default userid and password on a data source
A corollary of the previous item is that you should not define a default userid and password on a data source. If you do so, then any application within the cell can look up the resource and then implicitly use the provided userid and password. Instead, always specify only container managed aliases.
26. Be cautious with J2C resources
The careful reader may realize that some resources do not support container managed aliases. Perhaps they only support a default userid and password being provided in the resource definition. If this is the case, use great caution and, if at all possible, do not supply the default userid and password. In many cases, there will be other programmatic ways of providing authentication information.
27. Configure Java 2 security properly
As mentioned earlier, Java 2 security provides a powerful way to limit applications and prevent many forms of illegal access. In addition to preventing illegal access to WebSphere Application Server APIs, Java 2 security also limits file system access, which is crucial in a shared environment.
WebSphere Application Server limits applications to a very small set of "safe" permissions by default. If an application needs more permissions, it must declare the requested permissions in the was.policy file contained within the EAR. When the application is deployed, WebSphere Application Server will read the was.policy file and add those permissions to the standard set. As should be obvious, this is a potential security hole. Fortunately, the WebSphere Application Server admin tools warn the administrator when applications request additional permissions. Our advice: carefully review the requested permissions. If any are unexpected (the expected set should be in a carefully reviewed delivery document), reject the application. There should be a formal process, including a security review, that determines what permissions an application will be allowed.
The process of review and verification on install can be tedious, and there is no easy way to avoid it. However, in many environments most applications need a common set of additional permissions. In that case, the infrastructure team can place the default permissions for all applications on a node in the app.policy file; then only applications that need unusual permissions require manual verification.
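To illustrate what reviewers should be looking at, here is a sketch of a was.policy grant in standard Java 2 policy grammar. The ${application} codebase symbol follows the WebSphere conventions, while the specific permissions and host name are invented for this example:

```
grant codeBase "file:${application}" {
  // Allow the application to read and write its own temp area only.
  permission java.io.FilePermission "${user.install.root}${/}temp${/}-", "read,write";
  // Allow outbound connections to a single back-end host.
  permission java.net.SocketPermission "backend.example.com", "connect";
};
```

A grant that instead requested something like java.security.AllPermission should set off alarm bells during the review.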
Application-based preventative measures: Design and implementation
Here, we turn our attention to the actions that application developers and designers must take in order to build a secure application. These steps are crucial and, sadly, often overlooked.
28. Use WebSphere Application Server security to secure your applications
Usually, application teams recognize that they need some amount of security in their application. This is often a business requirement. Unfortunately, many teams develop their own security infrastructure. While it is possible to do this well, it is very difficult, and most teams do not succeed. Instead, there is the illusion of strong security, but in fact the system security is quite weak. Security is simply a difficult and complex problem. There are subtle issues of cryptography, replay attacks, and various other forms of attack that are easily overlooked. The message here is that WebSphere Application Server security should be used unless it truly does not meet your needs. And this is rarely the case.
Perhaps the most common complaint about the J2EE-defined declarative security model is that it is not sufficiently granular. You can only secure at the method level of an EJB or Servlet, not at the instance level. (In this context, a method on a Servlet is one of the HTTP methods: GET, POST, PUT, and so on.) For example, all bank accounts might have the same declared security restrictions, even though you would prefer that certain users have special permissions on their own accounts.
This problem is addressed by the J2EE security APIs (isCallerInRole and getCallerPrincipal). By using these APIs, applications can develop their own powerful and flexible authorization rules but still drive those rules from information that is known to be accurate: security attributes from the WebSphere Application Server runtime.
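The pattern can be sketched in plain Java. The SecurityContext interface below is a hypothetical stand-in for what the container actually provides (EJBContext.getCallerPrincipal/isCallerInRole, or HttpServletRequest.getUserPrincipal/isUserInRole in a Servlet), and the account-ownership rule is invented for illustration:

```java
import java.security.Principal;

// Hypothetical stand-in for the container's security view.
interface SecurityContext {
    Principal getCallerPrincipal();
    boolean isCallerInRole(String role);
}

// Minimal domain object for the example.
class Account {
    final String owner;
    Account(String owner) { this.owner = owner; }
}

public class AccountGuard {
    // Instance-level rule layered on top of declarative security:
    // tellers may view any account; other callers only their own.
    public static boolean canView(SecurityContext ctx, Account account) {
        if (ctx.isCallerInRole("teller")) {
            return true;
        }
        return account.owner.equals(ctx.getCallerPrincipal().getName());
    }
}
```

The key point is that the identity and role information driving the rule comes from the container, not from a token the application invented itself.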
An example of weak security:
Applications that do not use WebSphere Application Server security tend to create their own security tokens and pass them within the application. These tokens typically contain the user's name and some security attributes, such as their group memberships. It is common for these security tokens to have no cryptographically verifiable information. The presumption is that security decisions can be made based on the information in these tokens. This is false. The tokens simply assert user privileges. The problem here is that any Java program can forge one of these security objects and then possibly sneak into the system through a back door. The best example of this is when the application creates these tokens in the Servlet layer and then passes them to an EJB layer. If the EJB layer is not secured (see the next section), intruders can call an EJB directly with forged credentials, rendering the application's security meaningless. Thus, without substantial engineering efforts, the only reliable secure source of user information is the WebSphere Application Server infrastructure.
29. Secure every layer of the application (particularly EJBs)
All too often, Web applications are deployed with some degree of security (home-grown or WebSphere Application Server-based) at the Servlet layer, but the other layers that are part of the application are left unsecured. This is done under the false assumption that only Servlets need to be secured in the application since they are the front door to the application. But, as any police officer will tell you, you have to lock the back door and windows to your home as well. There are many ways for this to occur, but this is most commonly seen when EJB components are used as part of a multi-tier architecture when Java clients are not part of the application. In this case, developers often assume that the EJB components do not need to be secured since they are not "user-accessible" in their application design, but this assumption is dangerously wrong. An intruder can bypass the Servlet interfaces, go directly to the EJB layer, and wreak havoc if you have no security enforcement at that layer. This is easy to do with available Java IDEs that can inspect running EJBs, obtain their metadata, and dynamically create test clients. WebSphere Studio is capable of this, and developers see this every day when they use the integrated test client.
Often, the first reaction to this problem is to secure the EJB components via some trivial means, perhaps by marking them accessible to all authenticated users. But, depending on the registry, "all authenticated users" might be every employee in a company. Some take this a step further and restrict access to members of a certain group that means roughly "anyone who can access this application." That is better, but usually still not sufficient: not everyone who can access the application should be able to perform every operation in it. Refer again to the previous section to see how this can be addressed.
30. Do not rely on HTTP session IDs for security
Unfortunately, some applications that use their own security track the user's authentication session through the use of the WebSphere Application Server HTTP Session. This is dangerous. The WebSphere Application Server session is tracked via a session ID (on the URL or in a cookie). While the ID is cryptographically generated, it is still subject to replay attacks, does not time out (except when idle), and can be stolen via network sniffing attacks. The WebSphere Application Server LTPA token, which is created when WebSphere Application Server security is used by the application, is designed to make these types of attack much more difficult. In particular, LTPA tokens have limited lifetimes and use strong encryption, and the WebSphere Application Server security subsystem audits the receipt of invalid LTPA tokens.
In any case, if HTTP Sessions are used for tracking users, all traffic should be sent over HTTPS to prevent network sniffing.
31. Prevent cross-site scripting
Cross-site scripting occurs when a Web application echoes user-supplied input back to the browser without filtering it; if that input contains script, the browser will execute it in the context of your site. So far, this does not seem terribly dangerous, but intruders take this one step further. They trick a user into going to a Web site and entering data into the "evil script", perhaps by sending the user an innocent-looking URL in an email. Now the intruder can use the user's identity to do harm. (See Understanding Malicious Content Mitigation for Web Developers.)
This problem is actually a special case of a much larger class of problems related to user input validation. Whenever you enable a user to enter free-form text, you must ensure that the text does not contain special characters that could cause harm. For example, if a user were to type in a string that is used to search some index, it may be important to filter the string for improper wildcard characters that might cause unbounded searches. In the case of cross-site scripting prevention, you need to filter out the escape characters for the scripting language.
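As a minimal sketch of such output filtering (production code should use a well-tested library rather than a hand-rolled version like this), the method below replaces the HTML metacharacters that allow user input to break out of its context:

```java
public class HtmlEscaper {
    // Replace the characters that let user input break out of an
    // HTML context with their character-entity equivalents.
    public static String escape(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

Applying a filter like this to every piece of user-supplied data before it is rendered prevents injected script from reaching the browser intact.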
32. Store information securely
To create a secure system, you must consider where information is stored or displayed. Sometimes, fairly serious security leaks can be introduced by accident. For example, be cautious about storing highly confidential information in the HTTP Session object, as this object is serialized to the database and, thus, that information could be read from there. If an intruder has access to your database or even raw machine-level access to the database volumes, he or she might be able to see information in the session. Needless to say, such an attack would take a high degree of skill. Unfortunately, the next two attacks are not nearly as difficult.
An even subtler problem occurs with stateful session beans. These beans are serialized to the file system by WebSphere Application Server when memory is short. Here again, confidential information could be inadvertently written to disk.
Logging is probably the most dangerous area of security compromise. Developers often put highly informative messages in log files to aid with debugging. Unfortunately, sometimes this information is confidential. Remember, you never know who will have to look at a log file. Some applications have been known to log social security numbers to files. Needless to say, this raises all sorts of disturbing possibilities. The key message here is to carefully review all logs and other forms of output for security-sensitive information.
33. Invalidate idle users
Invalidate user HTTP Sessions and authentication sessions as soon as you are done with them. Doing so reduces the possibility of idle sessions being hijacked by another user. It also frees up resources within WebSphere Application Server for other work.
HTTP Sessions are destroyed by using the HttpSession.invalidate() method. WebSphere Application Server authentication information is destroyed by directing the user to the ibm_security_logout URL. (Refer to the WebSphere Application Server Information Center for details.) It is worth noting that the most reliable way to eliminate session cookies is simply to exit the Web browser. Many Web sites now recommend this explicitly; perhaps yours should as well.
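As an illustration, a logout page for a form-login application might post to the special ibm_security_logout URL. The context path and the logoutExitPage value below are hypothetical, and the exact parameters should be checked against the Information Center for your release:

```html
<!-- Hypothetical logout form: posting to ibm_security_logout clears
     the WebSphere authentication state, and logoutExitPage names the
     page shown afterward. Invalidate the HttpSession server-side too. -->
<form method="post" action="/app/ibm_security_logout">
  <input type="hidden" name="logoutExitPage" value="/app/loggedout.html">
  <input type="submit" value="Log out">
</form>
```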
So far, we have discussed the numerous steps required for hardening a WebSphere Application Server environment. As should be obvious, there are many difficult and complex steps that must be taken to ensure that your systems are secure. When you are building complex J2EE systems, ensure that sufficient time is allowed for security in the schedule. Adding it after the fact is unlikely to work, which brings us to our final topic.
Troubleshooting security problems
Like the rest of WebSphere Application Server, the security subsystem is fully traced. To trace WebSphere Application Server security, enable tracing of the security-related packages.
Generally speaking, unless the security problem you are experiencing is related to server startup, it is best to start an application server, let it stabilize, and then enable dynamic tracing of security. Then run your test and observe the trace output.
It is also often useful to determine how WebSphere Application Server is managing the LTPA cookie or token. Configure your Web browser to warn you before accepting cookies. Once you do this, your browser will inform you when WebSphere Application Server sends back an LTPA token. If you do not get such a warning after a seemingly successful authentication, the browser or some intermediate proxy has probably swallowed the cookie. It could be that the DNS domain setting in the LTPA SSO page is wrong or that a proxy server is configured improperly. It is often helpful to turn on the Web server plug-in tracing and possibly the WebSphere Application Server Web container tracing (com.ibm.ws.Webcontainer.*) to see if the LTPA token is being generated by WebSphere Application Server and then lost somewhere on its way back to the browser.
This article has covered much ground. We have discussed numerous aspects of security, although we have focused on the core theme of hardening a WebSphere Application Server environment. Hopefully you now have the basic information you need to secure your J2EE systems.
I would like to thank my colleagues Tom Alcott and Bill Hines for their valuable input. I would also like to thank Ching-Yun (C.Y.) Chao and Peter Birk, members of the WebSphere Application Server security development team.
- Enterprise Application Security -- Keys Botzum, 2000
- CIO Insight: What is social engineering? -- Computer Cops, 2003
- WebSphere Application Server Information Center
- IBM WebSphere V5.0 Security: WebSphere Handbook Series, SG24-6573 -- IBM Corp., 2004
- Understanding and Deploying LDAP Directory Services -- Howes, et al., ISBN 0672323168
- Buffer Overflows -- What Are They and What Can I Do About Them? -- Larry Rodgers, CERT, 2001
- Understanding Malicious Content Mitigation for Web Developers
- Java Security Coding Guidelines