IBM WebSphere Developer Technical Journal: Handling Static Content in WebSphere Application Server

This article evaluates several different scenarios for deploying static and dynamic content to a Web server and an application server, such as WebSphere Application Server Advanced Edition 4.0.

Kyle Brown, Senior Technical Staff Member, IBM

Kyle provides consulting services, education, and mentoring on object-oriented topics and Java 2 Enterprise Edition (J2EE) technologies to Fortune 500 clients. He is a co-author of Enterprise Java Programming with IBM WebSphere, the WebSphere AEs 4.0 Workbook for Enterprise Java Beans, 3rd Edition, and The Design Patterns Smalltalk Companion. He is also a frequent conference speaker on the topics of Enterprise Java, OO design, and design patterns. You can reach him at brownkyl@us.ibm.com.



Bill Hines, Senior Consultant, IBM

Bill has over 20 years of I/T experience, particularly in software development in many languages and on many platforms. For the past five years he has been specializing in server-side Java, and in advanced configuration, architecture, performance tuning, administration, troubleshooting, development, and security for the IBM WebSphere platform.



Keys Botzum, Senior Consultant, IBM

Keys has over 10 years of experience in large scale distributed system design and specializes in security. He has worked with a variety of distributed technologies, including Sun RPC, DCE, CORBA, AFS, and DFS. Recently, he has been focusing on J2EE and related technologies. He holds a Master's degree in Computer Science from Stanford University and a B.S. in Applied Mathematics/Computer Science from Carnegie Mellon University.



20 November 2002

© Copyright International Business Machines Corporation 2002. All rights reserved.

Introduction

You just returned from Bulk-Stuff-R-Us with your brand new, extra-large 25-cubic foot freezer. Do you then empty the contents of your refrigerator freezer in the upstairs kitchen and keep everything in the new basement freezer? Certainly not -- it wouldn't be economical. In this case, the storage and cooling capacity of the empty upstairs unit would be wasted, and it would also involve an unnecessarily long trip down the stairs every time you want to enjoy a late night snack of frozen pizza. As an IBM® WebSphere® Application Server (hereafter called Application Server) administrator, you might be doing exactly this: are you keeping your Web server upstairs in the "demilitarized" zone and Application Server in the basement "trusted" zone?


The problem

WebSphere administrators are often concerned with how to make the Application Server environment perform to its fullest extent. I/T management is often concerned with how to leverage their investment in hardware and software, and how to save costs. However, taking the easiest path to configuring the site might not meet those goals. The administrator who is new to Application Server might take the simplest approach and follow the spirit of J2EE™ by deploying the entire application onto Application Server. Using this approach, both run-time components (servlets, JSPs, and EJBs) and static pieces (HTML files and graphic images) of the application are deployed to Application Server. A more seasoned administrator realizes that these static components are served more efficiently by other pieces of the topology (as they were before the advent of "application servers"). The administrator then begins to ask, "Where should the static content really go?"

The packaging structure for J2EE answers this question for all WebSphere applications. In the Servlet 2.2 specification, Sun™ introduced the Web Archive (WAR) file. A WAR file is a ZIP-format archive that contains the files making up a logical Web application. According to the servlet specification, a Web application can contain servlets, JSPs, utility classes, static documents, client-side applets, beans and classes, and descriptive meta-data that "ties all of the above elements together" (Sun, page 43, primarily meaning the web.xml file). Here's a hierarchy example:

Figure 1. Hierarchy example
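In practice, the WAR for the WeatherProj application used later in this article might be laid out roughly as follows. The WEB-INF directory and its web.xml descriptor come from the specification; the other file and package names here are purely illustrative:

/index.html
/images/logo.gif
/WeatherDisplay.jsp
/WEB-INF/web.xml
/WEB-INF/classes/com/example/UpdateWeather.class
/WEB-INF/lib/utility.jar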

WebSphere Application Server 4.0 and 5.0 require that all Web applications be packaged as WAR files to be deployed onto an application server; WebSphere Studio facilitates the packaging of entire Web applications into WAR files. A WAR file can include not only the Java elements for your servlets and JSPs, but also static content (HTML pages, GIF, and JPEG files). Tools such as WebSphere Studio make this easy because they configure the links between your static and dynamic content, validate links, and help identify and fix broken references.

However, even though the WAR file packaging structure is required by WebSphere Application Server, do not assume that all sites should answer the question ("Where should you place your static content?") with "Inside the WAR files that are going to be deployed to WebSphere Application Server." First, you should answer the question, "Where can users get their static content from?"

There are at least three ways in which static content is returned to a user. They are:

  1. Static content from the WAR file through the File Serving feature in WebSphere Application Server (the default case)
  2. Static content from a Web server
  3. Static content from a Caching Proxy Server, such as the one in the WebSphere Edge Server product (hereafter called Edge Server)

Assume that the Web server and Application Server are deployed on different physical machines. Where do the individual files go, and what is the best approach to simultaneously deploy the static content and the dynamic content (servlets and JSPs)?

Let's consider three progressively complex scenarios drawn from the earlier discussion, and then consider the appropriate solutions.

Scenario 1: Serving content through the File Serving servlet

Figure 2 shows the recommended topology for using WebSphere Application Server in a scalable way. Most of our customers implement this scenario. Application servers and Web servers are separated onto different physical machines.

Figure 2. Separated Web Servers and Application Servers

Not shown on this diagram (but implied) are one or more load balancing routers, such as the Network Dispatcher component of WebSphere Edge Server or Cisco® LocalDirector, that balance requests across a set of Web server machines. This configuration has several advantages:

  1. This configuration allows for failover. WebSphere Application Server supports this configuration through its Web server plug-in, which allows a single Web server to support multiple different application servers, and also to support multiple "clones" of a single application server. The Web server plug-in examines each incoming HTTP request and determines which application server instance it should forward the HTTP request to, based on the URL and the information in the plug-in configuration (plugin-cfg.xml) file.
  2. This configuration lets you insert a firewall between the Web server and Application Server, making the application server logic and data more secure. In this configuration, the Web servers reside in a "demilitarized zone" or DMZ.

If you are building a system using this topology, and static files are part of a logical Web application, how are they returned to the client? The answer is through the default file serving behavior of WebSphere Application Server. When a request is made for a file that is contained within a WAR file (that is, its URL falls within the context root of a Web application), a special "hidden" servlet called the file serving enabler is invoked. This servlet fetches the corresponding file from the appropriate directory within the WAR file and returns it as the response to the request.

Enable this function by selecting the File serving enabled check box on the IBM extensions tab of the Web module properties in the Application Assembly Tool. This check box is selected by default. The corresponding entry fileServingEnabled="true" is located in the ibm-web-ext.xmi file in the WEB-INF directory of the WAR file. By examining the corresponding entry in the plugin-cfg.xml file, you see that every request under the context root is designated to be served by Application Server:

<UriGroup Name="Weather/WeatherProj_URIs"> 
        <Uri Name="/WeatherProj/*"/> 
</UriGroup>
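The fileServingEnabled flag itself lives in ibm-web-ext.xmi. The following is only a rough sketch of that entry; the real file contains additional XMI namespace declarations and attributes that vary by WebSphere release, so treat the element and attribute layout as illustrative rather than definitive:

<webappext:WebAppExtension xmi:version="2.0"
        fileServingEnabled="true">
    <!-- other Web module extension settings omitted -->
</webappext:WebAppExtension>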

The advantage of using this approach is its simplicity. The files are only kept on the Application Server file system so you do not need to keep them on the Web server. However, this simplicity comes at a significant performance cost. There are additional network hops needed for all files that are served by Application Server, as shown in Figure 2. Also, serving files this way requires additional processing by Application Server, which diminishes Application Server's ability to process more heavyweight business logic and to handle more transactions.

Scenario 2: Dividing files between the Web server and Application Server

If you disable the file serving feature of Application Server, then only JSP and servlet URLs are served by Application Server. When the plug-in files are regenerated, the entry in plugin-cfg.xml looks as follows:

<UriGroup Name="Weather/WeatherProj_URIs"> 
        <Uri Name="/WeatherProj/UpdateWeather"/> 
        <Uri Name="/WeatherProj/WeatherDisplay.jsp"/> 
        <Uri Name="/WeatherProj/DisplayWeather"/> 
        <Uri Name="/WeatherProj/*.jsp"/> 
        <Uri Name="/WeatherProj/*.jsv"/> 
        <Uri Name="/WeatherProj/*.jsw"/>  
        <Uri Name="/WeatherProj/j_security_check"/>  
</UriGroup>

Application Server has built the plug-in file intelligently on regeneration so that it lets the Web server serve static content. It passes dynamic URLs for servlets and JSPs back to Application Server. In this case, you must also configure the Web server to recognize the WeatherProj static URIs. This is done easily in the Apache/IBM HTTP Server using Alias directives such as:

Alias /WeatherProj/ "C:/IBM HTTP Server/htdocs/WeatherProj/" 
Alias /WeatherProj/images/ "C:/IBM HTTP Server/htdocs/WeatherProj/images/"

Be sure to restart the Web server after this change. The problem that remains is how to get the static content from the WAR file out to the Web servers. Administrators want to do this with a repeatable process to prevent errors. Possibilities for this type of process are:

  1. A combination of operating system shell scripts and FTP commands to unzip the content (if the deployed EAR is not already exploded) and route it to the document root of all Web servers. A build tool such as Ant helps by placing the static content in its own ZIP file during the build process (see the sketch after this list).
  2. Intelligent content management software that is already in use to route documents automatically around the network when they are updated.
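For the first approach, the build can stage the static files in their own archive. The following Ant fragment is a minimal sketch; the WAR name, the dist and build directories, and the file patterns are illustrative assumptions, not taken from the article or from any particular WebSphere tooling:

<!-- Sketch: pull the static content out of the WAR and package it
     separately for distribution to each Web server's document root. -->
<target name="stage-static-content">
    <unzip src="dist/WeatherProj.war" dest="build/static">
        <patternset>
            <include name="**/*.html"/>
            <include name="**/*.gif"/>
            <include name="**/*.jpg"/>
            <exclude name="WEB-INF/**"/>
        </patternset>
    </unzip>
    <zip destfile="dist/WeatherProj-static.zip" basedir="build/static"/>
</target>

The resulting ZIP file can then be pushed to the Web servers with the FTP commands mentioned above and unpacked into each document root.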

Either approach works, but which one you should choose depends on your environment.

Advantages of not using the file serving servlet

The processing capability is split between the Web server and Application Server, so you can vary the capacity allocated to each based on the mix of dynamic and static content in your site. If your site serves a lot of static content, this is also useful from a licensing perspective: for the same task of serving static files, it is more cost effective to add Web servers than to add application servers.

Disadvantages of not using the file serving servlet

The problem with this solution is that it becomes difficult to manage from a content management perspective. To deploy a new file, you need to know which machine it belongs on. Machines can also get out of sync, with a JSP on an application server referencing a resource that is missing from a Web server.

Scenario 3: Web server, Application Server, and Caching Proxy

In this scenario, we place a Caching Proxy Server in front of the Web server as shown in Figure 3. This diagram includes a load balancing solution as described above.

Figure 3. Caching Proxy Server

A caching proxy server (such as the Caching Proxy component of Edge Server) serves previously requested static content from a large in-memory or disk cache, keyed by URL. Some Web servers also provide this function in a more limited capacity, such as the Fast Response Cache Accelerator in the IBM HTTP Server.
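As an aside, on platforms where the Fast Response Cache Accelerator is available, it is enabled through a few httpd.conf directives. The lines below are a sketch based on the directives as we recall them from the default IBM HTTP Server 1.3.x configuration; verify the exact names and availability for your release and platform:

# Enable the Fast Response Cache Accelerator and turn on its cache
AfpaEnable
AfpaCache on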

When a user first requests a static piece of content (GIF, JPEG, HTML file), it is served up by one of the application servers (see the far right of the diagram). This content is cached in the proxy server. All successive requests to this particular URI (from any user) are handled by the proxy server. The content is retrieved from the cache and returned to the user. This continues until the cache entries are invalidated by a time-based or event-based algorithm.

This solution is significantly faster than going through both the Web server and Application Server for each piece of static content. It also allows for a more secure environment, because you can place a firewall not only between the Web server and Application Server, but also in front of the IP sprayer (behind the Caching Proxy server). All files are then protected behind two firewalls. Because the copies held in front of the firewalls exist only in the proxy's cache, you avoid the problems created by placing files in the insecure zone or in the DMZ, where they could be modified to deface your Web site.

The disadvantage of this solution is the added cost of purchasing the Caching Proxy servers and managing them. You can partially offset this cost through the reduced traffic to the back-end application servers, which lowers the overall load on those machines.


A sample performance comparison of the different options

To test the above scenarios, we configured three 1-CPU UNIX servers (each with 512 MB of memory) as follows:

Server 1: This machine was configured with Edge Server, specifically the Caching Proxy component. We did not install the Network Dispatcher load balancing component because we did not test server groups.

Server 2: This machine was configured with the IBM HTTP Server to function as our separate Web server. The WebSphere Application Server plug-in was configured to route requests to Server 3.

Server 3: This machine was configured with WebSphere Application Server Advanced Edition 4.01 and the IBM HTTP Server. The IBM HTTP Server was installed to test additional scenarios showing the performance difference between a co-located Web server and a separate Web server. A separate database server was used for the WebSphere Application Server administrative database, which was not considered significant for this testing. In addition, we installed the Pet Store J2EE sample application from Sun. This application is widely available and familiar to Java developers. Its pages contain static content, graphics of various pets, as shown in Figure 4.

Figure 4. Java Pet Store

We ran our tests using an IBM internal load testing tool (AKStress/IBM Web Performance Tools) running on a single-CPU Intel-based Windows® 2000 machine, an IBM ThinkPad T23 laptop. We simulated 50 browsers with four threads each, for a total of 500 page requests. These tests were rather limited in duration; they were set up primarily to measure serving static content rather than the typical transactional load testing that is conducted over a longer time frame. We performed five to 10 runs for each of the following scenarios:

  1. Hitting the IBM HTTP Server instance on Server 3 (with WebSphere Application Server on the same machine) with Application Server serving static content and with the file serving enabler on.
  2. Hitting the IBM HTTP Server instance on Server 3 (with Application Server on the same machine), but with the file serving enabler turned off and IBM HTTP Server serving the static content.
  3. Hitting the IBM HTTP Server instance on Server 2, which serves static content and passes everything back to Application Server on Server 3.
  4. Edge Server on Server 1 caching static content to IBM HTTP Server on Server 2 to Application Server (file serving enabler on) on Server 3.
  5. Edge Server on Server 1 caching static content to IBM HTTP Server on Server 2 (serving static content) to Application Server (file serving enabler off) on Server 3.

A pictorial representation of the scenarios is shown in Figure 5.

Figure 5. Static content scenarios

Below are the test results from all five scenarios (summarized in Figure 6).

Test results

Scenario   Time to complete (secs)   Pages/sec   Requests   Req/sec
1.         156.5                     3.24        4309.8     27.95
2.         99.4                      5.03        4329.2     43.56
3.         93                        5.48        4311.8     47.24
4.         94.75                     5.29        4345.3     46.00
5.         83.4                      6.00        4326.4     51.89
Figure 6. Test run results

These test runs, while minimal, provide some useful data. First, serving static content from a Web server on a box separate from Application Server provides significant benefits; as the numbers show, there is a significant performance improvement. When IBM HTTP Server is on the same machine as Application Server, or when Application Server serves the static content itself, CPU utilization on the Application Server box is much higher than in the other scenarios. CPU utilization is lower and there are fewer errors during the test runs when Application Server does not serve static content. Because the Application Server machine is generally performing critical work, it is best to move static content serving off of that machine.
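To put the difference in rough numbers from the table above: moving static content off Application Server (Scenario 1, 27.95 requests per second) to a separate IBM HTTP Server (Scenario 3, 47.24 requests per second) is an improvement of roughly 69 percent, and the best configuration (Scenario 5, 51.89 requests per second) runs about 86 percent ahead of Scenario 1.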

Another important observation is that adding the Edge Server caching proxy increased performance over the comparable scenarios without a caching proxy. Scenario 4 is dramatically faster than Scenario 1, even though in both cases Application Server serves the static content through the file serving enabler (the two are not perfectly comparable, because IBM HTTP Server is located on the same machine as Application Server in Scenario 1). Scenario 5 is also faster than Scenario 3; in both of those cases, IBM HTTP Server serves static content from a box separate from Application Server. In this latter comparison the improvement is smaller, but still notable.

Finally, Scenario 5 provides the best performance by combining the use of two key techniques: serving static content from a separate Web server and using a caching proxy server.

As stated earlier, these tests are made intentionally simple to demonstrate a simple point. Rather than relying on these numbers as absolute truth, you should plan on running more realistic tests in your own environment with a heavier and more sustained load. You should add server groups and load balancers in order to determine the best approach for your environment. Further, applications with large amounts of cacheable dynamic content might benefit from the use of dynamic caching. Dynamic caching is a feature that you can configure in Application Server 4.x alone, or in tandem with Edge Server's Caching Proxy.


Choosing the right option

Under what circumstances do you choose each of the previously described options? How can you balance the pros and cons of each solution? The following set of guidelines can help you make decisions about your static content:

  • If performance is not a problem in your WebSphere installation, don't worry about the more complex setups. It is easier and more cost effective to keep your static content inside the WAR files and then have the file serving servlet serve it.
  • If performance is a problem, then moving the static files out of the WAR files and onto the Web server improves the overall performance of your site. However, unpacking and redeploying the static content can hurt productivity unless the process is made repeatable by the techniques mentioned above.
  • If you have performance needs, and can afford the expenditure, then the best solution over the long term is to use a caching proxy server. Products such as WebSphere Edge Server can provide additional performance benefits, including components to perform tasks such as load balancing, dynamic (JSP/servlet) caching, and content management.

No matter what you choose, be sure to rigorously test your specific application and environment to measure the differential gains between the different approaches and to determine which approach is correct for your environment.


What about browser caching?

Most browser users are aware that their browser caches static content. Some are aware that when they access a page containing that same static content again, the Web server returns a "304 Not Modified" response instead of re-serving the content. Both mechanisms help only future requests from the same user. The idea of caching on the server side is that after the first user accesses the content, subsequent users, who are unlikely to have the pages in their own browser caches, can take advantage of the cached results at the server.

This results in less traffic to the back-end application servers. Also, browsers are often set by default to not cache static results returned from SSL (HTTPS) connections. You can change this setting for better browser performance.
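To make the exchange concrete, a repeat request for an already-cached image looks roughly like the following (the host name, file name, and date are illustrative; only the header names and the 304 status code are standard HTTP):

GET /WeatherProj/images/logo.gif HTTP/1.1
Host: www.example.com
If-Modified-Since: Wed, 13 Nov 2002 08:00:00 GMT

HTTP/1.1 304 Not Modified

The browser then renders its locally cached copy instead of downloading the image again.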


Conclusion

This article evaluates several different scenarios for deploying static and dynamic content to a Web server and an application server. These tests are cursory and intended only to establish a general feel for the impact of the different scenarios. Each development organization must evaluate the benefits and costs of splitting static content out for itself. Results depend on how much static content your site contains, how much reserve horsepower you need from WebSphere Application Server, and the other factors discussed above. Finally, you can apply the same principles to later versions of WebSphere Application Server (for example, Version 5.0) and to any J2EE application server.


Acknowledgments

The authors would like to thank Tom Alcott, Harvey Gunther, Ken Ueno, and Cathy Hickey for their comments and suggestions in preparing this article.
