IBM WebSphere Developer Technical Journal: Static and dynamic caching in WebSphere Application Server V5

This book excerpt explains how different types of caching can be used to maximize the performance and minimize the workload of each layer of a distributed Web application in WebSphere® Application Server.

Bill Hines, Certified Consulting I/T Specialist, IBM Software Services for WebSphere

Bill Hines is a Certified Consulting I/T Specialist, working as a mobile consultant in IBM's Software Services for WebSphere group. He provides mentoring, skills transfer, development, troubleshooting, and performance tuning services to IBM customers. Previously, Bill ran his own independent consulting business, and served in key architectural and development positions with other large corporations.



12 May 2004

This article is an excerpt from the new book IBM WebSphere: Deployment and Advanced Configuration, by Roland Barcia, Bill Hines, Tom Alcott, and Keys Botzum, to be published by IBM Press in August 2004.

Introduction

Caching is a well-known technique for improving application performance. Developers have been hand-building caches for decades in numerous technologies, and J2EE applications can benefit from caching just like any other application. WebSphere Application Server recognizes this by including a powerful built-in caching facility known as dynamic caching (dynacache).

When considering caching opportunities in a J2EE application, it is helpful to consider the basic structure of a typical application. J2EE application developers often build their applications to the Model-View-Controller (MVC) paradigm, where there are "layers" of code that handle the user interface (View), business logic (Controller), and data access (Model). Similarly, the physical topology of the hardware that serves the application is often split into tiers:

  • an Edge tier that might have load balancers and caching proxies.
  • a Web tier that would have Web servers.
  • an application tier that would have application servers such as WebSphere Application Server.
  • a data tier that might have database servers or enterprise information systems (EIS).

In these types of distributed architectures, caching can be used to maximize the performance and minimize the workload of each tier, resulting in substantial performance improvements.

Caching is generally spoken of in two groups: static caching, which means the caching of static, or unchanging content, such as HTML, graphics, and JavaScript files; and dynamic caching, which is the caching of run time program execution results. In this chapter, we will start by exploring a basic multi-tier application and discuss the various points at which caching can be applied. We will then move on to detailed discussions of the various caching options, and how to identify potential uses of caching in your application.


Caching opportunities

Figure 1, below, shows the path of a typical J2EE request/response without caching. In order to clearly show the benefit of caching and what is happening to a request/response with various caching configurations, we will use this diagram throughout this chapter. A dynamic request, such as one to a servlet or JSP, would likely traverse the entire path shown below each time it is executed.

Even a simple static request with file serving enabled would still continue through to the application server. Clearly, completing these requests involves many network hops to various servers, and resources consumed on each of those servers. The network hops introduce latency into the system. More importantly, the processing within each tier has a real cost in time and resources. For example, in the application server, there are many layers involved in each request, such as the Web container, the EJB container, and other services such as JDBC and JNDI. One can imagine that just to generate a single page (perhaps via a JSP), a servlet must be executed, EJBs called, backend databases accessed, and then the entire result rendered into a final page for presentation. It does not take much imagination to see that doing this repeatedly when the same results are returned, such as for each of hundreds of users viewing the same product catalog page, results in extra work and resource usage that could potentially be avoided. Caching is a key technique for improving performance.

Figure 1 is a simplified diagram; typically there are many servers at each tier for failover and scalability. For example, the Web tier might consist of several Web servers being fronted by a set of load balancers feeding a set of proxy servers. We include a proxy server in this diagram in order to show the "worst-case" scenario, which includes a maximum number of hops, and to feed our later discussion; however, a minority of environments use proxy servers. It should be noted that if the proxy server is actually a caching proxy server it can dramatically improve performance rather than degrade performance, as we will see in our later discussion on the WebSphere Application Server Caching Proxy. There would also be redundancy at the application server and possibly user directory servers.

Figure 1. J2EE request/response with no caching

Caching implications on performance

In considering Figure 1 in a scenario where a product catalog view is requested by the user, it is not hard to imagine the sorts of resources that are invoked to satisfy the request. To build the list of products, complex SQL joins might be executed to pull in data from the manufacturing and marketing databases, along with calls to CRM systems such as Siebel for customer pricing and information. Consider now the difference in performance if this request were resolved not by invoking and taxing all of these backend servers, but simply returned as HTML from a front-end cache on the proxy server. Consider also that all of these backend servers experience significantly less load when a majority of requests are resolved from cache, rather than consuming their resources. The result is much greater throughput, as many more users can be serviced by the existing hardware and software investment. It is not hard to see that caching is a major performance tuning knob; in some cases it can result in triple-digit performance improvements.

It has often been said that you cannot tune your way out of a badly written application. WebSphere Application Server optimizations, such as tuning the queuing network of threads and connection pools and sizing JVM heaps, can significantly reduce the impact of inefficient code, but the most effective way to improve application performance is to avoid performing the work at all. Dynamic caching offers exactly that kind of relief, particularly to applications that misbehave only under heavy load. Because dynamic caching is supported directly by WebSphere Application Server, applications can now use the IBM-provided infrastructure rather than going to great lengths to build their own "home-grown" caching subsystems (as well as security, persistence, and other types of system code). Generally speaking, applications should avoid custom caching code and instead focus only on the logic that solves the business problem at hand.


Caching static files

The simplest form of caching is the caching of static file content. In this section, we will explain how static files are served in WebSphere Application Server, as well as the opportunities for caching them.

Static file handling

To understand static file handling in WebSphere Application Server, let us set the stage by describing how things are laid out in a typical J2EE application. Recall that a J2EE application is packaged as a single file known as an EAR. The EAR in turn contains a variety of J2EE modules, in particular a WAR file. The WAR file of course contains the Web-related components, including static elements such as HTML files and graphics, plus servlets, JSPs, and other Java classes. In the simplest deployment scenario, the EAR file is deployed to an application server and the entire application is served from there. This includes the application's static content, which is handled by a WebSphere Application Server facility called the File Serving Enabler.

The File Serving Enabler is a component of the application server's Web container, which listens for HTTP requests that appear to be file requests. The File Serving Enabler acts as a Web server and serves up these static files. Essentially, any request that does not match a servlet or JSP URL is assumed to be a file name, and the file serving component will serve up a file by that name from the WAR. When an application is deployed to WebSphere Application Server, one of the steps is to regenerate the plug-in file and copy it to the Web server. When the plugin is regenerated, entries are created that act as "pass" rules, which tell the Web server to always forward certain requests onward to WebSphere Application Server and not try to serve them up itself. When the File Serving Enabler is turned on, the entry in the plugin file is wild-carded from the context root, as shown below in the code in Sample 1.

Sample 1. plugin-cfg.xml entry with file serving enabled

<Uri Name="/pts/*"/>

The File Serving Enabler can be turned off by unchecking this option for the Web module in the Application Server Toolkit (ASTK), or by editing the ibm-web-ext.xmi file. When this is done, the plug-in entry for this Web module appears quite differently: it includes pass rules only for the non-static content, as shown below in Sample 2.

Sample 2. plugin-cfg.xml entry with file serving disabled

<Uri Name="/pts/PTSMainServlet"/>
<Uri Name="/pts/*.jsp"/>
<Uri Name="/pts/*.jsv"/>
<Uri Name="/pts/*.jsw"/>
<Uri Name="/pts/j_security_check"/>

Consider a Web request sent from a browser for a simple HTML page with a few embedded graphics from our J2EE application, which has been deployed as described above, with WebSphere Application Server file serving enabled. The Web server will see the request, find the pass rule in the plug-in file, and correspondingly forward the request to WebSphere Application Server. At the Web container, the File Serving Enabler will read the file from the directory where the application has been deployed and send it back in the response. When the user's browser receives the HTML file, it will parse and display it. During this process, the browser will see that it needs a few graphics files to render the page and will send requests for those, which will be handled similarly by the File Serving Enabler.

As you can probably guess, although this is the simplest deployment scenario, it is not the most efficient in terms of performance. Prior to WebSphere Application Server V5, deployers would often separate static content out to the Web server and disable the WebSphere Application Server File Serving Enabler for performance. After all, Web servers were designed to serve static content and can certainly do this faster than an application server. The greater the amount of static content, the more benefit gained from this technique. However, this creates complications for deployment, since content must now be deployed to both a Web server and an application server. It also complicates the packaging process, as we can no longer simply deploy a single WAR file. (See Resources for the techniques and trade-offs associated with this approach.) In essence, there is a trade-off between packaging simplicity and the performance improvements available through caching. Fortunately, the WebSphere Application Server V5 Web server plug-in can remedy this situation: as we will discuss later, the plug-in can be configured to cache static content at the Web server while allowing it to remain in the WAR.

Browser caching

To most Web users, the most familiar form of cache is the browser cache. Web browsers typically retrieve files from Web requests, such as graphics files, cache them as temporary Internet files on the user's local hard drive, and then serve them from there upon the next request. There are two benefits to this: the user gets the file much faster, and the Web server is free to do more work for other users. However, this cached copy of the file is useful to only one person: the one on whose hard drive it is cached. For caching that benefits many users, we have to cache files at the other tiers of the architecture. For example, consider 1000 requests for a page with a 100 KB graphic image. With browser caching alone, the first request from each of the thousand users will cause an end-to-end request/response from the browser to the file serving process on the application server and back, resulting in WebSphere Application Server serving a total of 100 MB of redundant data. However, as we will see in the balance of this chapter, with other forms of caching, the first user's request results in the file's placement in a cache that is closer to the users and less expensive to retrieve from for the remaining 999 requests, so WebSphere Application Server serves a total of 100 KB, in a single request, for the same load.
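Whether a browser caches a given response at all is driven by HTTP response headers. As a minimal sketch (the one-hour lifetime shown is illustrative, not a recommendation), a servlet can mark its output as browser-cacheable like this:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal sketch: marking a response as browser-cacheable via HTTP headers.
public class CacheableContentServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Cache-Control governs HTTP/1.1 caches (including the browser cache);
        // Expires does the same job for older HTTP/1.0 caches.
        resp.setHeader("Cache-Control", "public, max-age=3600");
        resp.setDateHeader("Expires", System.currentTimeMillis() + 3600L * 1000L);
        resp.setContentType("text/html");
        resp.getWriter().println("<html><body>Cacheable content</body></html>");
    }
}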

Web server caching

Most Web servers can cache static files. IBM HTTP Server, which comes with WebSphere Application Server, has a facility called the Fast Response Cache Accelerator (FRCA) on AIX and the Adaptive Fast-Path Architecture (AFPA) on Windows that can cache static content (both platforms) as well as dynamic content (Windows only). This is a kernel-based cache and, by far, the fastest cache of any discussed here. However, the static cache is limited to files served by the HTTP server itself; it does not apply to requests forwarded to the application server via pass rules. To use this cache for our scenario, we would have to split the static files out separately after deployment and place them on the Web server, rather than use the simple deployment and the built-in WebSphere Application Server file serving facility. Figure 2, below, shows the routing for a request that is served from the Web server's static cache.

Figure 2. Request/response routing for files served from a Web server's static cache
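As a point of reference, enabling AFPA in IBM HTTP Server on Windows comes down to a few httpd.conf directives. A minimal sketch follows; directive names, defaults, and availability vary by platform and release, so consult the IBM HTTP Server documentation for your installation:

# Minimal sketch: enabling the AFPA kernel cache in httpd.conf on Windows.
# Directive availability varies by platform and release.
AfpaEnable
AfpaCache on
AfpaLogFile "C:/IBMHttpServer/logs/afpalog" V-ECLF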

WebSphere Application Server plug-in static caching

Another possibility for caching at the Web tier, which is specific to WebSphere Application Server, is to cache static files with the WebSphere Application Server plug-in. The WebSphere Application Server V5 plug-in incorporates the Edge Side Includes (ESI) feature for caching and assembly of pages and page fragments (both static and dynamic). ESI is an open standard written by IBM, Akamai, and others, for page/fragment caching and assembly. Fragments written with standard ESI markup language can be assembled together at the edge, but no special markup is needed for caching. Products such as IBM's WebSphere Edge Server, WebSphere Application Server V5 Network Deployment, and Akamai EdgeSuite are ESI-aware. (For more information on ESI, visit http://www.esi.org.)

For the purposes of this discussion, we will focus on the ability of the plug-in to use ESI to cache static content at the Web server, where the plug-in is installed. Referring back to our network topology diagram, this means that the flow of network hops for retrieving a static file cached with ESI would be as shown above in Figure 2, essentially the same as if it was served from the Web server's static cache. However, the benefits to using the plug-in are that static content can be left in the WAR, simplifying packaging, and the ESI cache can also cache dynamic requests, which will be discussed later. Caching at the plug-in reduces the burden on the file serving facility back at the application server. This does not perform as well as splitting the static content off separately to the Web server, but it is less administrative work for shops unwilling to take on the additional deployment steps.

Shown below in Sample 3 are the entries in the plugin-cfg.xml file for configuring ESI. By default, ESI is enabled and the cache size is set to 1024 KB (1 MB) of memory in the Web server in which the plug-in is running. When this space is filled, entries are purged from the cache based on their pending expiration (those closest to expiration are purged first). The invalidation monitor setting is used for communication with WebSphere Application Server: it watches for messages from WebSphere Application Server that say when a given entry or group of entries is now invalid and should be purged from the cache. It also tracks which entries are in the static cache and accumulates other statistics.

Sample 3. plugin-cfg.xml entries for configuring ESI

    <Property Name="ESIEnable" Value="true"/>
    <Property Name="ESIMaxCacheSize" Value="1024"/>
    <Property Name="ESIInvalidationMonitor" Value="false"/>

Static cache entries in the ESI cache time out after 300 seconds (5 minutes) by default. This can be changed by adding the property shown below in Sample 4 to the application server's JVM command-line arguments. The value is in seconds, so the value in Sample 4 specifies a two-minute timeout.

Sample 4. JVM property for changing the ESI static cache timeout

    -Dcom.ibm.servlet.file.esi.timeOut=120

WebSphere Application Server comes with two applications that are used for caching. Both are found in the installableApps directory and are described below.

  • Cachemonitor.ear: Allows monitoring of both the ESI cache at the plug-in and the dynamic cache in the application server, and provides some basic operations, such as clearing the cache contents.
  • DynaCacheESI.ear: Consists of a single servlet, which acts as an external cache adapter when it is installed at the application server. This enables the application server cache engine to gather cache statistics to display in the cache monitor application, and also to send cache entries and related information to the ESI cache at the plug-in.

By default, the plug-in will cache static files, but the DynaCacheESI adapter is not installed, meaning that the static cache entries will only expire after their configured timeout. By installing the DynaCacheESI adapter, WebSphere Application Server can send invalidation messages to the plug-in when the static files are updated, as well as provide the interface required for the Cachemonitor application to gather statistics and clear caches.

To use these applications, set the ESIInvalidationMonitor property in the plugin-cfg.xml file (shown in Sample 3) to true, install both applications, and then regenerate your plug-in and restart the application server. Running one of the sample applications should result in cache entries. Figure 3 shows some ESI cache statistics from the cache monitor application, and Figure 4 shows some sample cache contents. Notice that there is a mix of graphics and JavaScript files in the cache, from both the monitor application itself and the PetStore demo application. Static files are cached with no additional configuration.

Figure 3. ESI statistics from the WebSphere Application Server CacheMonitor application
Figure 4. ESI cache contents from the WebSphere Application Server CacheMonitor application

Security Note: Under certain circumstances in WebSphere Application Server V5.0 and V5.1, unauthorized users may be able to access secured static content from the ESI cache without authenticating through the WebSphere security infrastructure. Interim fix PQ81192 is available from the IBM WebSphere Application Server support Web site.


Dynamic caching

As described above, static caching is a valuable benefit, increasingly so with the amount and size of static elements in an application. However, with the current trend toward more personalized, portalized, and dynamic sites, dynamic caching is much more useful. In addition to the reasons cited above, dynamic requests tend to use far more enterprise resources in building a response. Dynamic content often requires the most resource-intensive work in an enterprise system, so dynamic caching can dramatically enhance performance. Revisit Figure 1 and consider the amount of computing power required at each tier to build these responses. CPU utilization, memory, and connections are expensive at each tier, particularly in the application server's Web and EJB containers, and hits to backend database, host, EIS, and other servers tend to be expensive as well. With every request served from the dynamic cache, usage of these resources is spared, and they are freed up to handle many more concurrent requests. This is much preferable to buying additional servers to handle peak load.

Dynamic caching is more complex than static caching and requires detailed knowledge of the application. One must consider the candidates for dynamic caching carefully since, by its very nature, dynamically generated content can be different based on the state of the application. Therefore, it is important to consider under what conditions dynamically generated content can be cached to return the correct response. This requires knowledge of the application, its possible states, and other data, such as parameters that ensure the dynamic data is generated in a deterministic manner.

The dynamic cache in WebSphere Application Server V5 can be administered via the administrative console. Navigating through the admin console menu from Servers => Application Servers => server1, you would see Dynamic Cache Service under Additional Properties. Clicking on this would present a page similar to Figure 5, below.

Figure 5. Dynamic cache service settings in the WebSphere Application Server admin console

Several of the available dynamic cache options can be seen here. Notice in particular that the cache size is in entries, as opposed to a physical memory size as we saw when we discussed the ESI caching facility of the plug-in. This makes sizing the cache tricky; it can be done by monitoring the cache statistics to watch for cache evictions during load testing and peak production periods. The Default Priority field shown here is related to eviction. Evictions are determined by a Least Recently Used algorithm. The priority is essentially the number of free passes an entry can have to stay in the cache once its number comes up from the LRU algorithm. This setting is rarely changed here, but you might keep this in mind when setting priorities on individual cache entries when you configure them. Giving them higher numbers will keep them in the cache longer, so you might want to do that for entries that are expensive to build. You might also notice here that there is a facility to offload cache entries to disk when the cache becomes full, rather than to purge them from the cache completely. While the pages will serve more slowly from the disk offload area, this is still generally better than invalidation, in which case the page results would have to be completely rebuilt by running the transactions again.

To use dynacache, servlet caching also needs to be enabled for the Web container. Navigating through the admin console menu from Servers => Application Servers => server1 => Web Container would present a page with the options below in Figure 6. Notice that the Enable servlet caching box is checked.

Figure 6. Web container settings to enable servlet cache

Dynamic caching options

The following features are available as part of the WebSphere Application Server dynamic cache service. All of these services are provided by the same caching engine in WebSphere Application Server, hence their configuration is similar. Each will be discussed later in more detail.

  • Servlet/JSP cache: The servlet/JSP caching facility catches the response from a servlet/JSP invocation and caches the HTML results. Servlets are configured for caching via entries in the cachespec.xml file (described below). They can be designated by their URI path or by class name. The latter option is more inclusive, since it will catch any invocation of the servlet regardless of its aliases. Usually, the servlet is cached by its alias, since different aliases often imply different actions. Determining whether to cache on URI or class name depends entirely on the application. In most cases, the cache entry for the servlet needs to be further qualified by additional inputs, such as request parameters or values from the user's session. We will explain this further in our section on specifying cache entries.

    When preparing to execute a servlet, the dynacache engine filters the service() method of each servlet that is about to be executed and determines whether it matches any cache ID entries based on the parameters present. If a match is found, the cached results are returned rather than executing the service() method and all of the work that would have been done beyond it. This avoids all of the processing that would have been done by the servlet, resulting in a substantial performance boost. If there is no cache entry for this ID, the service() method is executed as normal, and the results are caught and placed in the cache before they are returned to the user.

  • Command cache: The command cache can cache the results of invoking server-side commands that implement the WebSphere Command Pattern interfaces. This is useful for caching intensive operations, such as complex SQL joins or host requests. The results here are typically an object or container of objects, rather than the HTML results held in the servlet/JSP cache. The command pattern is widely used and relatively simple to retrofit to an existing application, should you feel it could benefit from this type of caching; retrofitting involves writing thin command wrappers around the existing code. Similar to how the caching engine filters on a servlet or JSP's service() method, the command cache filters on a command's execute() method to determine whether the command is cacheable and whether there are already results in the cache. (A minimal code sketch follows this list.)

  • Web services cache: This cache holds the results of Web services SOAP invocations. Web services SOAP calls can be expensive, primarily due to the extensive XML parsing that must happen on both ends. Cache identifiers can be built from both HTTP headers and the SOAP envelope; in fact, the entire SOAP envelope can be hashed and used as a cache ID. (Refer to the WebSphere Application Server V5 Information Center for more information.)

  • Distributed object cache: This cache holds Java objects for use in a distributed environment. For example, objects may be stored by one application server and then retrieved by other application servers in the same Data Replication Service (DRS) cluster. However, this cache is only available in the Enterprise edition of WebSphere Application Server. Cache instances are retrieved by a JNDI name that is configured on the Cache Instance resource (which is similar to a JDBC resource). This cache can even be configured so that objects are persistent: flushed to disk when the server is stopped and loaded again upon restart. Individual entries can be designated as non-shared, push (sent to all servers when they are cached), or pull (only their names are sent, and values are retrieved only when "pulled" by other servers). (Refer to the WebSphere Application Server V5 Enterprise Information Center, under Using the DistributedMap interface for the dynamic cache, for more information; a sketch follows the security note below.)
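To make the command cache concrete, here is a minimal sketch of a cacheable command. It assumes the WebSphere Command framework pattern of extending CacheableCommandImpl and implementing isReadyToCallExecute() and performExecute(); the class name and business methods are hypothetical.

import com.ibm.websphere.command.CacheableCommandImpl;

// Minimal sketch of a cacheable command; class and business names are
// hypothetical. Dynacache intercepts execute() and, when the cache ID built
// from getProductCategory() matches an existing entry, returns the cached
// output properties instead of calling performExecute().
public class ProductListCommand extends CacheableCommandImpl {
    private String productCategory;  // input property: forms the cache ID
    private String[] productList;    // output property: the cached result

    public void setProductCategory(String category) { productCategory = category; }
    public String getProductCategory() { return productCategory; }
    public String[] getProductList() { return productList; }

    public boolean isReadyToCallExecute() {
        return productCategory != null;  // inputs must be set before execute()
    }

    public void performExecute() throws Exception {
        // The expensive work (SQL joins, host calls) runs only on a cache miss.
        productList = lookupProducts(productCategory);  // hypothetical helper
    }

    private String[] lookupProducts(String category) {
        return new String[0];  // placeholder for the real data access
    }
}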
Security Note: Dynacache is not architected for maintaining security-sensitive information. While the use of DRS encryption can protect information from attack by processes outside of a WebSphere Application Server environment, there are no provisions for protecting information within the application server. Thus, any information that is cached (servlet cache, object cache, Web services cache, or command cache) can be accessed by any application within the same application server. If you are unable to trust applications within the same application server, do not place sensitive information in the cache. Fortunately, since caching is generally used for shared information, this restriction is usually not a major problem.
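Similarly, here is a minimal sketch of using the distributed object cache through the DistributedMap interface (which extends java.util.Map). The JNDI name and the catalog-building helper are hypothetical, and the cache instance must already be configured as a resource.

import javax.naming.InitialContext;
import com.ibm.websphere.cache.DistributedMap;

// Minimal sketch of using a distributed object cache instance.
// "services/cache/catalogCache" is a hypothetical JNDI name that must match
// a configured Cache Instance resource.
public class CatalogCacheHelper {
    public Object getCatalog(String category) throws Exception {
        InitialContext ctx = new InitialContext();
        DistributedMap cache =
            (DistributedMap) ctx.lookup("services/cache/catalogCache");
        Object catalog = cache.get(category);
        if (catalog == null) {
            catalog = buildCatalog(category);  // hypothetical expensive build
            cache.put(category, catalog);      // now visible to the DRS cluster
        }
        return catalog;
    }

    private Object buildCatalog(String category) {
        return "catalog for " + category;  // placeholder for the real lookup
    }
}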


Invalidation

We saw that, with static files, invalidation is pretty easy. HTML files often have expiration tags in their headers and other types of files have a simple timeout configured for them. However, with dynamic caching, invalidation is more complex. Dynamic caches have the following types of invalidation.

Dynamic caching concepts

  • Simple timeout: On the cache specification for a particular cache entry, you can specify a default timeout value. This could be short term (seconds) or long term (years); it could even be infinite.
  • Dynamic invalidation: Suppose you are caching long-term data, such as a baseball team's schedule for the current season. In this case, a reasonable timeout value could be the end of the season, before playoffs start. However, a rainout can easily change that. For situations like this, dynamic invalidation can be used. This can be done by means of cache entries that are designated as "invalidate only" (as we will see in the following section on creating cache specifications), or via a programming API provided with the dynacache engine. In the former case, you might configure an invalidation cache entry for the servlet that is executed to update the baseball team's schedule, so that when the dynacache engine sees this servlet executed, it knows to invalidate any cache entries that hold schedule data for this team. Your developers might choose instead to call an invalidation API directly in their code, which is probably less desirable: for portability, applications should execute with little or no knowledge of their environment. (A hedged sketch of the API approach follows this list.)
  • Cache eviction: Cache entries are invalidated when the cache is full and they are purged to make way for new entries. For a static cache, the entry closest to timing out is purged first. For a dynamic cache, a Least Recently Used algorithm determines which entries should be purged, taking into account the priority assigned to each cache entry (see the earlier discussion of how this works).
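As a hedged sketch of the API approach mentioned above (verify the exact class and method names against the com.ibm.websphere.cache JavaDoc for your release; the dependency ID naming convention here is invented for illustration):

import com.ibm.websphere.cache.Cache;
import com.ibm.websphere.cache.DynamicCacheAccessor;

// Hedged sketch of programmatic invalidation. "schedule:" + teamId is a
// hypothetical dependency ID convention; entries declaring that dependency
// ID in cachespec.xml would be dropped as a group.
public class ScheduleUpdater {
    public void onScheduleChanged(String teamId) {
        Cache cache = DynamicCacheAccessor.getCache();
        if (cache != null) {  // null when the dynamic cache service is disabled
            cache.invalidateById("schedule:" + teamId, true);
        }
    }
}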

Specifying cache entries

Cache entries are specified in the cachespec.xml file, which holds cache specifications and invalidation policies. It can be placed in the <was-root>/properties directory for global cache specifications (a sample file is already there that will cache the snoop sample servlet if it is renamed to cachespec.xml), but more likely you will want to define these files per application module and place them in the WEB-INF directory of each Web module or the META-INF directory of each enterprise bean module. These files are reloaded by the application server at a configured time interval. Below are example cachespec.xml files for a JSP (Sample 5), a command (Sample 6), and a Web service call (Sample 7). Each of these will be discussed in turn, following a general description of the entries in cachespec.xml. (See the WebSphere Application Server Information Center for more detailed instructions on configuring cache policies, or review the cachespec.dtd file in the application server properties directory.)

Components of the cachespec.xml file

  • <cache>: The root element of the cachespec.xml file, appearing only once. It holds multiple <cache-entry> stanzas.
  • <cache-entry>: There is one of these for each item to be cached. Cache entries can describe items to be cached, items that will invalidate other cache entries, and dependencies between cache entries. We will see examples of each below.
  • <class>: Identifies the type of entry. Possible values are command, servlet, and webservice.
  • <name>: The name of the item to be cached. For commands, it should be the fully qualified package and class name, including the .class suffix. For servlets or JSPs, it should be the URI path relative to the application's context root. For example, if the full URL to the JSP is http://www.myco.com/myapp/products/catalogList.JSP, the value here would be /products/catalogList.JSP. If you were using the global cachespec.xml file from the application server properties directory, the entire URL would be necessary. If you have multiple servlet aliases, multiple <name> stanzas can be included here to list each alias that you intend to cache.
  • <property>: Used to set optional properties on a cache entry, such as whether it can be cached outside of WebSphere Application Server and whether the entry can be persisted to disk. (There is a list of definable properties in the WebSphere Application Server V5 Information Center.) There can be multiple properties per cache entry.
  • <sharing-policy>: Determines whether a cache entry should be shared between distributed caches and, if so, how that should occur. Distributed caches will be discussed later.
  • <cache-id>: Where the cache identifiers for each cache entry are configured. This is a key, similar to a database key, that identifies a particular cache entry as unique. Caching is generally not as simple as specifying a particular servlet, JSP, or command to cache; these must be qualified with attributes that distinguish one cache entry from another. For example, caching a servlet that returns a weather forecast by the URI /weather/forecast alone would not be desirable: think of what would happen if one user asked for the forecast for Miami, that result was cached, and the following user invoked the same URI to get the forecast for Anchorage. In this case, there is probably a request parameter sent along with the servlet invocation that carries the city name or zip code for the forecast. This is the piece of data that would ensure the cached results are unique and match the request, so that, for example, Miami forecasts are always returned to users asking for Miami weather. This parameter should be designated as required, since the cache entry is meaningless without it. Sample 5, below, shows such a cache specification. There may also be other parameters as part of the cache ID, such as whether the user is asking for a short-range or long-range forecast. Each component of the cache ID is specified in a <component> tag, as shown in the examples below.

Not surprisingly, how the cache ID is computed depends on the technology. Servlet/JSP cache IDs can be composed of request parameters and attributes, path information, header values, the request locale, cookie values, and even HTTP session values. It is best to avoid using server-side values, such as HTTP session data, since using them would prevent moving the cache entry out past the application server's cache (hence, it could not be marked edge-cacheable). We will discuss edge caching later. Using path information is useful for applications based on the Struts programming model, since servlet names/aliases can be dynamic, with only the .do suffix to identify them. For applications that use controller servlets, the same servlet may be marked as cacheable when its action parameter has the value "list", and marked as invalidating when the action parameter has the value "update". The cache ID for a command can be based on a method in the command object, while Web services cache IDs are based on information from the SOAP request. While not shown here, when using the distributed object cache, cache IDs are specified programmatically. There can be multiple cache IDs for each cache entry. Now that we are familiar with the basic structure of the cachespec.xml file, we will examine a few sample entries.

Sample 5. Sample cachespec.xml file for a JSP

<cache>
   <cache-entry>
      <class>servlet</class>		            
      <name>/displayForecast.jsp</name>
      <property name="EdgeCacheable">true</property>
      <cache-id>
         <component id="zip" type="parameter">
            <required>true</required>
         </component>
         <priority>3</priority>
         <timeout>20</timeout>
      </cache-id>
   </cache-entry>
</cache>

Sample 5, above, shows a simple cachespec.xml file with a single cache entry for a JSP. You might notice that the class is servlet, while the name clearly identifies a JSP. This is because JSPs are compiled into servlets on the backend, so from the application server's perspective, they are essentially the same thing. This JSP has a property stating that it is "edge cacheable" (we will discuss caching on the edge later). Notice that the cache ID is composed of a single request parameter, the zip code for the forecast. This means that if seven users request forecasts for seven different zip codes, there will be seven entries in the cache, with the zip code serving as their key. Subsequent users requesting forecasts for any of these seven zip codes will be served the forecast for the appropriate area. This cache entry has been configured with a priority of 3, meaning it gets three "free passes" when it is designated as ready to be invalidated by the LRU algorithm, mentioned earlier in our discussion on invalidation. The entry is set to a default timeout of 20 seconds; however, it could be invalidated earlier by means of a programming API, another cache entry that is designated to invalidate it, or purging due to a full cache.

Sample 6. Sample cachespec.xml file for a command

<cache>
   <cache-entry>
      <class>command</class>
      <name>com.myco.productapp.ProductListCommand.class</name>
      <sharing-policy>not-shared</sharing-policy>
      <cache-id>
         <component type="method" id="getProductCategory">
            <required>true</required>
         </component>
         <priority>1</priority>
         <timeout>3600</timeout>
      </cache-id>
   </cache-entry>
</cache>

Sample 6 shows a simple cachespec.xml file with a single cache entry for a command. The <class> is, appropriately, "command", and the name is the fully qualified package and class name. This entry is not to be shared between distributed caches, as designated by the <sharing-policy>. The cache ID is obtained this time from the result of calling the getProductCategory() method on the ProductListCommand object. This indicates that product list entries are cached by some sort of category; for example, sporting goods, clothing, or hardware. As you might expect for something like a product list, this cache has a longer timeout value of one hour (3600 seconds), as product catalog entries are not likely to change very quickly.

Sample 7. Sample cachespec.xml file for a Web service call

<cache>
   <cache-entry>
      <class>webservice</class>
      <name>/soap/servlet/soaprouter</name>
      <cache-id>
         <component id="" type=SOAPAction>
            <value>urn:stockquote-lookup</value>
         </component>
         <component id="Hash" type="SOAPEnvelope"/>
         <timeout>600</timeout>
         <priority>1</priority>
      </cache-id>
      <cache-id>
         <component id="" type="serviceOperation">
            <value>urn:stockquote:getQuote</value>
         </component>
         <component id="Hash" type="SOAPEnvelope"/>
         <timeout>600</timeout>
         <priority>1</priority>
      </cache-id>
   </cache-entry>
</cache>

Sample 7 shows a simple cachespec.xml file with a single cache entry for a Web service call. The class designation is obvious, and the name is the URI path to the soaprouter servlet, familiar to those who work with Web services. There are two cache IDs configured for this entry: the first combines the SOAPAction header (a stock quote lookup) with a hash of the SOAP envelope, and the second combines the getQuote service operation with a hash of the SOAP envelope. With multiple cache IDs, the caching engine parses from the top down and uses the first one that fits the particular request it is working with at run time, whether inserting a new entry into the cache or returning a cached entry for a request.

Now let us look at a more complex example of a cachespec.xml file, shown in Sample 8, below.

Sample 8. Sample cachespec.xml file for a controller servlet

<cache>
   <cache-entry>
      <class>servlet</class>
      <name>/ProductControllerServlet</name>
      <cache-id>
         <component id="action" type="parameter">
            <value>view</value>
            <required>true</required>
         </component>
         <component id="productID" type="parameter">
            <required>true</required>
         </component>
         <priority>3</priority>
         <timeout>20</timeout>
      </cache-id>
      <cache-id>
         <component id="action" type="parameter">
            <value>view</value>
            <required>true</required>
         </component>
         <component id="category" type="parameter">
            <required>true</required>
         </component>
         <priority>3</priority>
         <timeout>20</timeout>
      </cache-id>
      <dependency-id>category
         <component id="category" type="parameter">
            <required>true</required>
         </component>
      </dependency-id>
      <invalidation>category
         <component id="action" type="parameter" ignore-value="true">
            <value>update</value>
            <required>true</required>
         </component>
         <component id="category" type="parameter">
            <required>true</required>
         </component>
      </invalidation>
   </cache-entry>
</cache>

Sample 8 shows a cachespec.xml file with a single cache entry for a controller servlet, such as you might find in a MVC-type application. From the top down, you see:

  • A cache ID for when the product controller servlet is run with a parameter named "action" with a value of "view", using the product ID. This probably indicates a view for a single product's information. Notice that not only is the parameter name of "action" provided as part of the cache ID, but a required value of "view" for that parameter is provided as well.
  • A cache ID for when the product controller servlet is run with the parameter "action" having the value "view" and a category parameter is provided. This would be an example of a request for a page of products in a certain category.
  • A dependency ID named "category" and using the parameter name "category" and its value as the cache ID. This and other cache entries can have the same dependency ID and will be invalidated as a group if certain events occur (such as an invalidation rule firing).
  • An invalidation rule named "category", which causes it to be linked to the category dependency ID. It requires that the "action" parameter of this servlet have the value "update". Based on the linkage, when the servlet is run in update mode for this category, all cache entries for both the individual product view and the list for that category will be invalidated.

Planning for caching

Caching is best done as a planned activity, not as something retrofitted onto an existing application. The latter is quite possible, but not as easy or fruitful. Obviously, the best candidates for caching are operations (or files) that are large, slow, or expensive to produce. They should also be public: the more users that can take advantage of a given cache entry, the better. Therefore, private data is not a good candidate for caching. Pages that have been personalized to include information specific to a particular user make poor candidates for caching, but there are ways around that with good design. Consider an airline frequent flyer page that presents the user's name in a welcome message, along with their current status and point total, promotions available based on that status, and other generic promotions and messages for the general populace. If this were all one JSP file, it would not be very useful for caching, since it has data specific to one user. However, if the page were broken into individual JSP fragments, as shown in Figure 7, below, there would be much greater potential, particularly if those fragments were cached and assembled at the edge with ESI.

Figure 7. Sample page broken into cacheable fragments
  • User.jsp: This fragment has the user's name, status, current points, and other personal information. It would probably not be cached, because only one user could benefit from the cache entry.
  • Status.jsp: This fragment would hold the status (such as Platinum, Gold, etc.) and special promotions and messages specific to that status. The cache ID would likely be the status name, and there would likely be one entry in the physical cache for each status, shared by all users at that level.
  • Main.jsp: This fragment would contain the generic portions of the page; perhaps general news and promotions, and links for contact info or search. This one is an excellent candidate for caching. (A sketch of the assembled page follows this list.)
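As a minimal sketch of how the assembled page might look (file names follow Figure 7; a real page would also pass request data to the fragments), the outer JSP simply includes each fragment, and each include is then a separately cacheable unit:

<%-- Minimal sketch: each jsp:include executes (or is served from cache)
     independently, so Status.jsp and Main.jsp can be cached even though
     User.jsp is unique per user. File names follow Figure 7. --%>
<html>
  <body>
    <jsp:include page="User.jsp"/>   <%-- personal: typically not cached --%>
    <jsp:include page="Status.jsp"/> <%-- one cache entry per status level --%>
    <jsp:include page="Main.jsp"/>   <%-- generic: cached for all users --%>
  </body>
</html>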

To introduce another common dilemma with caching, let us assume that there is also a fragment holding the weather forecasts for five cities. Each user could designate which cities to show forecasts for (perhaps the ones they visit most often), and those forecasts would be there each time the user visited the page. The forecast is slow to retrieve; it must contact a backend computer. At first glance this seems like a good cache candidate; after all, the forecast is public, shared data. However, not many users are likely to have the same five cities on their fragment, and the possible permutations would mean a potentially huge number of entries in the physical cache. A possible solution here is not to use the servlet/JSP cache, but to cache the forecasts for all major cities using the command cache. Then, no matter which cities the user has designated, the forecasts are likely to come from the cache rather than the slow transaction. If the list of potential cities were kept small, the cities could still be individual fragments themselves, pulled into the larger fragment of five favorites. (A hypothetical command cache entry for this scenario is sketched below.)
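Following the pattern of Sample 6, a hypothetical cache entry for such a forecast command, keyed by city, might look like this (the class and method names are invented for illustration):

<cache>
   <cache-entry>
      <class>command</class>
      <name>com.myco.weather.ForecastCommand.class</name>
      <cache-id>
         <component type="method" id="getCity">
            <required>true</required>
         </component>
         <timeout>1800</timeout>
      </cache-id>
   </cache-entry>
</cache>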


Caching further out

One of the powerful features of dynacache is that once content has been configured for caching, the cache can potentially be moved close to the edge of the network, closer to the user population. This eliminates network hops. For example, a cached page fragment is normally served from the app server, but that fragment can be pushed out to the Web server or even a proxy server. WebSphere Application Server manages this caching "further out" by using external caches.

External caches

The final item of note in the page displayed in Figure 5 is External Cache Groups. The dynamic cache service in WebSphere Application Server needs to keep track of external caches with which it communicates. In the section on the ESI processor, it was noted that you had to install an application called DynaCacheESI. If you have done that, clicking on External Cache Groups would show the ESIInvalidator servlet that was installed as a result. Essentially, any external cache (such as the ESI cache at the plug-in, the WebSphere Application Server V5 Network Deployment Caching Proxy, or the AFPA/FRCA cache in IBM HTTP Server) must have an external cache adapter and group defined here. Cache entries in external caches can be grouped together for easier invalidation of related entries; they can be invalidated as a group rather than one by one.

Dynamic caching at the plug-in

Earlier, we configured the WebSphere Application Server plug-in at the Web server for static file caching. Those steps (changing a parameter in the plugin-cfg.xml file and installing the DynaCacheESI application on the application server) also set up the plug-in for dynamic caching. When the DynaCacheESI application was installed, it defined to the application server an external cache residing at the ESI plug-in on the Web server. If a particular cache entry has the property EdgeCacheable set to true, as shown in Sample 5, then when it is requested it will be pushed out to, and served from, the cache at the plug-in, further lightening the application server's load on future requests. The application server will notify the plug-in cache when the entry is invalidated. As mentioned earlier, for cache entries to be edge cacheable, they should avoid using server-side values, such as data from the user's HTTP session, as part of their cache identifiers.

Caching at the Edge

Proxy servers, such as the one included with WebSphere Application Server V5 Network Deployment Edge components, can cache content before the Web server, removing even more network hops.

In the simplest case, the proxy server can cache just static files. Proxy servers are typically dedicated to serving content from backend servers, such as Web servers, and rarely serve their own content. A caching proxy can cache static content from these backend servers. The proxy server included with WebSphere Application Server V5 Network Deployment can cache both static and dynamic content. To see the effect of caching at the proxy server tier, refer to Figure 8, an updated version of the network diagram we have been using. You can see how this would have a profound impact on response time, performance, and resource utilization when static and dynamic requests are served from caches this far out on the edge. Figure 8 shows a reverse proxy configuration in the DMZ. For even more efficiency, some companies implement a forward proxy configuration, with proxy servers placed at remote offices to cut down on Internet/intranet traffic.

Figure 8. Proxy server serving requests from its cache

There are a few differences between the WebSphere Application Server and plug-in caches and the Caching Proxy cache. The Caching Proxy dynamic cache can only cache full pages, not fragments, and it can be configured for either a memory cache or a disk cache, but not a memory cache with disk overflow, as WebSphere Application Server and the plug-in can. If a disk cache is used, it must be configured on a specially formatted partition on the hard drive. While the disk cache is slower than the in-memory cache, its benefits are that it can be much larger and that it is persistent, surviving proxy server crashes and restarts.

In most cases, these factors tend to weigh in favor of using the plug-in for static and dynamic caching, particularly since it is easier to use and avoids an extra tier of proxy servers. So, unless specific requirements mandate their use, you might not want to use the Edge Components Caching Proxy for caching purposes alone. A good example of when to use the proxy server is when a forward proxy configuration is desired, to position the cache at the front of the network, closer to the users, to save network bandwidth.


Advanced caching topics

Adequately covering the breadth of caching and all advanced topics would require a dedicated book on the subject, but we did want to draw your attention to two items of importance.

First, while all of our examples have focused on caching using configuration options, it is also possible to have your developers write code for dynamic cache hooks that can make caching, cache ID, and invalidation decisions "on the fly". For example, certain entries may only be cached at certain times of the day, or only after a certain number of repeated hits, and perhaps invalidated more often as resources become scarcer. This is covered in more detail in the WebSphere Application Server V5 JavaDoc; see the JavaDoc for packages com.ibm.websphere.cache and com.ibm.websphere.servlet.cache.
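As a heavily hedged illustration of the idea, a custom ID generator might decide at run time whether an invocation is cacheable by returning a cache ID or null. The interface shape shown here (initialize() and getId() on com.ibm.websphere.servlet.cache.IdGenerator) should be verified against the JavaDoc cited above, and the off-peak rule is invented for illustration:

import java.util.Calendar;
import com.ibm.websphere.servlet.cache.CacheConfig;
import com.ibm.websphere.servlet.cache.IdGenerator;
import com.ibm.websphere.servlet.cache.ServletCacheRequest;

// Hedged sketch of a run-time caching decision; verify the interface shape
// against the com.ibm.websphere.servlet.cache JavaDoc for your release.
public class OffPeakIdGenerator implements IdGenerator {
    public void initialize(CacheConfig cc) { }

    // Returning null tells the cache engine not to cache this invocation;
    // here we cache only during a hypothetical off-peak window before 6 AM.
    public String getId(ServletCacheRequest request) {
        if (Calendar.getInstance().get(Calendar.HOUR_OF_DAY) >= 6) {
            return null;
        }
        return request.getRequestURI() + ":zip=" + request.getParameter("zip");
    }
}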

Second, when trying to cache content, it is important to know that HTML pages often carry header tags intended to thwart caching. Thus, you may think you are caching content when in fact you are not, because the HTML says the content should not be cached. Caching engines such as those in the WebSphere Application Server V5 Network Deployment Caching Proxy and IBM HTTP Server have aggressive caching features that can be tuned to ignore these types of HTML hints. As with any caching configuration, you should always consult with your development team to ensure that you are not causing undesirable behavior. As with invalidation, you must know the application to understand when cached data becomes stale; this can be the hardest part of caching. Refer to the Information Center for both products for more details on configuring these options.

Data Replication Service

Another feature shown in Figure 5 is the ability to use cache replication. This allows multiple application servers in a cluster to share cache entries between themselves, and to communicate invalidation messages related to those entries. This feature relies on the WebSphere Application Server Data Replication Service (DRS). DRS is also used to dynamically replicate a user's session information between multiple application servers. The replicated cache can be configured as a shared, central cache for others to pull from, or a cache that is replicated to all servers. Sample 6 illustrates the <sharing-policy> tag that specifies whether a cache entry should be excluded from cache sharing, or how it should be shared if it is included. There are options to specify whether the entry should be pushed out to other caches automatically, or pulled on demand. This is an advanced configuration, requiring some understanding of configuring DRS and the creation of replication domains, which is discussed in the WebSphere Application Server Information Center.
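For reference, the sharing policy is declared per cache entry in cachespec.xml; the documented values are not-shared, shared-push, shared-pull, and shared-push-pull. For example:

<!-- Push this entry's ID and content to all peers in the replication domain -->
<sharing-policy>shared-push</sharing-policy>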

Security Note: DRS communication is insecure by default. This yields the maximum performance, but if you are concerned about others seeing or altering the cache replication between servers, you should enable DES encryption.


Troubleshooting caching problems

Several tools are useful for troubleshooting caching problems. At the application server, detailed traces can be obtained by using the powerful WebSphere Application Server trace facility. This can be enabled by selecting any of the cache groups under the trace configuration page, or by using the generic trace string com.ibm.ws.cache=all=enabled. In analyzing a trace file, one can view the decision process for placing and serving cache entries, and often the reasons that they are not behaving as expected. Remember that the trace facility also exists at the plug-in so tracing can be done there as well. The cache monitor application mentioned earlier in this chapter is also quite useful. Drilling down into the links provided will show details on cache identifiers and other valuable information. If you are not seeing a servlet cached, make sure that all aliases for that servlet have been configured for caching. In general, anything that can be served up from the browser address bar can be served up by the servlet/JSP cache.

In troubleshooting static file caching, the log files are quite useful. IBM HTTP Server and the WebSphere Application Server V5 Network Deployment Caching Proxy have separate log files that show cache hits, as well as the standard access logs that show cache "misses". Remember that if you do not see entries in these logs, your request may well have been served right from the browser's temporary Internet files. For this reason, it is often useful to test caching from different workstations, and to delete temporary Internet files between requests. The administrative console for the WebSphere Application Server V5 Network Deployment Caching Proxy has a proxy access page where entries returned from cache are shown in blue, but it shows only the last few dozen requests; the log must be viewed for a complete list.

Remember, with any caching it may take a few "hits" to load the cache -- do not expect to see the entry there on the first request.


Conclusion

In this article, we covered the many caching options available with WebSphere Application Server. With this information in hand, Figure 9, which shows the WebSphere Application Server caching architecture, should now be both useful and interesting.

Figure 9. WebSphere Application Server dynamic caching architecture

Caching is an advanced topic, well worth further investigation considering the tremendous performance improvements that are possible. Further information can be found in the WebSphere Application Server Information Center.

Resources
