Know your WebSphere Application Server options for a large cache implementation

Considering ObjectGrid as an alternative to a 64-bit JVM

Caching large amounts of application data doesn't always mandate the use of a 64-bit JDK in order to leverage 64-bit addressing. The ObjectGrid component of IBM® WebSphere® eXtreme Scale provides a 32-bit JDK alternative that you can use in your existing infrastructure without requiring additional physical memory on your servers. This content is part of the IBM WebSphere Developer Technical Journal.

Tom Alcott, Senior Technical Staff Member, IBM

Tom Alcott is a Senior Technical Staff Member (STSM) for IBM in the United States. He has been a member of the Worldwide WebSphere Technical Sales Support team since its inception in 1998. In this role, he spends most of his time trying to stay one page ahead of customers in the manual. Before he started working with WebSphere, he was a systems engineer for IBM's Transarc Lab supporting TXSeries. His background includes over 20 years of application design and development on both mainframe-based and distributed systems. He has written and presented extensively on a number of WebSphere runtime issues.


April 2009 (First published 23 January 2008)


Introduction

In a Comment lines column I wrote a while back, I cautioned against moving to a 64-bit JVM unless your application either requires a large cache or can take advantage of 64-bit arithmetic. Several companies I have worked with recently are in fact managing applications with a large cache and, having taken this advice to heart, are now going forward with a plan to move from a 32-bit JVM to a 64-bit JVM.

What I didn't mention previously is that there is at least one 32-bit option for dealing with a large cache: the ObjectGrid component of IBM WebSphere eXtreme Scale (formerly WebSphere Extended Deployment Data Grid). Therefore, if you're trying to manage an application that requires a large JVM as the result of a large cache, then before you start moving to a 64-bit JVM, you should consider the advantages and disadvantages that both a 64-bit JVM and ObjectGrid provide, and then decide on the option most appropriate for your environment.

For the purposes of this discussion, I consider any JVM with a maximum heap of more than 2GB to be a "large JVM."


Is a 64-bit JVM for you?

In that column, I stated:

"Keep in mind that those moving from 32-bit to 64-bit often see no performance benefit, and instead simply experience a larger memory footprint; this, because 64-bit addresses are twice the size of 32 bit addresses."

The larger memory footprint means you should very likely plan on adding additional RAM to your server. The exact amount of additional RAM required will depend on how much RAM you're currently using and how much free memory you currently have, but you should not be surprised if you end up having to double the RAM on your server in order to run a 64-bit JVM with the same heap size as your current 32-bit JVM. As an example, one recent client was comfortably running a 32-bit application server JVM with a maximum heap of 768MB on a system with 2GB of RAM, but when they moved to a 64-bit JVM, they ended up going to 4GB of RAM for the same sized application server JVM.

You might be thinking, "How can a JVM with a maximum heap of 768MB result in a process footprint of more than 768MB? That doesn't make sense!" The heap is just one portion of the JVM process; there is also the rest of the runtime (the interpreter, for example), which adds anywhere from 20% to 50% to the maximum heap size in terms of process footprint, depending on operating system, JVM, and heap size. As a result, for this customer, the 32-bit JVM with a 768MB maximum heap, which had a process footprint of roughly 950MB on their hardware and software configuration, required roughly 1.9GB as a 64-bit JVM -- which, with only 2GB of RAM, left no memory for the operating system or the other processes running on the system.

Aside from adding RAM, or at least ensuring that you have sufficient RAM for a 64-bit JVM, you'll also need to spend time tuning the garbage collection (GC) algorithm for the JDK in use. The IBM-developed Java™ 5 (and later) implementation for WebSphere Application Server (that is, the JDK used on Windows®, Linux®, AIX®, iSeries®, and System z) offers two GC memory models: the default is a "flat" memory model, which is what was used in prior (JDK 1.4.x and earlier) IBM-developed Java implementations; the alternative is a generational GC memory model.

For performance reasons, when dealing with a large heap you're going to want to use a generational GC memory model (on Sun™ and HP, the JDK only offers a generational GC memory model), which, on the IBM JDK, is set via the command line argument:

-Xgcpolicy:gencon

The reason that a generational GC memory model is preferred with large JVMs has to do with minimizing the amount of time spent performing GC. In a generational GC memory model, objects are initially created in the young generation space, or "nursery" as it's commonly called, and if they survive several GCs there, they are then moved to the old generation space. Further, there are two types of GC that occur with generational GC:

  • Minor collection takes place only in the young generation; it is typically done through direct copying and as a result is very efficient (quick).
  • Major collection takes place in the old generation and uses the usual mark-and-sweep algorithm.

As you might have guessed, correctly sizing the "nursery" and old generation spaces is the key to minimizing the amount of time spent in either a minor or a major collection, and in the case of a large cache, you want to size the old generation to hold all of the cache as well as the other "long lived" objects. Too small an old generation will cause excessive GC or even out-of-memory conditions. The best way to determine the tenured space size is probably to look at the amount of free heap that exists after each GC when running with the default memory model (% free heap x total heap size); whatever is not free after a collection is a good approximation of your long-lived data, so the old generation needs to be at least that large, with headroom to spare. You should also analyze GC logs to understand how frequently the tenured space gets collected; an optimally tuned generational application will have very infrequent collections in the tenured space.

Young generation sizing is set via -Xmn (-Xmns/-Xmnx) and old generation sizing is set via -Xmo (-Xmos/-Xmox).

A large nursery is typically "good for throughput," while a small nursery is typically "good for low pause times," and good WebSphere Application Server performance (throughput) usually requires a reasonably large nursery. A good starting point is 512MB; from there, move the size up or down to determine the optimal value, measuring throughput or response times and analyzing GC logs to understand the frequency and length of scavenges.
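
To make that concrete, the generic JVM arguments for a large-heap application server might combine the generational policy, explicit nursery sizing, and verbose GC output along the following lines; the sizes shown are illustrative starting points only, not recommendations:

-Xms3072m -Xmx3072m -Xmn512m -Xgcpolicy:gencon -verbose:gc

The -verbose:gc option is included here so that the GC logs needed for the sizing analysis described above are actually produced.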

Even with the memory footprint reduction provided by compressed references in the latest IBM-developed Java 6 SE implementation, you must proceed with caution when considering a move to a 64-bit JVM. Although compressed references help to mitigate the memory overhead of 64-bit versus 32-bit, there is still some overhead associated with 64-bit. This means that 64-bit isn't completely transparent in terms of either the OS memory footprint or performance. With a large and seldom changing cache, the performance advantage of data retrieval from cache can alleviate the performance impact that results from 64-bit overhead.

More information on tuning the JDKs supplied with WebSphere Application Server is available in the WebSphere Application Server Information Center (see Resources).


Is ObjectGrid for you?

If you're in the position of needing large JVMs for your application servers, then ObjectGrid is worth serious consideration. Recall that, for the purposes of this discussion, I consider any JVM with a maximum heap of more than 2GB to be "large." That's not to say that you need a large heap to benefit from ObjectGrid; far from it. Caches of any size help improve performance and throughput, but ObjectGrid's ability to store large amounts of data, and to do so with 32-bit JVMs, makes it an ideal alternative to moving to 64-bit JVMs and taking on the associated memory overhead.

As mentioned earlier, ObjectGrid is part of WebSphere eXtreme Scale. If you're not familiar with it, ObjectGrid is a flexible, grid-enabled, in-memory data store for Java applications, with a number of deployment and programming options. The simplest option is to use it as an in-memory database or cache for a Java Platform, Standard Edition (Java SE) or Java Platform, Enterprise Edition (Java EE) application. Each ObjectGrid is composed of one or more maps, with each map consisting of a set of key-value pairs. Of particular interest for this discussion are two ObjectGrid capabilities:

  • ObjectGrid can be deployed with a client-server deployment model, using either static clusters or dynamic clusters. A dynamic cluster uses a catalog service to maintain a list of the server process JVMs that host ObjectGrid application containers. The catalog server provides several services to an ObjectGrid deployment (location service, placement service, health monitoring, and administration access). Clients can interact with these ObjectGrid servers to access cached content (Figure 1), which enables an application server "client" to offload the majority of the cache to another process while still keeping the most frequently accessed data in a local cache. (A minimal client connection sketch follows this list.)

    Figure 1. ObjectGrid deployed with a client-server deployment model
  • The ability of ObjectGrid to automatically partition a map (or maps) across multiple ObjectGrid JVMs is the key to maintaining large amounts of data: the data is spread across multiple ObjectGrid servers, each of which runs in a 32-bit JVM, without the footprint overhead of a 64-bit JVM. Additionally, ObjectGrid provides for replication of the data, so that no single server becomes a single point of failure. Both partitioning and replication are shown in Figure 2. These capabilities are essential for considering ObjectGrid as an enterprise-capable alternative to a 64-bit JVM.

    Figure 2. ObjectGrid partitioning maps across multiple JVMs
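
To connect to such a grid, a client asks the catalog service for the named grid and then works with it through the same APIs as a local grid. The following is only a rough sketch from memory of the client connection calls; the host name, port, grid name, and class names are illustrative assumptions, so verify the exact signatures against the ObjectGrid API documentation:

import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;

public class GridClient {
    public static ObjectGrid connectToGrid() throws Exception {
        ObjectGridManager manager = ObjectGridManagerFactory.getObjectGridManager();

        // Connect to the catalog service (illustrative endpoint); the nulls mean
        // no client security configuration and no override ObjectGrid descriptor
        ClientClusterContext context = manager.connect("cataloghost:2809", null, null);

        // Obtain a client-side reference to the named grid; the catalog service
        // tells the client which container servers hold which partitions
        return manager.getObjectGrid(context, "CacheGrid");
    }
}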

The ability to create an ObjectGrid cluster, independent of the application client, enables the deployment of an ObjectGrid server tier external to the application clients, as shown in Figure 3. Although the figure depicts multiple application client types, in the context of this discussion the clients would be WebSphere Application Servers, which would no longer each need to maintain a copy of the cache; this, in turn, would negate the need to run the application servers as 64-bit JVMs. Further, the ability to partition the data in the ObjectGrid maps across multiple ObjectGrid server instances means that each of the ObjectGrid servers could be implemented as a 32-bit JVM, with each server potentially storing up to 4GB of a much larger, multi-gigabyte cache; the deployment descriptor sketch after Figure 3 shows how such partitioning and replication are configured. The result is a substantial savings in RAM, since the 32-bit JVMs do not incur the memory overhead of 64-bit addressing, and each application server would no longer need to maintain a copy of the entire cache (though a local cache could still be maintained if desired).

Figure 3. ObjectGrid cluster
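
The number of partitions and replicas for a grid like the one in Figure 3 is specified declaratively in the eXtreme Scale deployment policy descriptor rather than in application code. The fragment below is a sketch from memory of that descriptor's general shape, with made-up grid and map names and arbitrary values; check the element and attribute names against the product documentation before relying on it:

<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <objectgridDeployment objectgridName="CacheGrid">
    <!-- Spread the maps in this map set across 13 partitions, each with one synchronous replica -->
    <mapSet name="cacheMapSet" numberOfPartitions="13" minSyncReplicas="0" maxSyncReplicas="1">
      <map ref="CustomerCache"/>
    </mapSet>
  </objectgridDeployment>
</deploymentPolicy>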

For applications that are currently maintaining large caches, migration to an ObjectGrid-based solution should be straightforward, since the ObjectGrid ObjectMap API is very similar to existing standard map-based APIs, like HashMap. The application flow is typical of any cache implementation pattern. Essentially, when using ObjectGrid the application:

  1. Begins an ObjectGrid session.
  2. Gets an ObjectMap.
  3. Puts data into the ObjectMap.
  4. Commits the session.

The ObjectMap is a map local to the application; in other words, the map exists in the heap space of the same JVM on which the application is running. When the application puts to the ObjectMap, data is placed in the local heap. When the application commits the session, ObjectGrid copies the data into a BackingMap. The BackingMap is part of the same set of ObjectMap-based APIs; it resides in the heap space of an ObjectGrid server (a container server, for example) and is the map that contains the committed data. By committing the data to the BackingMap, the application makes the data accessible to other applications (or to other application threads, if the application is running in a concurrent environment); the sketch that follows this paragraph shows the flow in code. Most important, unlike many other distributed cache solutions, the configuration of an ObjectGrid cluster and its qualities of service (synchronous or asynchronous replication, replica placement by "zones," number of partitions, and so on) are all governed by a couple of XML property files, which can be easily changed as needed (see Resources).
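
Put in code, the flow above might look something like this minimal sketch; it assumes an ObjectGrid reference obtained as in the earlier client sketch (or a local grid created programmatically), and the map and key names are purely illustrative:

import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

public class CacheWriter {
    public static void cacheValue(ObjectGrid grid, String key, Object value) throws Exception {
        // 1. Begin an ObjectGrid session
        Session session = grid.getSession();
        session.begin();

        // 2. Get an ObjectMap: a transactional view of the map, local to this JVM's heap
        ObjectMap cacheMap = session.getMap("CustomerCache");

        // 3. Put data into the ObjectMap; at this point it exists only in the local heap
        cacheMap.put(key, value);

        // 4. Commit the session; ObjectGrid copies the data into the BackingMap,
        //    making it visible to other applications and threads
        session.commit();
    }
}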


It's not just for cache

Maybe your reason for needing a large JVM isn't application caching, but rather a large number of HTTP session objects, or large HTTP session objects, since the HTTP session is often employed as an implicit application cache. ObjectGrid provides a built-in mechanism for offloading HTTP session data via a servlet filter. The servlet filter provides an ObjectGrid HTTP session manager for the applications in a J2EE EAR file, and is added administratively via the addObjectGridFilter script, which adds the appropriate filter declarations and configuration to the Web applications in the form of servlet context initialization parameters -- without requiring any application changes. An application deployment using the ObjectGridFilter for HTTP session data is shown in Figure 4.

Figure 4. Using ObjectGridFilter

Offloading HTTP sessions in this manner, using 32-bit JVMs, provides for a substantial saving in memory, again by avoiding the overhead of 64-bit addressing.


What about DynaCache and Distributed Map?

No, I haven't forgotten about these, but unfortunately neither DynaCache nor Distributed Map provides an alternative to a 64-bit JVM when you need a large application server heap. If you're not familiar with these two functions, DynaCache is a WebSphere Application Server feature that caches the generated output of servlets, JSPs, Web services, and WebSphere Application Server commands, while Distributed Map provides an API for creating a local or remote cache in WebSphere Application Server instances. Since both of these APIs rely on the WebSphere Application Server runtime, the ability to offload the cache into a standalone caching tier is missing. As a result, they don't provide an alternative to running an application server with a 64-bit JVM to hold the cached data. While Distributed Map does provide for disk offload (which could be used to limit the size of the cache to a 32-bit address space), offloading in this manner introduces latency as data is written to and retrieved from disk, which might not provide acceptable performance for some time-sensitive applications. Additionally, as noted, DynaCache is specific to the generated output of the application artifacts, so there is no mechanism for retrieving the underlying application data for additional use by the application.
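
For completeness, here is roughly how an application gets at the default Distributed Map instance; the JNDI name shown is the default binding as I recall it, so confirm it for your release. The point to notice is that the returned cache lives inside the application server's own JVM heap, which is exactly why it cannot serve as an external caching tier:

import javax.naming.InitialContext;

import com.ibm.websphere.cache.DistributedMap;

public class DistributedMapAccess {
    public static Object lookupAndGet(String key) throws Exception {
        // Look up the default DistributedMap instance bound by WebSphere Application Server
        InitialContext ctx = new InitialContext();
        DistributedMap cache = (DistributedMap) ctx.lookup("services/cache/distributedmap");

        // DistributedMap extends java.util.Map, so get/put work as expected --
        // but the entries are held in this application server's heap
        return cache.get(key);
    }
}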


Conclusion

If you are faced with applications that use large amounts of memory, either for cache or for HTTP sessions, ObjectGrid provides an alternative to moving to a 64-bit JVM. Both alternatives have advantages and disadvantages: moving a cache to ObjectGrid will require some application remediation, but this effort should be minimal, and it provides a cost-effective, high-performance alternative to adding RAM and performing the extensive garbage collection tuning that a 64-bit JVM requires. The appropriate choice for your environment will depend on your existing application inventory, as well as on resource and staffing availability within your desired implementation timeframe.


Acknowledgements

Thanks to Larry Clark of IBM Software Services for WebSphere for his comments and suggestions.

Resources
