In each column, The WebSphere® Contrarian answers questions, provides guidance, and otherwise discusses fundamental topics related to the use of WebSphere products, often dispensing field-proven advice that contradicts prevailing wisdom.
Why some old tricks might not work with new versions
At the many IBM® WebSphere Application Server V7 workshops I've delivered to customers over the past several months, performance is always a popular topic; more specifically, how to tune for optimal performance is something a lot of people want to know more about. Given the level of interest in these workshops, as well as some common misperceptions on how and what to tune, I thought it would be a good idea to briefly cover some dos and don’ts for application server tuning.
Don’t touch that dial!
Even if you don't have firsthand memories of this phrase, you've almost certainly heard it before. It was common some years ago on television and radio broadcasts at the start of a commercial break. With the advent of digital tuners we no longer use a dial to tune our TVs or radios (unless you have a very old set), but the phrase is good initial guidance when tuning WebSphere Application Server. The reason is that, as the WebSphere Application Server runtime has improved over time, the default sizes of the various thread and connection pools have decreased, because fewer of these shared resources are required to perform the same amount (or more) of work than in earlier runtime implementations.
One such example of an improvement in the WebSphere Application Server runtime is the Web container thread pool. Prior to WebSphere Application Server V6.x, a one-to-one mapping existed between the number of concurrent client connections and the threads in the Web container thread pool: if 50 clients were accessing an application, 50 threads were needed to service the requests. That changed in WebSphere Application Server V6.0 with the introduction of NIO (non-blocking I/O) and AIO (asynchronous I/O), which enabled connection management to be handled by a small number of threads, and the actual work to be handled by a comparatively small number of threads as well.
During a recent customer engagement, I found that the company had mistakenly believed that "bigger pool sizes equal better performance" and had increased the pools from their default values. After observing the actual thread and connection pool use in the IBM Tivoli® Performance Viewer during a test run, I was able to improve performance by over 30% by actually decreasing the size of the Web container thread pool and the JDBC connection pool. Decreasing the pool sizes meant less overhead for WebSphere Application Server in managing runtime resources: threads and connection objects that weren't needed no longer had to be created and managed, freeing up CPU and memory for processing application requests.
DO tune the JVM!
No, this isn’t another saying from television and radio, but it’s likely the most important tuning you can perform with WebSphere Application Server. Correctly tuning the JVM, which most often is simply sizing the JVM correctly for the workload, typically provides the biggest performance improvement of any single tuning aspect in WebSphere Application Server.
The default heap sizes in WebSphere Application Server are 50 MB for the initial heap and 256 MB for the maximum heap. These values likely aren't optimal for your environment; they're conservative values chosen to avoid problems with memory overcommitment. As a result, you'll likely want to increase the size of the JVM heap (assuming you have adequate physical memory for it).
Correctly sizing the heap requires that you enable verbosegc (verbose garbage collection statistics), run a test, and then analyze the verbosegc output to determine how to adjust the heap size. You can use IBM Pattern Modeling and Analysis Tool for Java™ Garbage Collector (PMAT) to analyze the verbosegc output or the IBM Monitoring and Diagnostic Tools for Java - Garbage Collection and Memory Visualizer, which comes with the IBM Support Assistant.
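As an illustration, the generic JVM arguments for an application server using the IBM JDK might look something like the following. The sizes shown are placeholders to make the example concrete, not recommendations; derive your own from verbosegc analysis:

```
-Xms512m
-Xmx512m
-verbose:gc
-Xverbosegclog:verbosegc.log
```

The `-Xverbosegclog` option (IBM JDK) writes the verbose garbage collection output to a file, which you can then feed to PMAT or the Garbage Collection and Memory Visualizer.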
In terms of sizing the heap:
- The Java heap should be sized so that average heap use runs at 40-70% of the maximum heap.
- Garbage collections should occur no less than 10 seconds apart. If garbage collection is occurring more than once every 10 seconds, or heap utilization is running above 70% of the heap, you'll want to consider increasing the heap size.
- A garbage collection should last no more than 1-2 seconds. If you find that garbage collection is lasting more than 1-2 seconds, then that’s a sign that the heap is too large, or (if you’re using generational garbage collection) that the “nursery” is too large. Likewise, memory use of less than 40% is a sign that the heap is too large.
- Total time spent in garbage collection should be no more than 15% of the duration of the test. Spending more than 15% in garbage collection during a test is usually the result of a combination of a couple of the other conditions.
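To make these rules of thumb concrete, here is a small Python sketch that flags which guideline a set of measurements (as you might read them off PMAT or the Garbage Collection and Memory Visualizer) violates. The function and its messages are my own illustration, not part of any IBM tool; the thresholds are the ones listed above:

```python
def heap_health(avg_used_pct, gc_interval_s, gc_pause_s, pct_time_in_gc):
    """Return a list of warnings based on the heap sizing rules of thumb.

    avg_used_pct   -- average heap occupancy as a percentage of max heap
    gc_interval_s  -- average seconds between garbage collections
    gc_pause_s     -- average duration of a garbage collection, in seconds
    pct_time_in_gc -- percentage of the test duration spent in GC
    """
    warnings = []
    if not 40 <= avg_used_pct <= 70:
        warnings.append("heap occupancy outside 40-70%: resize the heap")
    if gc_interval_s < 10:
        warnings.append("GC more often than every 10s: consider a larger heap")
    if gc_pause_s > 2:
        warnings.append("GC pauses over 2s: heap (or nursery) may be too large")
    if pct_time_in_gc > 15:
        warnings.append("more than 15% of the test spent in GC")
    return warnings
```

For example, a run averaging 55% occupancy, a GC every 30 seconds, half-second pauses, and 5% of time in GC passes all four checks and returns an empty list.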
Speaking of testing, your tests should run for at least 10-15 minutes so that the JVM can optimize the bytecode and the runtime can stabilize. Tests shorter than this tend to have results skewed by the impact of container startup and thread optimizations.
Last on the subject of tuning the JVM, don’t forget to tune the JVMs for your node agent(s) and deployment manager. Since each of these WebSphere Application Server Network Deployment components is a JVM, the same advice on heap sizing based on workload and garbage collection given above for application servers applies here as well. Items that can impact the workload of these JVMs include the number of application servers, the size of the cell, the frequency of configuration changes, and the size of those configuration changes (especially application deployment).
Bottom line: in order to assess the memory needed by a node agent (or deployment manager), you need to analyze the heap use and garbage collection cycles over a representative period of time, which for both of these components should include at least one application deployment. Application deployment, especially of large EAR files, can lead to the creation of lots of objects in the deployment manager and node agent JVMs; a previous column, Options for accelerating application deployment, discusses some alternatives that minimize object creation and thus speed application deployment.
DON'T relax the WebSphere queues without testing
A WebSphere Application Server deployment is a series of queues, and it’s best to only allow into a queue the amount of work for which capacity exists to perform it. This avoids overloads in any component that result in performance degradation, as was the case with the customer I cited above, who had increased the Web container thread pool and connection pool beyond the point where WebSphere Application Server could efficiently work on requests.
Follow this guidance by running tests to determine the throughput curve, and carefully monitor resource usage across all components (the network, CPU on all servers, disk, and so on) in order to determine where bottlenecks exist.
One potential downside of the improvements to the WebSphere Application Server runtime is that while WebSphere Application Server might previously have presented a bottleneck in the queue network, more recent runtime improvements can result in other resources being constrained instead; for example, the CPU on the database server, since WebSphere Application Server now delivers requests to the database server faster than was previously the case.
The key message here is that if you’ve tuned a prior version of WebSphere Application Server, and in doing so you made changes to the default pool sizes, you’ll likely want to revisit those changes and make sure that what improved the performance before doesn’t decrease performance in a newer version of WebSphere Application Server.
DO stay current
The WebSphere Application Server Information Center is constantly updated with information on tuning various aspects of WebSphere Application Server (JVM, threads, connection pools, cache, and so on), as well as operating systems. Take some time to review the recommendations in the Information Center specific to your version of WebSphere Application Server. Also take some time to brush up on performance tuning methodology, which is not only covered in the Information Center, but is also covered in great detail in Performance Analysis for Java Websites, a book that remains the most complete reference on tuning methodology that I know of.
- Information Center: Monitoring performance with Tivoli Performance Viewer
- Information Center: Queue configuration best practices
- Case study: Tuning WebSphere Application Server V7 for performance
- The WebSphere Contrarian: Effectively leveraging virtualization with WebSphere Application Server
- The WebSphere Contrarian: Options for accelerating application deployment
- Best Practices for Large WebSphere Topologies
- Performance Analysis for Java Websites, Stacy Joines, Ruth Willenborg, Ken Hygh, Addison-Wesley, 2002, ISBN 0201844540
- IBM developerWorks WebSphere
Get products and technologies
- alphaWorks: IBM Pattern Modeling and Analysis Tool for Java Garbage Collector (PMAT)
- IBM Support Assistant