Performance tuning WebSphere Application Server 7 on AIX 6.1
timdp
This week I've been working with AIX 6.1 and WAS 7.0.7 Network Deployment. The cluster topology is as follows:
The aim of the game this week has been throughput. The test involves a 25-step HTTP interaction driven by JMeter with no pauses and very little ramp-up. It's not a "real world" test at all; the goal was to identify the optimum throughput point for the application, beyond which latency becomes unacceptable.
Before we started tuning, nmon was showing the CPUs as busy, but with a higher than expected proportion of system time and noticeable context switching, suggesting that threads were spinning waiting for resources. I must stress that the tuning below is not a replacement for performance analysis of your application to understand where programmatic improvements can be introduced to minimise locking (see page 16 of this for some app dev guidance). Further, as with any performance tuning, applying these settings may deliver a throughput enhancement, but it may equally move the point of contention elsewhere in the stack. In short, I'm not claiming to have a one-size-fits-all set of magic settings that will act as the silver bullet for all your performance problems, but I wanted to share what's worked for me this week, along with why I used the settings that I did. I'll also collate the information that is documented in a variety of places into a single resource for you and cover:
The WAS Info Center has plenty of information on AIX OS setup, but not much that seems pertinent to our issue. There is good information, however, in both the AIX 6.1 Info Center and chapter 4.5 of this Redbook - you should bookmark both. The AIX Info Center has a set of recommended environment variables, and the Redbook builds on that with network settings, which I confess I didn't use this week as we've not been network bound. Below is the list of environment variables from the AIX Info Center plus a few extras from some expert colleagues:
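To give a flavour of what that list contains, the AIX threading variables most commonly recommended for WAS workloads look like the following. Treat this as an illustrative sketch rather than a verbatim copy of the Info Center list - the exact variables and values should be confirmed against the AIX and WAS documentation:

```shell
# Illustrative AIX threading environment for the WAS process.
# Values are typical recommendations, not necessarily the exact
# list from the Info Center - verify before use.
export AIXTHREAD_SCOPE=S            # system-wide (1:1) thread contention scope
export AIXTHREAD_MUTEX_DEBUG=OFF    # disable mutex debug bookkeeping
export AIXTHREAD_COND_DEBUG=OFF     # disable condition-variable debug bookkeeping
export AIXTHREAD_RWLOCK_DEBUG=OFF   # disable read-write lock debug bookkeeping
export SPINLOOPTIME=500             # spin on a busy lock before sleeping
export YIELDLOOPTIME=2              # yield-loop passes before sleeping
```

The spin/yield settings are particularly relevant here: when nmon shows heavy context switching from threads contending for locks, letting a thread spin briefly before it is descheduled can trade a little CPU for a lot less system time.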
IBM HTTP Server configuration
After reading a number of Technotes and support articles, I decided that this Technote was the most useful resource for improving the out-of-the-box configuration of the multi-processing module in IBM HTTP Server. As each IHS server process has its own copy of the WebSphere plugin, under the relentless load we were generating this seemed like a potential area of weakness, whereby different plugin copies could have different views of the availability of the Application Servers, so we settled on the configuration used in example 1 of the Technote, which uses a single server process:
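A single-process worker MPM configuration along the lines of that example looks roughly like this - the values below are sketched from the general shape of the Technote's example 1, so check them against the Technote itself before applying:

```apache
# Single-process worker MPM: one child process handles all
# connections, so every request sees the same in-memory copy of
# the WebSphere plugin's server state.
# Illustrative values - verify against the Technote.
ServerLimit          1
ThreadLimit          2000
StartServers         1
MaxClients           2000
MinSpareThreads      2000
MaxSpareThreads      2000
ThreadsPerChild      2000
MaxRequestsPerChild  0
```

Setting ThreadsPerChild equal to MaxClients is what forces the single process; the `ulimit -s 512` mentioned below keeps the per-thread stack small enough for 2000 threads to fit comfortably in one process.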
If you're running on a UNIX-based platform, you may need to have ulimit -s 512 in the session which starts IHS.
Note that using a single server in this way means that if that process exits unexpectedly you potentially lose 2000 in-flight connections.
Web server plug-in configuration
Combining the wisdom from these two Technotes (Understanding plugin load balancing and Recommended configuration values), I ended up making the following changes to the default plugin configuration:
<Server CloneID="14uddvvpt" ConnectTimeout="5" Exte
<Server CloneID="14udeeloa" ConnectTimeout="5" Exte
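The two `<Server>` lines above are truncated. Purely for illustration, a complete `<Server>` element in plugin-cfg.xml with timeout values of the kind those Technotes recommend might look like the following - the CloneID and ConnectTimeout come from the fragment above, while the remaining attribute values, the server name, and the transport details are hypothetical placeholders, not my actual configuration:

```xml
<!-- Hypothetical reconstruction for illustration only: attributes
     beyond CloneID and ConnectTimeout are typical values, and the
     Name/Hostname/Port are placeholders. -->
<Server CloneID="14uddvvpt" ConnectTimeout="5" ExtendedHandshake="false"
        LoadBalanceWeight="2" MaxConnections="-1" Name="node1_server1"
        ServerIOTimeout="60" WaitForContinue="false">
  <Transport Hostname="node1.example.com" Port="9080" Protocol="http"/>
</Server>
```

A non-zero ConnectTimeout stops a request blocking on the operating-system TCP timeout when an Application Server is down, and ServerIOTimeout bounds how long the plugin waits for a response before marking the server unavailable and retrying elsewhere.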