Using IBM HTTP Server 6.1 and later diagnostic capabilities with WebSphere
One useful diagnostic capability is Apache's mod_status module, which is included in IHS; when enabled, it provides information about the individual threads running under the httpd processes.
The information is provided as a simple web page via a URL (that of course can be protected by any of the various Apache mechanisms).
With the default httpd.conf file, only the following changes are required:
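In IHS this typically means uncommenting (or adding) the mod_status LoadModule line and a protected /server-status Location block, something like the following sketch; the Allow address here is a placeholder you would replace with your own administrative hosts:

LoadModule status_module modules/mod_status.so
ExtendedStatus On
<Location /server-status>
   SetHandler server-status
   Order deny,allow
   Deny from all
   Allow from 127.0.0.1
</Location>

The status page is then available at /server-status on that web server, and ExtendedStatus On adds per-request detail (client, URL, elapsed time) for each thread.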
It seems that in our current configuration the WebSphere plugin does not reliably stop sending requests to a cluster member when it has been taken down. Sometimes it takes an inordinate amount of time to do so, and sometimes it starts sending requests again before a restarting member is fully operational.
A quick workaround is to manually edit the plugin's configuration file on each web server to comment out the unwanted servers. The plugin automatically detects and applies the change within a minute or so and stops routing requests to those servers until you manually uncomment them.
Plugin Configuration File Location
On our AIX systems, the plugin configuration file is in a location like:
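/usr/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml

(The exact path is illustrative; webserver1 here stands in for the web server definition name, which will differ in your installation.)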
(If you're ever uncertain you're looking in the correct location, look for the WebSpherePluginConfig directive in the web server's httpd.conf file; it points to the plugin configuration file actually in use.)
Plugin Configuration File Contents
Within that file are many configuration items; most should never be manually edited, but the ones we care about for this purpose are the <PrimaryServers> elements within each <ServerCluster> element.
The ServerCluster's "Name" attribute will tell you which element to edit.
The PrimaryServers element in that ServerCluster then lists each of the individual <Server> cluster members, which are also defined within the ServerCluster, as in the abbreviated sketch below.
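Abbreviated, and with illustrative cluster, server, and CloneID values standing in for real ones, that structure looks something like:

<ServerCluster LoadBalance="Round Robin" Name="TestAppCluster" ...>
   <Server CloneID="138888kcd" ConnectTimeout="10" Name="serverA_TestApp-A" ...>
      <Transport Hostname="serverA" Port="9080" Protocol="http"/>
   </Server>
   <Server CloneID="1388af0gr" ConnectTimeout="10" Name="serverB_TestApp-B" ...>
      <Transport Hostname="serverB" Port="9080" Protocol="http"/>
   </Server>
   <PrimaryServers>
      <Server Name="serverA_TestApp-A"/>
      <Server Name="serverB_TestApp-B"/>
   </PrimaryServers>
</ServerCluster>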
Removing a specific server from Plugin dispatching
The only change we have to make is to comment out the specific Server that we want to stop receiving requests. So, to disable the TestApp-B server on serverB:
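Using the illustrative names from the sketch above:

<PrimaryServers>
   <Server Name="serverA_TestApp-A"/>
   <!--
   <Server Name="serverB_TestApp-B"/>
   -->
</PrimaryServers>

Within the plugin's RefreshInterval (60 seconds by default), the change is picked up and new requests stop being dispatched to that member.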
After attending a webcast on the WebSphere Application Server 6.1 Plug-in, I thought I'd summarize the content I found helpful.
The plugin is a native module (DLL or .so) that is installed into the Web Server and is offered each HTTP request to determine whether WAS should handle it. It makes this determination based on the contents of the plugin-cfg.xml file which is normally auto-generated by WAS.
I say, "normally", because it can be manually edited, but doesn't need to be for most operating scenarios. Note that many of these customizations can now be configured within the Administrative Console as well, which significantly reduces the need for manual file editing. This configuration is located under Servers -> Web Servers -> <Server Instance> -> Plug-in Properties. (Visiting this location in the Console is also a useful way to to locate the active plugin configuration file.)
The configuration file specifies virtual hosts (IP names/addresses and ports) and URL paths which are serviced by WAS applications, enabling the plugin to determine which requests it can route to WAS and which it must leave for the HTTP server to handle.
The file also contains values governing performance, maintenance, and failover, such as logging configuration and network timeouts.
Finally, it contains the "transport" information used to forward matched requests to the appropriate Web Containers in WAS. That is, the IP name and port combinations of the different Cluster members' Web Container "Transport Chains".
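To sketch how those pieces fit together, reusing the illustrative names from earlier: a Route element maps a VirtualHostGroup and a UriGroup to a ServerCluster, whose Server elements carry the Transport host and port details:

<VirtualHostGroup Name="default_host">
   <VirtualHost Name="*:80"/>
</VirtualHostGroup>
<UriGroup Name="default_host_TestAppCluster_URIs">
   <Uri Name="/testapp/*"/>
</UriGroup>
<Route ServerCluster="TestAppCluster" UriGroup="default_host_TestAppCluster_URIs" VirtualHostGroup="default_host"/>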
"Affinity" is the notion of sending a user to the same Cluster member (application server JVM) for each subsequent request after the initial one, if the application uses HTTP Sessions. This behavior is required by the JEE specification, and in WebSphere is the responsibility of the Web Server plugin. WebSphere maintains this affinity by appending a "Clone ID" or "Partition ID" to the "Session ID", which the plugin then compares against its configuration to correctly route subsequent requests.
For further detail on the Session ID mechanics, see this post.
New users, those without an existing Session, are distributed to cluster members in either a weighted round-robin fashion or a random fashion. Round-robin is the default: the plug-in decrements a cluster member's "weight" each time it sends that member a new request, and once every member's weight is used up, all weights are reset to their starting values. For example, with members A and B weighted 2 and 1, each cycle sends roughly two new requests to A for every one to B.
Tips or Items of Note
While investigating some intermittent problems in one of our web applications, we observed some JSESSIONID cookie behavior that we couldn't explain, so I dug into the cookie's mechanics. I'll attempt to summarize what I learned in this article.
JSESSIONID Cookie Format
In a clustered environment, the JSESSIONID cookie is composed of the core Session ID and a few other components. Here's an example:
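JSESSIONID=0001Ms2PGYrCiLKu5MojLKK9rbQ:138888kcd

(The value is illustrative: 0001 is the Cache ID, the long middle portion is the core Session ID, and 138888kcd, after the colon, is the Clone ID discussed below.)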
It's still unclear to me what the different values mean here, but searching our Proxy logs indicates that for the vast majority of our Sessions, the Cache ID is 0001. For instance, a search of one day's log around 17:00 indicates the following number of cookies containing each particular Cache ID:
For a particular Session ID, the Cache ID can definitely change mid-Session without any other changes, in particular without the Clone/Partition ID changing. This does not indicate a switch to another cluster member, but I don't know exactly what it does indicate. Based on the relative loads of our various systems, it seems possible that the Cache ID only changes under heavy load.
A Partition ID is appended to the cookie if memory-to-memory replication in peer-to-peer mode is utilized for Distributed Session management. Otherwise, a Clone ID is appended.
This will match one of the CloneID attributes in the Server elements within the web server's plugin-cfg.xml file. For instance, 138888kcd in:

<Server CloneID="138888kcd" ConnectTimeout="10" ExtendedHandshake="false" ... >
If you use memory-to-memory Session replication, your cookies will contain Partition IDs rather than Clone IDs.
Partition IDs are similar in function to Clone IDs, but best I can tell there is no direct way to determine which values correspond to which cluster members. They are internally mapped by the WebSphere HAManager to specific Clone IDs, and that mapping is exchanged with the web server plug-in so that it can maintain Session affinity and find additional cluster members for failover. (The exchange takes place in private headers on each response from WAS back to the plug-in, and those headers are removed before the response is sent back to the client.)
Session Affinity and Failover
The Clone/Partition ID corresponds to whichever cluster member creates the Session, and the plug-in is responsible for sending that Session's subsequent requests to the same cluster member as long as it is available. From the Scalability Redbook:
Since the Servlet 2.3 specification, as implemented by WebSphere Application Server V5.0 and higher, only a single cluster member may control/access a given Session at a time. After a Session has been created, all following requests need to go to the same application server that created the Session. This Session affinity is provided by the plug-in.
If on a subsequent request the specified cluster member is unavailable, the plug-in will choose another cluster member and attempt to connect to that. If Distributed Sessions are configured, via database persistence or memory-to-memory replication, the Session will be resumed in-progress on that new member. If not, a new Session will be created and the user's progress will be lost.
If a new cluster member is able to resume the existing Session, it will append its own Clone/Partition ID to the existing JSESSIONID cookie. For instance:
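Continuing the illustrative value from above, with a made-up second Clone ID appended after another colon separator:

JSESSIONID=0001Ms2PGYrCiLKu5MojLKK9rbQ:138888kcd:1388af0gr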
Now the plug-in knows that two different cluster members could potentially service this Session. If the original member becomes available again, the Session will switch back to it.
Finally, note that according to the System Management Redbook:
WebSphere provides session affinity on a best-effort basis. There are narrow windows where session affinity fails. These windows are:
Referenced articles and Redbooks