It seems that, in our current configuration, the WebSphere plugin does not reliably stop sending requests to a Cluster member that has been taken down: sometimes it takes an inordinate amount of time to do so, and sometimes it starts sending requests again before a restarting member is fully operational.
A quick workaround is to manually edit the plugin's configuration file on each web server, commenting out the unwanted servers. The plugin will automatically detect and apply the changes within a minute or so and will stop routing requests to those servers until you manually uncomment them.
Plugin Configuration File Location
On our AIX systems, the plugin configuration file is in a location like:
(If you're ever uncertain you're looking in the correct location, look for the WebS
Plugin Configuration File Contents
Within that file are many configuration items, most of which should never be manually edited; the ones we care about for this purpose are the <PrimaryServers> elements within each <ServerCluster> element.
The ServerCluster's "Name" attribute will tell you which element to edit.
The PrimaryServers element in that ServerCluster then lists each of the individual <Server> Cluster members, which are also defined within that same ServerCluster.
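As a sketch of that structure (the cluster name, server names, hostnames, and ports here are hypothetical examples, not values from our actual configuration), the relevant portion of plugin-cfg.xml looks roughly like this:

```xml
<!-- Hypothetical excerpt; names and ports are examples only -->
<ServerCluster Name="TestAppCluster" LoadBalance="Round Robin" RetryInterval="60">
   <Server Name="serverA_TestApp-A">
      <Transport Hostname="serverA" Port="9080" Protocol="http"/>
   </Server>
   <Server Name="serverB_TestApp-B">
      <Transport Hostname="serverB" Port="9080" Protocol="http"/>
   </Server>
   <PrimaryServers>
      <Server Name="serverA_TestApp-A"/>
      <Server Name="serverB_TestApp-B"/>
   </PrimaryServers>
</ServerCluster>
```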
Removing a specific server from Plugin dispatching

The only change we need to make is to comment out the specific Server that should stop receiving requests. So, to disable the TestApp-B server on serverB:
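A sketch of that edit, assuming a hypothetical member entry named serverB_TestApp-B: wrap its <Server> reference inside <PrimaryServers> in an XML comment, leaving the rest of the cluster definition untouched.

```xml
<PrimaryServers>
   <Server Name="serverA_TestApp-A"/>
   <!-- <Server Name="serverB_TestApp-B"/> -->
</PrimaryServers>
```

Once the plugin reloads the file, no further requests are dispatched to that member; removing the comment markers restores it.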
DougBreaux
After attending a webcast on the WebSphere Application Server 6.1 Plug-in, I thought I'd summarize the content I found helpful.
The plugin is a native module (DLL or .so) that is installed into the Web Server and is offered each HTTP request to determine whether WAS should handle it. It makes this determination based on the contents of the plugin-cfg.xml file which is normally auto-generated by WAS.
I say, "normally", because it can be manually edited, but doesn't need to be for most operating scenarios. Note that many of these customizations can now be configured within the Administrative Console as well, which significantly reduces the need for manual file editing. This configuration is located under Servers -> Web Servers -> <Server Instance> -> Plug-in Properties. (Visiting this location in the Console is also a useful way to locate the active plugin configuration file.)
The configuration file specifies virtual hosts (IP names/addresses and ports) and URL paths which are serviced by WAS applications, enabling the plugin to determine which requests it can route to WAS and which it must leave for the HTTP server to handle.
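For illustration (all names here are hypothetical), virtual hosts are declared in a VirtualHostGroup, URL paths in a UriGroup, and a Route element ties both to a ServerCluster; a request must match a Route before the plugin will forward it to WAS:

```xml
<VirtualHostGroup Name="default_host">
   <VirtualHost Name="*:80"/>
   <VirtualHost Name="*:443"/>
</VirtualHostGroup>
<UriGroup Name="default_host_TestAppCluster_URIs">
   <Uri Name="/testapp/*"/>
</UriGroup>
<Route ServerCluster="TestAppCluster"
       UriGroup="default_host_TestAppCluster_URIs"
       VirtualHostGroup="default_host"/>
```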
The file also contains values governing performance, maintenance, and failover, such as logging configuration and network timeouts.
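As an example (the file path and timeout values here are hypothetical), logging is controlled by the Log element, while failover timing is governed by attributes such as RetryInterval on the ServerCluster and ConnectTimeout/ServerIOTimeout on each Server:

```xml
<!-- Hypothetical values shown; tune these for your environment -->
<Log LogLevel="Error" Name="/tmp/http_plugin.log"/>
<ServerCluster Name="TestAppCluster" RetryInterval="60">
   <Server Name="serverB_TestApp-B" ConnectTimeout="5" ServerIOTimeout="60">
      <Transport Hostname="serverB" Port="9080" Protocol="http"/>
   </Server>
</ServerCluster>
```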
Finally, it contains the "transport" information used to forward matched requests to the appropriate Web Containers in WAS. That is, the IP name and port combinations of the different Cluster members' Web Container "Transport Chains".
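A sketch of those transports (hostname, ports, and key file paths are hypothetical): each Server typically lists an http and an https Transport, the latter carrying the key database properties the plugin needs for SSL:

```xml
<Server Name="serverB_TestApp-B">
   <Transport Hostname="serverB" Port="9080" Protocol="http"/>
   <Transport Hostname="serverB" Port="9443" Protocol="https">
      <Property Name="keyring" Value="/tmp/plugin-key.kdb"/>
      <Property Name="stashfile" Value="/tmp/plugin-key.sth"/>
   </Transport>
</Server>
```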
"Affinity" is the notion of sending a user to the same Cluster member (application server JVM) for each subsequent request after the initial one, if the application uses HTTP Sessions. This behavior is required by the JEE specification, and in WebSphere is the responsibility of the Web Server plugin. WebSphere maintains this affinity by appending a "Clone ID" or "Partition ID" to the "Session ID", which the plugin then compares against its configuration to correctly route subsequent requests.
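As a sketch of the configuration side (the CloneID value is hypothetical), each Server element carries the Clone ID that the plugin matches against the suffix of the incoming Session ID:

```xml
<!-- CloneID value is a hypothetical example -->
<Server Name="serverB_TestApp-B" CloneID="13vabcde">
   <Transport Hostname="serverB" Port="9080" Protocol="http"/>
</Server>
```

A session cookie whose value ends in ":13vabcde" would then be routed back to this member on every subsequent request.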
For further detail on the Session ID mechanics, see this post.
New users (those without an existing Session) are distributed to Cluster members in either a weighted round-robin fashion or a random fashion. Round-robin is the default: the plugin decrements a cluster member's "weight" each time it sends that member a new request, and once every member's weight has dropped below zero, all weights are reset to their starting values.
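The weighting sketched above comes from each Server's LoadBalanceWeight attribute (values here are hypothetical): with the configuration below, serverA would receive roughly twice as many new sessions as serverB before the weights reset.

```xml
<!-- Hypothetical weights; LoadBalance may be "Round Robin" or "Random" -->
<ServerCluster Name="TestAppCluster" LoadBalance="Round Robin">
   <Server Name="serverA_TestApp-A" LoadBalanceWeight="4">
      <Transport Hostname="serverA" Port="9080" Protocol="http"/>
   </Server>
   <Server Name="serverB_TestApp-B" LoadBalanceWeight="2">
      <Transport Hostname="serverB" Port="9080" Protocol="http"/>
   </Server>
</ServerCluster>
```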
Tips or Items of Note
Server version: IBM_