There are two kinds of workload balancing (WLB):
1) Connection based WLB:
In summary, this routes new connections to the servers with the lowest load.
A server list is maintained and updated regularly. Each active member returns its load information (hostname, port number, CPU load, and memory load) to a coordinating member, which constructs the server list and sends it to the other members.
Each server's weight is calculated by an algorithm from that server's load information and the total number of servers.
A higher weight means the machine has a lower workload, so more work should be sent to it.
The % of workload being handled by a server is approximated as the number of connections that server is currently serving divided by the total number the entire cluster is serving.
% of workload to be sent to member = this member's weight / (sum of all members' weights).
New connections are sent to servers where the "% of workload being handled" is under the "% of workload to be sent to member".
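As a sketch, the routing rule above can be expressed in a few lines of Python. The member names, weights, and connection counts here are invented for illustration; this is not the actual Db2 routing algorithm, just the arithmetic the text describes:

```python
# Illustrative sketch of connection-based WLB routing (hypothetical data,
# not the real Db2 pureScale implementation).

def pick_member(members):
    """Route a new connection to a member whose share of current
    connections is below its weight-derived target share."""
    total_weight = sum(m["weight"] for m in members)
    total_conns = sum(m["connections"] for m in members)
    for m in members:
        target_share = m["weight"] / total_weight            # % of workload to send here
        current_share = (m["connections"] / total_conns      # % being handled now
                         if total_conns else 0.0)
        if current_share < target_share:
            return m["host"]
    # Every member is at or above its target share; fall back to the
    # highest-weight (least-loaded) member.
    return max(members, key=lambda m: m["weight"])["host"]

server_list = [
    {"host": "member0", "weight": 60, "connections": 50},
    {"host": "member1", "weight": 40, "connections": 10},
]
print(pick_member(server_list))
```

Here member1 is handling only about 17% of the cluster's connections against a roughly 40% target share, so the new connection is routed to it.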
2) Transaction based WLB:
This works in a similar way to the above and also uses the server list.
Because we are no longer dealing purely with new connections, existing connections need to be actively rerouted to different members to rebalance the workload.
This works as follows.
A transport pool is maintained on each member, and each connection can be moved from one member to another (by disassociating it from a transport on the first server and associating it with a transport on the second server).
After every 8 transactions or 2 seconds, whichever comes first, each server attempts to rebalance workloads by moving logical connections as necessary.
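A minimal simulation of that rebalancing step, assuming a simple "move one connection from the most over-target member to the most under-target member" policy (illustrative only; Db2's internal policy is not documented here):

```python
# Illustrative sketch (not Db2 internals): logical connections are moved
# between members' transport pools, with a rebalance check triggered after
# every 8 transactions or 2 seconds, whichever comes first.
import time

TXN_TRIGGER = 8
TIME_TRIGGER = 2.0  # seconds

class Member:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight
        self.transport_pool = []       # logical connections associated here
        self.txns_since_check = 0
        self.last_check = time.monotonic()

    def should_rebalance(self):
        return (self.txns_since_check >= TXN_TRIGGER
                or time.monotonic() - self.last_check >= TIME_TRIGGER)

def rebalance(members):
    """Move one logical connection from the member furthest above its
    weight-derived target to the member furthest below it."""
    total_weight = sum(m.weight for m in members)
    total_conns = sum(len(m.transport_pool) for m in members)

    def excess(m):  # connections above this member's target share
        return len(m.transport_pool) - total_conns * m.weight / total_weight

    donor = max(members, key=excess)
    receiver = min(members, key=excess)
    if excess(donor) > 0 and donor is not receiver:
        conn = donor.transport_pool.pop()      # disassociate from donor transport
        receiver.transport_pool.append(conn)   # associate with receiver transport
```

For example, with two equal-weight members holding 4 and 0 connections, each call to rebalance moves one logical connection toward the idle member.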
WLB for pureScale with J2EE applications is configured in the J2EE driver configuration file.
db2pd -serverlist shows the server list currently cached on this member (note that priority and weight are synonymous).
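For example, with the IBM Data Server Driver for JDBC and SQLJ, transaction-level WLB is commonly enabled through connection properties along the following lines. Treat this as a sketch: verify the property names against your driver level, and the value shown for the transport limit is purely illustrative:

```properties
# Enable transaction-level workload balancing
enableSysplexWLB=true
# Illustrative cap on the number of transport objects in the driver's pool
maxTransportObjects=80
```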
pureScale on Linux
By CiaranDeB
As with all of the pureSystem family the use of patterns to automate repeatable tasks is a feature of pureData systems.
With pureData there are two types of patterns available:
1) Topology Patterns. Topology patterns install all of the software required to run pureScale and create the pureScale instance on a number of compute nodes. You can deploy a topology pattern of 2, 4, or 6 nodes. The 2-node topology pattern consists of an instance of 2 members with a cluster caching facility (CF) co-located on each of the two compute nodes. This is the smallest practical HA installation of Db2. The 4-node topology pattern consists of 2 members and 2 CFs on separate compute nodes. The 6-node topology pattern is a 4-member cluster with 2 CFs on separate compute nodes. Of course, the more nodes you deploy to, the higher the performance and resilience.
2) Database Patterns. A database pattern is essentially a method of storing and reusing database configuration settings. Database patterns are used to create and configure a database within the instance topology. pureData systems come with an IBM transaction processing database pattern. You can also "roll your own".