Tuning and Configuring in a Clustered Installation
You can tune and configure a cluster to improve performance through the adapters, threads, and nodes. Refer to the Performance Management Guide for tuning engine, database, and other settings; this section describes performance tuning related to clustering:
- The location and configuration of adapters.
- The number of threads allocated for each queue on each node.
- The number of steps for a business process to execute before being rescheduled (and possibly distributed to another node).
The location and number of adapters are important because using adapters that cannot be clustered forces all activity for a particular adapter through one node. Failing to use an effective network load-balancing technology in front of HTTP, FTP, and other network-oriented adapters can also force activity to one node, because activity may be concentrated on a single IP address.
The number of threads allocated to each queue, used in combination with the execution cycle, determines what proportion of that queue's workload each node handles.
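As a rough illustration of that proportionality, the following sketch (node names and thread counts are hypothetical, not from the product) computes each node's approximate share of one queue's workload from its thread allocation:

```python
# Hypothetical illustration: a node's share of a queue's workload is roughly
# proportional to the number of threads that node allocates to the queue.
threads_per_node = {"node1": 8, "node2": 4, "node3": 4}  # threads for one queue

total = sum(threads_per_node.values())
share = {node: n / total for node, n in threads_per_node.items()}

print(share)  # node1 handles ~50% of the queue's work; node2 and node3 ~25% each
```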
An effective way to visualize distribution is to imagine that each queue on each node is a tank and the business processes in the queue are a liquid partially filling the tanks, as shown in the following graphic:
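The tank analogy can also be sketched as a toy model: each node's queue is a tank holding some number of business processes, and load balancing "levels the liquid" toward the average. The node names and counts below are illustrative only.

```python
# Toy model of the tank analogy: each node's queue is a tank, and the queued
# business processes are the liquid filling it.
tanks = {"node1": 120, "node2": 40}  # queued business processes per node

def level_out(tanks):
    """Move work between tanks until each sits at roughly the average level."""
    avg = sum(tanks.values()) / len(tanks)
    return {node: round(avg) for node in tanks}

print(level_out(tanks))  # both tanks settle near the average level of 80
```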
- The percentage load difference that must exist between two nodes before load balancing occurs. At 100 percent, no load balancing occurs; at 1 percent, the load tends to move back and forth between the nodes as temporary imbalances arise. The best value for this parameter is determined by benchmarking with your workload, but values between 20 and 40 percent work well under many conditions. Other characteristics of your workload have a much more significant impact on your scalability than tuning this parameter.
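The threshold rule can be sketched as follows; the function name and the default of 30 percent are assumptions for illustration, not product settings:

```python
# Illustrative sketch of the percentage-difference rule: balance only when the
# load gap between two nodes exceeds the configured threshold.
def should_balance(busy_a, busy_b, threshold_pct=30):
    """Return True when the percentage load difference exceeds the threshold."""
    heavier, lighter = max(busy_a, busy_b), min(busy_a, busy_b)
    if heavier == 0:
        return False  # both nodes idle; nothing to move
    diff_pct = (heavier - lighter) / heavier * 100
    return diff_pct > threshold_pct

print(should_balance(100, 60))  # 40% gap exceeds the 30% threshold -> True
print(should_balance(100, 90))  # 10% gap -> False
```

Note how the boundary values behave: a threshold of 100 can never be exceeded (so no balancing occurs), while a threshold of 1 triggers on almost any imbalance.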
- Determines whether business processes waiting in the queue are rescheduled if they meet certain criteria, giving them another opportunity to be distributed.
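A minimal sketch of such a rescheduling check appears below. The criterion (wait time exceeding a cutoff) and the field names are assumptions for illustration; the product's actual criteria are not stated here.

```python
# Hedged sketch: processes that have waited in the queue longer than a cutoff
# are returned to the scheduler for another chance at distribution.
def reschedule_candidates(queue, max_wait_seconds=60, now=0):
    """Return queued processes that have waited long enough to be rescheduled."""
    return [bp for bp in queue if now - bp["enqueued_at"] > max_wait_seconds]

queue = [
    {"id": "bp-1", "enqueued_at": 0},   # has waited 120s at now=120
    {"id": "bp-2", "enqueued_at": 90},  # has waited only 30s
]
print(reschedule_candidates(queue, now=120))  # only bp-1 qualifies
```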
The length of the execution cycle (which controls the turnover rate), the thread configuration, and the particular services and volume of data involved all affect how smoothly work is distributed. For some workloads, scaling is almost linear: doubling the number of nodes approximately doubles throughput. For other workloads there is little improvement, especially if a single non-clusterable adapter is the bottleneck.
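A back-of-the-envelope model of this bottleneck effect is Amdahl's law (an analogy introduced here, not named in the source): only the distributable fraction of the workload speeds up as nodes are added, so work pinned to one non-clusterable adapter caps the gain.

```python
# Amdahl-style sketch: serial_fraction is the share of work that must run on a
# single node (for example, through a non-clusterable adapter).
def speedup(nodes, serial_fraction):
    """Ideal speedup when serial_fraction of the work cannot be distributed."""
    return 1 / (serial_fraction + (1 - serial_fraction) / nodes)

print(round(speedup(2, 0.0), 2))  # fully distributable: 2.0x on two nodes
print(round(speedup(2, 0.5), 2))  # half the work on one adapter: only 1.33x
```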