ASI Engine and Workflow Queuing in a Clustered Installation
In a cluster, performance is affected by the number of threads allocated to each queue on each cluster node. Load balancing depends on the number of threads and on the number of steps a business process executes before it is rescheduled and possibly distributed to another node.
For most servers, the optimum number of threads is a relatively small multiple of the number of CPUs. At any time, though, a heavily loaded application server is processing far more tasks than the number of threads allowed. The additional tasks are kept on a queue. Each task executes a configurable number of steps and is then returned to the queue. There is a separate queue for each priority level (1-9). Each queue is assigned its own resources, which are managed through a scheduling policy (only one policy is active at a time). This enables mixed workloads to be managed.
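The mechanics can be illustrated with a short sketch. This is a simplified model, not the engine's actual code: StepTask, THREADS, and STEPS_PER_CYCLE are hypothetical names, and the engine keeps one such queue per priority level rather than the single queue shown here.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

/** Simplified model of step-sliced queue processing; not the engine's code. */
public class StepSlicedQueue {

    static final int THREADS = 4;          // hypothetical: a small multiple of the CPU count
    static final int STEPS_PER_CYCLE = 10; // hypothetical: steps executed before rescheduling

    /** A business process reduced to a remaining-step counter. */
    static class StepTask {
        final String name;
        int remainingSteps;
        StepTask(String name, int steps) { this.name = name; remainingSteps = steps; }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<StepTask> queue = new LinkedBlockingQueue<>();
        AtomicInteger unfinished = new AtomicInteger();
        for (int i = 0; i < 20; i++) {          // 20 queued processes, 25 steps each
            queue.add(new StepTask("BP-" + i, 25));
            unfinished.incrementAndGet();
        }

        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int t = 0; t < THREADS; t++) {
            pool.submit(() -> {
                try {
                    while (unfinished.get() > 0) {
                        StepTask task = queue.poll(100, TimeUnit.MILLISECONDS);
                        if (task == null) continue;
                        // Run one slice of work, then return the task to the queue.
                        int slice = Math.min(STEPS_PER_CYCLE, task.remainingSteps);
                        task.remainingSteps -= slice;
                        if (task.remainingSteps > 0) {
                            queue.add(task);    // reschedule point
                        } else {
                            unfinished.decrementAndGet();
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("all processes finished");
    }
}
```

The point at which a task is returned to the queue is the reschedule point referred to above; in a cluster, it is also where the work may be picked up by another node.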
The following graphic shows an example of this process. Two scheduling policies are available:

- The Basic Scheduling Policy allows only static control of the number of threads allocated to each queue.
- The Fair Share Scheduling Policy is a model in which each participant (queue) is assigned a share of the available resources. Over time, the policy ensures that each queue receives its assigned share (both policies are sketched after this list).
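As a rough illustration of how the two policies differ, the following sketch pairs a static per-queue thread table (the basic policy's only control) with a deficit-based selector that grants execution slices in proportion to each queue's weight (the fair-share idea). The array names, weights, and selection logic are illustrative assumptions, not the product's implementation.

```java
import java.util.Arrays;

/** Illustrative contrast of the two policies under a simplified model. */
public class SchedulingPolicies {

    // Basic policy: a fixed, statically configured thread count per
    // priority queue (priorities 1-9); nothing adapts at run time.
    static final int[] BASIC_THREADS_PER_QUEUE = {4, 3, 3, 2, 2, 2, 1, 1, 1};

    // Fair-share policy: grant the next execution slice to the queue
    // furthest below its weighted fair share of all slices granted so
    // far, so that shares are honored over time.
    static int pickQueueFairShare(int[] weights, long[] slicesGranted) {
        long totalSlices = Arrays.stream(slicesGranted).sum() + 1;
        int totalWeight = Arrays.stream(weights).sum();
        int best = 0;
        double worstDeficit = Double.NEGATIVE_INFINITY;
        for (int q = 0; q < weights.length; q++) {
            double fairShare = totalSlices * weights[q] / (double) totalWeight;
            double deficit = fairShare - slicesGranted[q];
            if (deficit > worstDeficit) {
                worstDeficit = deficit;
                best = q;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] weights = {5, 3, 2};          // hypothetical shares for three queues
        long[] granted = new long[weights.length];
        for (int i = 0; i < 10_000; i++) {
            granted[pickQueueFairShare(weights, granted)]++;
        }
        System.out.println(Arrays.toString(granted)); // roughly [5000, 3000, 2000]
    }
}
```

Running the sketch grants roughly [5000, 3000, 2000] slices to queues weighted 5:3:2, showing how a fair-share policy converges on each queue's share over time.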
To conserve resources, business processes can be cached to disk while they are in the queue. Several parameters control exactly when and how business processes are cached, allowing memory resources to be used efficiently. A distinction is made between smaller and larger contexts, and the threshold that defines a "small" context is itself a configurable parameter.
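A minimal sketch of the caching idea, assuming a single hypothetical threshold parameter (SMALL_CONTEXT_BYTES); the real engine exposes several parameters controlling exactly when and how contexts are cached:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Minimal sketch of spilling queued process contexts to disk. The
 * SMALL_CONTEXT_BYTES threshold and all names here are hypothetical,
 * not actual product parameters.
 */
public class ContextCache {

    static final int SMALL_CONTEXT_BYTES = 16 * 1024; // hypothetical "small" threshold

    private byte[] inMemory; // small contexts stay in memory
    private Path onDisk;     // large contexts are cached to disk while queued

    void park(byte[] context) throws IOException {
        if (context.length <= SMALL_CONTEXT_BYTES) {
            inMemory = context;                          // small: keep in memory
        } else {
            onDisk = Files.createTempFile("bp-context-", ".bin");
            Files.write(onDisk, context);                // large: spill to disk
        }
    }

    byte[] resume() throws IOException {
        if (inMemory != null) {
            return inMemory;
        }
        byte[] context = Files.readAllBytes(onDisk);     // reload when the task runs again
        Files.delete(onDisk);
        return context;
    }

    public static void main(String[] args) throws IOException {
        ContextCache cache = new ContextCache();
        cache.park(new byte[64 * 1024]);                 // above the threshold: spills to disk
        System.out.println(cache.resume().length);       // prints 65536
    }
}
```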