Performance tuning for interactive batch jobs
LSF is often used on systems that support both interactive and batch users. On one hand, users are often concerned that load sharing will overload their workstations and slow down their interactive tasks. On the other hand, some users want to dedicate machines to critical batch jobs so that those jobs have guaranteed resources. Even if your workload consists entirely of batch jobs, you still want to reduce resource contention and operating system overhead to maximize the use of your resources.
Numerous parameters can be used to control your resource allocation and to avoid undesirable contention.
Types of load conditions
Because interference between jobs is reflected in the load indices, LSF responds to load changes to avoid or reduce contention. LSF can take actions on jobs to reduce interference before or after the jobs are started. These actions are triggered by different load conditions. Most of the conditions can be configured at both the queue level and the host level. Conditions defined at the queue level apply to all hosts used by the queue, while conditions defined at the host level apply to all queues using the host.
Scheduling conditions
These conditions, if met, trigger the start of more jobs. The scheduling conditions are defined in terms of load thresholds or resource requirements.
At the queue level, scheduling conditions are configured as either resource requirements or scheduling load thresholds, as described in lsb.queues. At the host level, the scheduling conditions are defined as scheduling load thresholds, as described in lsb.hosts.
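For example, the queue-level and host-level forms might look along these lines; the queue name, host name, and threshold values are illustrative, and the lsb.hosts columns take loadSched/loadStop pairs:

    # lsb.queues sketch: a resource requirement plus a scheduling load threshold
    Begin Queue
    QUEUE_NAME = normal
    PRIORITY   = 30
    RES_REQ    = select[mem > 100] order[r1m]   # resource requirement string
    pg         = 4.0                            # scheduling threshold (loadSched) on paging rate
    End Queue

    # lsb.hosts sketch: host-level thresholds as loadSched/loadStop pairs
    Begin Host
    HOST_NAME   MXJ   r1m       pg
    hostA       2     0.6/1.6   10/20
    End Host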
Suspending conditions
These conditions affect running jobs. When these conditions are met, a SUSPEND action is performed on a running job.
At the queue level, suspending conditions are defined either as STOP_COND, as described in lsb.queues, or as suspending load thresholds. At the host level, suspending conditions are defined as stop load thresholds, as described in lsb.hosts.
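For example, extending a queue-level threshold with a loadStop value causes running jobs to be suspended once that value is exceeded; the queue name and numbers are illustrative:

    # lsb.queues sketch: dispatch while pg < 4.0, suspend running jobs once pg > 15
    Begin Queue
    QUEUE_NAME = desktop_batch
    PRIORITY   = 20
    pg         = 4.0/15          # loadSched/loadStop
    End Queue

In the lsb.hosts sketch above, the second number in each pair (for example, the 20 in 10/20) plays the same role as a host-level stop threshold.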
Resuming conditions
These conditions determine when a suspended job can be resumed. When these conditions are met, a RESUME action is performed on a suspended job.
At the queue level, resume conditions are defined by RESUME_COND in lsb.queues, or by the loadSched thresholds for the queue if RESUME_COND is not defined.
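For example, a queue might name an explicit resume condition; the queue name and values are illustrative:

    # lsb.queues sketch: explicit resume condition
    Begin Queue
    QUEUE_NAME  = night
    PRIORITY    = 25
    pg          = 4.0/15                       # loadSched/loadStop
    RESUME_COND = select[pg < 4.0 && r1m < 1.0]
    End Queue

If RESUME_COND were omitted, suspended jobs would instead resume when the load drops back below the loadSched values (here, pg < 4.0).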
Types of load indices
To effectively reduce interference between jobs, the appropriate load indices must be used. The following are a few frequently used load indices.
Paging rate (pg)
The paging rate (pg) load index relates strongly to the perceived interactive performance. If a host is paging applications to disk, the user interface feels very slow.
The paging rate is also a reflection of a shortage of physical memory. When an application is paged in and out frequently, the system spends much of its time on paging overhead, resulting in reduced performance.
The paging rate load index can be used as a threshold either to stop sending more jobs to the host or to suspend an already running batch job to give priority to interactive users.
This parameter can be used in different configuration files to achieve different purposes. By defining a paging rate threshold in lsf.cluster.cluster_name, the host becomes busy from LIM's point of view when that threshold is exceeded; therefore, LIM no longer advises running more jobs on this host.
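As a sketch, the Host section of lsf.cluster.cluster_name might carry a paging rate column along these lines; the host name and the threshold of 15 pages per second are illustrative:

    # LIM reports hostA as busy whenever its paging rate exceeds 15 pages per second
    Begin Host
    HOSTNAME   model   type   server   r1m   pg    RESOURCES
    hostA      !       !      1        3.5   15    ()
    End Host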
By including the paging rate in queue or host scheduling conditions, jobs can be prevented from starting on machines with a heavy paging rate, or can be suspended or even killed if they interfere with interactive users on the console.
A job suspended because of a pg threshold is not resumed, even if the resume conditions are met, unless the machine has been interactively idle for more than PG_SUSP_IT seconds.
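PG_SUSP_IT is typically set in lsb.params; a minimal sketch, with an illustrative value in seconds:

    Begin Parameters
    PG_SUSP_IT = 180     # require 180 seconds of interactive idle time before such jobs resume
    End Parameters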
Interactive idle time (it)
Strict control can be achieved using the idle time (it) index. This index measures the number of minutes since the last interactive terminal activity. Interactive terminals include hardwired ttys, rlogin sessions, and X shell windows such as xterm. On some hosts, LIM also detects mouse and keyboard activity.
This index is typically used to prevent batch jobs from interfering with interactive activities. By defining the suspending condition in the queue as it<1 && pg>50, a job from this queue is suspended if the machine is not interactively idle and the paging rate is higher than 50 pages per second. Furthermore, by defining the resuming condition as it>5 && pg<10 in the queue, a suspended job from the queue does not resume unless the host has been idle for at least five minutes and the paging rate is less than ten pages per second.
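Expressed as queue parameters in lsb.queues, the conditions just described might be written as follows; the queue name is illustrative:

    Begin Queue
    QUEUE_NAME  = batch
    PRIORITY    = 30
    STOP_COND   = select[it < 1 && pg > 50]   # suspend while a user is active and paging is heavy
    RESUME_COND = select[it > 5 && pg < 10]   # resume after 5 idle minutes with light paging
    End Queue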
The it index is non-zero only if no interactive users are active. Setting the it threshold to five minutes allows a reasonable amount of think time for interactive users, while still making the machine available for load sharing when users are logged in but away from their machines.
For lower priority batch queues, it is appropriate to set an it suspending threshold of two minutes and an it scheduling threshold of ten minutes in the lsb.queues file. Jobs in these queues are suspended while the execution host is in use and resume after the host has been idle for a longer period. For hosts where all batch jobs, no matter how important, should be suspended, set a per-host suspending threshold in the lsb.hosts file.
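For example, such a low-priority queue and a protected desktop host might be sketched as follows; names and values are illustrative, and note that for the it index the scheduling threshold is the larger number of the pair:

    # lsb.queues sketch: dispatch after 10 idle minutes, suspend if idle time falls below 2
    Begin Queue
    QUEUE_NAME = low_pri
    PRIORITY   = 15
    it         = 10/2            # loadSched/loadStop, in minutes
    End Queue

    # lsb.hosts sketch: per-host suspending threshold that applies to every queue
    Begin Host
    HOST_NAME   MXJ   it
    desktop1    1     10/2
    End Host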
CPU run queue length (r15s, r1m, r15m)
Running more than one CPU-bound process on a machine (or more than one process per CPU on multiprocessors) can reduce total throughput because of operating system overhead, and can also interfere with interactive users. Some tasks, such as compiling, can create more than one CPU-intensive process.
Queues should normally set CPU run queue scheduling thresholds below 1.0, so that hosts already running compute-bound jobs are left alone. LSF scales the run queue thresholds for multiprocessor hosts by using the effective run queue lengths, so multiprocessors automatically run one job per processor in this case.
For short to medium-length jobs, the r1m index should be used. For longer jobs, you might want to add an r15m threshold. An exception is high-priority queues, where turnaround time is more important than total throughput. For high-priority queues, an r1m scheduling threshold of 2.0 is appropriate.
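For example, an ordinary queue and a high-priority queue might be sketched as follows; queue names and values are illustrative:

    # lsb.queues sketch: the ordinary queue stays below one runnable process per CPU,
    # the high-priority queue trades throughput for turnaround time
    Begin Queue
    QUEUE_NAME = medium
    PRIORITY   = 30
    r1m        = 0.7             # scheduling threshold below 1.0
    r15m       = 1.0             # additional threshold that matters for long jobs
    End Queue

    Begin Queue
    QUEUE_NAME = priority
    PRIORITY   = 43
    r1m        = 2.0             # dispatch even to moderately loaded hosts
    End Queue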
CPU utilization (ut)
The ut parameter measures the amount of CPU time being used. When all the CPU time on a host is in use, there is little to gain from sending another job to that host unless the host is much more powerful than others on the network. A ut threshold of 90% prevents jobs from going to a host where the CPU does not have spare processing cycles.
If a host has very high pg but low ut, then it may be desirable to suspend some jobs to reduce the contention.
Some commands report ut as a percentage from 0 to 100, while others report it as a decimal number between 0 and 1. The ut thresholds in the lsf.cluster.cluster_name file, in the other configuration files, and in the bsub -R resource requirement string are specified as a fraction in the range 0 to 1.
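For example, a submission that avoids hosts whose CPU is more than 90 percent utilized; the job name is illustrative:

    bsub -R "select[ut < 0.9] order[ut]" my_job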
The command bhist shows the execution history of batch jobs, including the time spent waiting in queues or suspended because of system load.
The command bjobs -p shows why a job is pending.
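For example (the job ID is illustrative):

    bjobs -p            # list pending jobs and the reason each is pending
    bhist -l 1234       # detailed history for job 1234, including time spent suspended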
Scheduling conditions and resource thresholds
Three parameters, RES_REQ, STOP_COND and RESUME_COND, can be specified in the definition of a queue. These scheduling conditions are a more general way to specify job dispatching conditions at the queue level. The parameters take resource requirement strings as values, which allows you to specify conditions more flexibly than with the loadSched or loadStop thresholds.
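A sketch of a queue that combines all three parameters; the queue name and the particular conditions are illustrative:

    Begin Queue
    QUEUE_NAME  = flexible
    PRIORITY    = 35
    RES_REQ     = select[mem > 40 && swp > 35] order[r1m]
    STOP_COND   = select[ut > 0.9]               # suspend jobs once the CPU saturates
    RESUME_COND = select[ut < 0.5 && r1m < 1.0]  # resume when the host quiets down
    DESCRIPTION = Example queue using resource requirement strings
    End Queue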