bqueues

Displays information about queues.

Synopsis

bqueues [-w | -l | -r | -o "[field_name | all] [:[-][output_width]] ... [delimiter='character']" [-json]] [-m host_name | -m host_group | -m cluster_name | -m all] [-u user_name | -u user_group | -u all] [queue_name ...] [-alloc]
bqueues [-w | -o "[field_name | all] [:[-][output_width]] ... [delimiter='character']"] [-m host_name | -m host_group | -m cluster_name | -m all] [-u user_name | -u user_group | -u all] [queue_name ...] [-alloc] [-noheader]
bqueues [-h | -V]

Description

By default, returns the following information about all queues: queue name, queue priority, queue status, task statistics, and job state statistics.

When a resizable job has a resize allocation request, bqueues displays pending requests. When LSF adds more resources to a running resizable job, bqueues decreases job PEND counts and displays the added resources. When LSF removes resources from a running resizable job, bqueues displays the updated resources.

In LSF multicluster capability, returns the information about all queues in the local cluster.

Returns job slot statistics if the -alloc option is used.

Batch queue names and characteristics are set up by the LSF administrator in the lsb.queues file.

CPU time is normalized.

CPU time output is not consistent with the bacct command

The bacct command displays the sum of CPU time that is used by all past jobs in event files. If you specify a begin and end time, the execution host type and run time are also considered in the CPU time. For a specified job, the bacct and bhist commands have the same result.

Because the value of CPU time for the bqueues command is used by the mbatchd daemon to calculate fair share priority, it does not display the actual CPU time for the queue. Normalized CPU time results in a different CPU time output in the bacct and bqueues commands.

Options

-alloc
Shows counters for slots in RUN, SSUSP, USUSP, and RSV. The slot allocation is different depending on whether the job is an exclusive job or not.
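For example, assuming your cluster has a queue named normal, the following command displays the slot allocation counters for that queue: bqueues -alloc normal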
-json
Displays the customized output in JSON format.

When specified, bqueues -o displays the customized output in the JSON format.

This option applies only to the customized output of the bqueues -o command. It has no effect when you run bqueues without the -o option and the LSB_BQUEUES_FORMAT environment variable or parameter is not defined.
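
For example, the following command (using field names from Table 1) displays three customized fields in JSON format: bqueues -o "queue_name status priority" -json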

-l
Displays queue information in a long multiline format. The -l option displays the following additional information:
  • Queue description
  • Queue characteristics and statistics
  • Scheduling parameters
  • Resource usage limits
  • Scheduling policies
  • Users
  • Hosts
  • Associated commands
  • Dispatch and run windows
  • Success exit values
  • Host limits per parallel job
  • Pending time limits and eligible pending time limits
  • Job controls
  • User shares
  • Normalized fair share factors
  • Containers

If you specified an administrator comment with the -C option of the queue control commands (qclose, qopen, qact, and qinact), the -l option displays the comment text.

Displays absolute priority scheduling (APS) information for queues that are configured with the APS_PRIORITY parameter.

-noheader
Removes the column headings from the output.

When specified, bqueues displays the values of the fields without displaying the names of the fields. This is useful for script parsing, when column headings are not necessary.

This option applies to output for the bqueues command with no options, and to output for all bqueues options with output that uses column headings, including the following: -alloc, -m, -o, -u, -w.

This option does not apply to output for bqueues options that do not use column headings, including the following: -json, -l, -r.
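
For example, the following command (using field names from Table 1) produces one comma-delimited line per queue with no column headings, which is convenient for script parsing: bqueues -o "queue_name njobs pend run delimiter=','" -noheader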

-o
Sets the customized output format.
  • Specify which bqueues fields to display (or their aliases instead of the full field names), in which order, and with what width.
  • Specify only the bqueues field name or alias to set its output to unlimited width and left justification.
  • (Available starting in Fix Pack 14) Specify all to display all fields. Specify the colon (:) with an output width that applies to all fields.
  • Specify the colon (:) without a width to set the output width to the supported width for that field.
  • Specify the colon (:) with a width to set the maximum number of characters to display for the field. When its value exceeds this width, bqueues truncates the ending characters.
  • Specify a hyphen (-) to set right justification when bqueues displays the output for the specific field. If not specified, the default is to set left justification when bqueues displays output for a field.
  • Use delimiter= to set the delimiting character to display between different headers and fields. This delimiter must be a single character. By default, the delimiter is a space.
Output customization applies only to the output for certain bqueues options:
  • LSB_BQUEUES_FORMAT and bqueues -o both apply to output for the bqueues command with no options, and for bqueues options with output that filter information, including the following options: -alloc, -m, -u.
  • LSB_BQUEUES_FORMAT and bqueues -o do not apply to output for bqueues options that use a modified format, including the following options: -l, -r, -w.

The bqueues -o option overrides the LSB_BQUEUES_FORMAT environment variable, which overrides the LSB_BQUEUES_FORMAT setting in lsf.conf.
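
For example, assuming a POSIX shell, the following commands first set a default customized format through the environment variable, and then override it for a single invocation:
export LSB_BQUEUES_FORMAT="queue_name status njobs"
bqueues
bqueues -o "queue_name:20 priority:- status"
The first bqueues command uses the format from the LSB_BQUEUES_FORMAT environment variable; the second uses the format that is given with the -o option.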

This table outlines the bqueues fields to display, and their supported width, aliases you can use instead of field names, and units of measurement for the displayed field:

Table 1. Output fields for bqueues
Field name Width Aliases Unit
queue_name 15 qname  
description 50 desc  
priority 10 prio  
status 12 stat  
max 10    
jl_u 10 jlu  
jl_p 10 jlp  
jl_h 10 jlh  
njobs 10    
pend 10    
run 10    
susp 10    
rsv 10    
ususp 10    
ssusp 10    
nice 6    
max_corelimit 8 corelimit  
max_cpulimit 30 cpulimit  
default_cpulimit 30 def_cpulimit  
max_datalimit 8 datalimit  
default_datalimit 8 def_datalimit  
max_filelimit 8 filelimit  
max_memlimit 8 memlimit  
default_memlimit 8 def_memlimit  
max_processlimit 8 processlimit  
default_processlimit 8 def_processlimit  
max_runlimit 12 runlimit  
default_runlimit 12 def_runlimit  
max_stacklimit 8 stacklimit  
max_swaplimit 8 swaplimit  
max_tasklimit 6 tasklimit  
min_tasklimit 6    
default_tasklimit 6 def_tasklimit  
max_threadlimit 6 threadlimit  
default_threadlimit 6 def_threadlimit  
res_req 20    
hosts 50    
all (Available starting in Fix Pack 14) Specify an output width that applies to all fields    
Note: The following resource limit field names are supported, but show the same content as their corresponding maximum resource limit fields (that is, the following resource limit field names are aliases): corelimit, cpulimit, datalimit, filelimit, memlimit, processlimit, runlimit, stacklimit, swaplimit, tasklimit, threadlimit.

For example, corelimit is the same as max_corelimit.

Field names and aliases are not case-sensitive. Valid values for the output width are any positive integer from 1 to 4096.

For example: bqueues -o "queue_name description:10 priority:- status: max:-6 delimiter='^'"

This command displays the following fields:
  • QUEUE_NAME with unlimited width and left justified.
  • DESCRIPTION with a maximum width of 10 characters and left justified.
  • PRIORITY with a maximum width of 10 characters (which is the supported width for this field) and right justified.
  • STATUS with a maximum width of 12 characters (which is the supported width for this field) and left justified.
  • MAX with a maximum width of 6 characters and right justified.
  • The ^ character is displayed between different headers and fields.
-r
Displays the same information as the -l option. In addition, if fair share is defined for the queue, displays recursively the share account tree of the fair share queue. When queue-based fair share is used along with the bsub -G command and the LSB_SACCT_ONE_UG=Y parameter in the lsf.conf file, share accounts are only created for active users and for the default user group (if defined).

Displays the global fair share policy name for the participating queue. Displays remote share load (REMOTE_LOAD column) for each share account in the queue.

Displays the normalized fair share factor, if it is not zero.

-w
Displays queue information in a wide format. Fields are displayed without truncation.
-m host_name | -m host_group | -m cluster_name | -m all
Displays the queues that can run jobs on the specified host. If the keyword all is specified, displays the queues that can run jobs on all hosts.

If a host group is specified, displays the queues that include that group in their configuration. For a list of host groups, use the bmgroup command.

In LSF multicluster capability, if the all keyword is specified, displays the queues that can run jobs on all hosts in the local cluster. If a cluster name is specified, displays all queues in the specified cluster.

-u user_name | -u user_group | -u all

Displays the queues that can accept jobs from the specified user. If the keyword all is specified, displays the queues that can accept jobs from all users.

If a user group is specified, displays the queues that include that group in their configuration. For a list of user groups, use the bugroup command.

queue_name ...

Displays information about the specified queues.
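
For example, assuming that a host named hostA, a user named userA, and a queue named normal exist in your cluster, the following command displays the normal queue only if it can run jobs on hostA and accept jobs from userA: bqueues -m hostA -u userA normal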

-h

Prints command usage to stderr and exits.

-V

Prints LSF release version to stderr and exits.

Default Output

Displays the following fields:

QUEUE_NAME
The name of the queue. Queues are named to correspond to the type of jobs that are usually submitted to them, or to the type of services they provide.
lost_and_found
If the LSF administrator removes queues from the system, LSF creates a queue that is called lost_and_found and places the jobs from the removed queues into the lost_and_found queue. Jobs in the lost_and_found queue are not started unless they are switched to other queues with the bswitch command.
PRIO
The priority of the queue. The larger the value, the higher the priority. If job priority is not configured, the queue priority determines the queue search order at job dispatch, suspend, and resume time. Contrary to the usual order of UNIX process priorities, jobs from higher priority queues are dispatched first, and jobs from lower priority queues are suspended first when hosts are overloaded.
STATUS
The status of the queue. The following values are supported:
Open
The queue can accept jobs.
Closed
The queue cannot accept jobs.
Active
Jobs in the queue can be started.
Inactive
Jobs in the queue cannot be started.

At any moment, each queue is in either Open or Closed state, and is in either Active or Inactive state. The queue can be opened, closed, inactivated, and reactivated with the badmin command.

Jobs that are submitted to a queue that is later closed are still dispatched while the queue is active. The queue can also become inactive when either its dispatch window is closed or its run window is closed. In this case, the queue cannot be activated by using badmin. The queue is reactivated by LSF when one of its dispatch windows and one of its run windows are open again. The initial state of a queue at LSF startup is Open, and either Active or Inactive depending on its dispatch windows.
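
For example, assuming a queue named normal, the LSF administrator can change these states with the badmin queue control commands: badmin qclose normal, badmin qopen normal, badmin qinact normal, and badmin qact normal. The STATUS column in the bqueues output reflects the resulting Open or Closed and Active or Inactive state.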

MAX
The maximum number of job slots that can be used by the jobs from the queue. These job slots are used by dispatched jobs that are not yet finished, and by pending jobs that reserve slots.

A sequential job uses one job slot when it is dispatched to a host, while a parallel job uses as many job slots as are required by the bsub -n command when it is dispatched. A dash (-) indicates no limit.

JL/U
The maximum number of job slots each user can use for jobs in the queue. These job slots are used by your dispatched jobs that are not yet finished, and by pending jobs that reserve slots. A dash (-) indicates no limit.
JL/P
The maximum number of job slots a processor can process from the queue. This number includes job slots of dispatched jobs that are not yet finished, and job slots reserved for some pending jobs. The job slot limit per processor controls the number of jobs that are sent to each host. This limit is configured per processor so that multiprocessor hosts are automatically allowed to run more jobs. A dash (-) indicates no limit.
JL/H
The maximum number of job slots a host can allocate from this queue. This number includes the job slots of dispatched jobs that are not yet finished, and slots that are reserved for some pending jobs. The job slot limit per host (JL/H) controls the number of jobs that are sent to each host, regardless of whether a host is a uniprocessor host or a multiprocessor host. A dash (-) indicates no limit.
NJOBS
The total number of slots for jobs in the queue. This number includes slots for pending, running, and suspended jobs. Batch job states are described in the bjobs command.

If the -alloc option is used, the total is the sum of the RUN, SSUSP, USUSP, and RSV counters.

PEND
The total number of tasks for all pending jobs in the queue. If the -alloc option is used, the total is zero.
RUN
The total number of tasks for all running jobs in the queue. If the -alloc option is used, the total is allocated slots for the jobs in the queue.
SUSP
The total number of tasks for all suspended jobs in the queue.
PJOBS
The total number of pending jobs (including both PEND and PSUSP jobs) in this queue.

Long Output (-l)

In addition to the default fields, the -l option displays the following fields:
Description
A description of the typical use of the queue.
Default queue indication
Indicates the default queue.
PARAMETERS/STATISTICS
NICE
The UNIX nice value at which jobs in the queue are run. The nice value reduces process priority.
STATUS
Inactive
The long format for the -l option gives the possible reasons for a queue to be inactive:
Inact_Win
The queue is out of its dispatch window or its run window.
Inact_Adm
The queue is inactivated by the LSF administrator.
SSUSP
The number of tasks for all jobs in the queue that are suspended by LSF because of load levels or run windows. If -alloc is used, the total is the allocated slots for the jobs in the queue.
USUSP
The number of tasks for all jobs in the queue that are suspended by the job submitter or by the LSF administrator. If -alloc is used, the total is the allocated slots for the jobs in the queue.
RSV
For pending jobs in the queue, the number of tasks that LSF reserves slots for. If -alloc is used, the total is the allocated slots for the jobs in the queue.
Migration threshold
The length of time in seconds that a job that is dispatched from the queue remains suspended by the system before LSF attempts to migrate the job to another host. See the MIG parameter in the lsb.queues and lsb.hosts files.
Schedule delay for a new job
The delay time in seconds for scheduling after a new job is submitted. If the schedule delay time is zero, a new scheduling session is started as soon as the job is submitted to the queue. See the NEW_JOB_SCHED_DELAY parameter in the lsb.queues file.
Interval for a host to accept two jobs
The length of time in seconds to wait after a job is dispatched to a host and before a second job is dispatched to the same host. If the job accept interval is zero, a host can accept more than one job in each dispatching interval. See the JOB_ACCEPT_INTERVAL parameter in the lsb.queues and lsb.params files.
RESOURCE LIMITS
The hard resource usage limits that are imposed on the jobs in the queue (see getrlimit and the lsb.queues file). These limits are imposed on a per-job and a per-process basis.
The following per-job limits are supported:
CPULIMIT
The maximum CPU time a job can use, in minutes, relative to the CPU factor of the named host. CPULIMIT is scaled by the CPU factor of the execution host so that jobs are allowed more time on slower hosts.

When the job-level CPULIMIT is reached, a SIGXCPU signal is sent to all processes that belong to the job. If the job has no signal handler for SIGXCPU, the job is killed immediately. If the SIGXCPU signal is handled, blocked, or ignored by the application, then after the grace period expires, LSF sends SIGINT, SIGTERM, and SIGKILL signals to the job to kill it.
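
As an illustrative calculation (actual values depend on the CPU factors that are configured in the lsf.shared file): if CPULIMIT is 60 minutes normalized to a host model with a CPU factor of 2.0, a job that runs on an execution host with a CPU factor of 1.0 is allowed approximately 60 x 2.0 / 1.0 = 120 minutes of CPU time, because the slower host needs more time to complete the same amount of work.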

TASKLIMIT
The maximum number of tasks that are allocated to a job. Jobs that have fewer tasks than the minimum TASKLIMIT or more tasks than the maximum TASKLIMIT are rejected. Maximum tasks that are requested cannot be less than the minimum TASKLIMIT, and minimum tasks that are requested cannot be more than the maximum TASKLIMIT.
MEMLIMIT
The maximum resident set size (RSS) of a process. If a process uses more memory than the limit allows, its priority is reduced so that other processes are more likely to be paged in to available memory. This limit is enforced by the setrlimit system call if it supports the RLIMIT_RSS option.

By default, the limit is shown in KB. Use the LSF_UNIT_FOR_LIMITS parameter in the lsf.conf file to specify a larger unit for display (MB, GB, TB, PB, or EB).

SWAPLIMIT
The swap space limit that a job can use. If SWAPLIMIT is reached, the system sends the following signals in sequence to all processes in the job: SIGINT, SIGTERM, and SIGKILL.

By default, the limit is shown in KB. Use the LSF_UNIT_FOR_LIMITS parameter in the lsf.conf file to specify a larger unit for display (MB, GB, TB, PB, or EB).

PROCESSLIMIT
The maximum number of concurrent processes that are allocated to a job. If PROCESSLIMIT is reached, the system sends the following signals in sequence to all processes that belong to the job: SIGINT, SIGTERM, and SIGKILL.
THREADLIMIT
The maximum number of concurrent threads that are allocated to a job. If THREADLIMIT is reached, the system sends the following signals in sequence to all processes that belong to the job: SIGINT, SIGTERM, and SIGKILL.
RUNLIMIT
The maximum wall clock time a process can use, in minutes. RUNLIMIT is scaled by the CPU factor of the execution host. When a job is in RUN state for a total of RUNLIMIT minutes, LSF sends a SIGUSR2 signal to the job. If the job does not exit within 10 minutes, LSF sends a SIGKILL signal to kill the job.
FILELIMIT
The maximum file size a process can create, in KB. This limit is enforced by the UNIX setrlimit system call if it supports the RLIMIT_FSIZE option, or the ulimit system call if it supports the UL_SETFSIZE option.
DATALIMIT
The maximum size of the data segment of a process, in KB. The data limit restricts the amount of memory a process can allocate. DATALIMIT is enforced by the setrlimit system call if it supports the RLIMIT_DATA option, and unsupported otherwise.
STACKLIMIT
The maximum size of the stack segment of a process. This limit restricts the amount of memory a process can use for local variables or recursive function calls. STACKLIMIT is enforced by the setrlimit system call if it supports the RLIMIT_STACK option.

By default, the limit is shown in KB. Use the LSF_UNIT_FOR_LIMITS parameter in the lsf.conf file to specify a larger unit for display (MB, GB, TB, PB, or EB).

CORELIMIT
The maximum size of a core file. This limit is enforced by the setrlimit system call if it supports the RLIMIT_CORE option.

If a job submitted to the queue specifies any of these limits, then the lower of the corresponding job limit and queue limit is used for the job.

If no resource limit is specified, the resource is assumed to be unlimited.

By default, the limit is shown in KB. Use the LSF_UNIT_FOR_LIMITS parameter in the lsf.conf file to specify a larger unit for display (MB, GB, TB, PB, or EB).

HOSTLIMIT_PER_JOB
The maximum number of hosts that a job in this queue can use. LSF verifies the host limit during the allocation phase of scheduling. If the number of hosts that are requested for a parallel job exceeds this limit and LSF cannot satisfy the minimum number of requested slots, the parallel job pends.
SCHEDULING PARAMETERS
The scheduling and suspending thresholds for the queue.

The scheduling threshold loadSched and the suspending threshold loadStop are used to control batch job dispatch, suspension, and resumption. The queue thresholds are used in combination with the thresholds that are defined for hosts. If both queue level and host level thresholds are configured, the most restrictive thresholds are applied.

The loadSched and loadStop thresholds have the following fields:
r15s
The 15 second exponentially averaged effective CPU run queue length.
r1m
The 1 minute exponentially averaged effective CPU run queue length.
r15m
The 15 minute exponentially averaged effective CPU run queue length.
ut
The CPU usage exponentially averaged over the last minute, expressed as a value between 0 and 1.
pg
The memory paging rate exponentially averaged over the last minute, in pages per second.
io
The disk I/O rate exponentially averaged over the last minute, in KB per second.
ls
The number of current login users.
it
On UNIX, the idle time of the host (keyboard has not been touched on all logged in sessions), in minutes.

On Windows, the it index is based on the time a screen saver becomes active on a particular host.

tmp
The amount of free space in /tmp, in MB.
swp
The amount of currently available swap space. By default, swap space is shown in MB. Use the LSF_UNIT_FOR_LIMITS in lsf.conf to specify a different unit for display (KB, MB, GB, TB, PB, or EB).
mem
The amount of currently available memory. By default, memory is shown in MB. Use the LSF_UNIT_FOR_LIMITS in lsf.conf to specify a different unit for display (KB, MB, GB, TB, PB, or EB).
cpuspeed
The speed of each individual CPU, in megahertz (MHz).
bandwidth
The maximum bandwidth requirement, in megabits per second (Mbps).

In addition to these internal indices, external indices are also displayed if they are defined in lsb.queues (see lsb.queues(5)).

The loadSched threshold values specify the job dispatch thresholds for the corresponding load indices. If a dash (-) is displayed as the value, it means that the threshold is not applicable. Jobs in the queue might be dispatched to a host if the values of all the load indices of the host are within the corresponding thresholds of the queue and the host. Load indices can be below or above the threshold, depending on the meaning of the load index. The same conditions are used to resume jobs that are dispatched from the queue that are suspended on this host.

Similarly, the loadStop threshold values specify the thresholds for job suspension. If any of the load index values on a host go beyond the corresponding threshold of the queue, jobs in the queue are suspended.

JOB EXCEPTION PARAMETERS
Configured job exception thresholds and number of jobs in each exception state for the queue.
Threshold and NumOfJobs have the following fields:
overrun
Configured threshold in minutes for overrun jobs, and the number of jobs in the queue that triggered an overrun job exception by running longer than the overrun threshold.
underrun
Configured threshold in minutes for underrun jobs, and the number of jobs in the queue that triggered an underrun job exception by finishing sooner than the underrun threshold.
idle
Configured threshold (CPU time/runtime) for idle jobs, and the number of jobs in the queue that triggered an idle job exception by having a job idle factor less than the threshold.
SCHEDULING POLICIES
Scheduling policies of the queue. Optionally, one or more of the following policies can be configured in the lsb.queues file:
APS_PRIORITY
Absolute Priority Scheduling is enabled. Pending jobs in the queue are ordered according to the calculated APS value.
FAIRSHARE
Queue-level fair share scheduling is enabled. Jobs in this queue are scheduled based on a fair share policy instead of the first-come, first-served (FCFS) policy.
BACKFILL
A job in a backfill queue can use the slots that are reserved by other jobs if the job can run to completion before the slot-reserving jobs start.

Backfilling does not occur on queue limits and user limits, but only on host-based limits. That is, backfilling is only supported when the MXJ, JL/U, JL/P, PJOB_LIMIT, and HJOB_LIMIT limits are reached. Backfilling is not supported when the MAX_JOBS, QJOB_LIMIT, and UJOB_LIMIT limits are reached.

IGNORE_DEADLINE
If the IGNORE_DEADLINE=Y parameter is set in the queue, LSF starts all jobs regardless of the run limit.
EXCLUSIVE
Jobs that are dispatched from an exclusive queue can run exclusively on a host if the user so specifies at job submission time. Exclusive execution means that the job is sent to a host with no other running batch jobs. No further jobs are dispatched to that host while the job is running. The default is not to allow exclusive jobs.
NO_INTERACTIVE
This queue does not accept batch interactive jobs that are submitted with the -I, -Is, and -Ip options of the bsub command. The default is to accept both interactive and non-interactive jobs.
ONLY_INTERACTIVE
This queue accepts only batch interactive jobs. Jobs must be submitted with the -I, -Is, and -Ip options of the bsub command. The default is to accept both interactive and non-interactive jobs.
SLA_GUARANTEES_IGNORE
This queue is allowed to ignore SLA resource guarantees when scheduling jobs.
FAIRSHARE_QUEUES
Lists queues that participate in cross-queue fair share. The first queue that is listed is the parent queue, which is the queue where fair share is configured. All other queues that are listed inherit the fair share policy from the parent queue. Fair share information applies to all the jobs that are running in all the queues in the fair share tree.
QUEUE_GROUP
Lists queues that participate in an absolute priority scheduling (APS) queue group.

If both the FAIRSHARE and APS_PRIORITY parameters are enabled in the same queue, the FAIRSHARE_QUEUES are not displayed. These queues are instead displayed as QUEUE_GROUP.

DISPATCH_ORDER
The DISPATCH_ORDER=QUEUE parameter is set in the parent queue. Jobs from this queue are dispatched according to the order of queue priorities first, then user fair share priority. Within the queue, dispatch order is based on user share quota. Share quotas avoid job dispatch from low-priority queues for users with higher fair share priority.
USER_SHARES
A list of [user_name, share] pairs. The user_name is either a user name or a user group name. The share is the number of shares of resources that are assigned to the user or user group. A consumer receives a portion of the resources proportional to that consumer's share that is divided by the sum of the shares of all consumers that are specified in the queue.
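For example, with the illustrative share assignment [userA, 1] [userB, 3] (where userA and userB are hypothetical user names), the sum of shares is 4, so userB is entitled to 3/4 of the queue's resources and userA to 1/4.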
DEFAULT HOST SPECIFICATION
The default host or host model that is used to normalize the CPU time limit of all jobs.

Use the lsinfo command to view a list of the CPU factors that are defined for the hosts in your cluster. The CPU factors are configured in the lsf.shared file.

The appropriate CPU scaling factor of the host or host model is used to adjust the actual CPU time limit at the execution host (the CPULIMIT parameter in the lsb.queues file). The DEFAULT_HOST_SPEC parameter in lsb.queues overrides the system DEFAULT_HOST_SPEC parameter in the lsb.params file. If you explicitly give a host specification when you submit a job with the bsub -c cpu_limit[/host_name | /host_model] command, the job-level specification overrides the values that are defined in the lsb.params and lsb.queues files.

RUN_WINDOWS
The time windows in a week during which jobs in the queue can run.

When a queue is out of its window or windows, no job in this queue is dispatched. In addition, when the end of a run window is reached, any running jobs from this queue are suspended until the beginning of the next run window, when they are resumed. The default is no restriction, or always open.

DISPATCH_WINDOWS
Dispatch windows are the time windows in a week during which jobs in the queue can be dispatched.

When a queue is out of its dispatch window or windows, no job in this queue is dispatched. Jobs that are already dispatched are not affected by the dispatch windows. The default is no restriction, or always open (that is, twenty-four hours a day, seven days a week). Dispatch windows are only applicable to batch jobs. Interactive jobs that are scheduled by LIM are controlled by another set of dispatch windows. Similar dispatch windows can be configured for individual hosts.

A window is displayed in the format begin_time-end_time. Time is specified in the format [day:]hour[:minute], where all fields are numbers in their respective legal ranges: 0(Sunday)-6 for day, 0-23 for hour, and 0-59 for minute. The default value for minute is 0 (on the hour). The default value for day is every day of the week. The begin_time and end_time of a window are separated by a dash (-), with no blank characters (SPACE and TAB) in between. Both begin_time and end_time must be present for a window. Windows are separated by blank characters.
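
For example, the window 5:18:30-1:8:30 begins at 6:30 PM on Friday (day 5) and ends at 8:30 AM on Monday (day 1). The window 8:00-18:00 omits the day field, so it is open every day between 8:00 AM and 6:00 PM.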

USERS
A list of users who are allowed to submit jobs to this queue. LSF administrators can submit jobs to the queue even if they are not listed here.

User group names have a slash (/) added at the end of the group name. Use the bugroup command to see information about user groups.

If the fair share scheduling policy is enabled, users and LSF administrators cannot submit jobs to the queue unless they also have a share assignment.

HOSTS
A list of hosts where jobs in the queue can be dispatched.

Host group names have a slash (/) added at the end of the group name. Use the bmgroup command to see information about host groups.

NQS DESTINATION QUEUES
A list of NQS destination queues to which this queue can dispatch jobs.

When you submit a job with the bsub -q queue_name command, and the specified queue is configured to forward jobs to the NQS system, LSF routes your job to one of the NQS destination queues. The job runs on an NQS batch server host, which is not a member of the LSF cluster. Although the job runs on an NQS system outside the LSF cluster, it is still managed by LSF in almost the same way as jobs that run inside the cluster. Your batch jobs might be transparently sent to an NQS system to run. You can use any supported user interface, including LSF commands and NQS commands (see the lsnqs command) to submit, monitor, signal, and delete your batch jobs that are running in an NQS system.

ADMINISTRATORS
A list of queue administrators. The users whose names are specified here are allowed to operate on the jobs in the queue and on the queue itself.
PRE_EXEC
The job-based pre-execution command for the queue. The PRE_EXEC command runs on the execution host before the job that is associated with the queue is dispatched to the execution host (or to the first host selected for a parallel batch job).
POST_EXEC
The job-based post-execution command for the queue. The POST_EXEC command runs on the execution host after the job finishes.
HOST_PRE_EXEC
The host-based pre-execution command for the queue. The HOST_PRE_EXEC command runs on all execution hosts before the job that is associated with the queue is dispatched to the execution hosts. If a job-based pre-execution PRE_EXEC command is defined at the queue level, application level, or job level, the HOST_PRE_EXEC command runs before the PRE_EXEC command of any level. The host-based pre-execution command cannot be run on Windows systems.
HOST_POST_EXEC
The host-based post-execution command for the queue. The HOST_POST_EXEC command runs on all execution hosts after the job finishes. If a job-based post-execution POST_EXEC command is defined at the queue level, application level, or job level, the HOST_POST_EXEC command runs after the POST_EXEC command of any level. The host-based post-execution command cannot be run on Windows systems.
LOCAL_MAX_PREEXEC_RETRY_ACTION
The action to take on a job when the number of times to attempt its pre-execution command on the local cluster (LOCAL_MAX_PREEXEC_RETRY value) is reached.
REQUEUE_EXIT_VALUES
Jobs that exit with these values are automatically requeued.
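For example, with the illustrative setting REQUEUE_EXIT_VALUES=99 100 in the lsb.queues file (the exit codes are arbitrary placeholders), jobs that exit with value 99 or 100 are automatically requeued.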
RES_REQ
Resource requirements of the queue. Only the hosts that satisfy these resource requirements can be used by the queue.
RESRSV_LIMIT
Resource requirement limits of the queue. Queue-level RES_REQ rusage values (set in the lsb.queues file) must be in the range set by RESRSV_LIMIT, or the queue-level RES_REQ value is ignored. Merged RES_REQ rusage values from the job and application levels must be in the range that is shown by the RESRSV_LIMIT, or the job is rejected.
Maximum slot reservation time
The maximum time in seconds a slot is reserved for a pending job in the queue. For more information, see the SLOT_RESERVE=MAX_RESERVE_TIME[n] parameter in the lsb.queues file.
RESUME_COND
The conditions that must be satisfied to resume a suspended job on a host.
STOP_COND
The conditions that determine whether a job that is running on a host needs to be suspended.
JOB_STARTER
An executable file that runs immediately before the batch job, taking the batch job file as an input argument. All jobs that are submitted to the queue are run through the job starter, which is used to create a specific execution environment before the jobs themselves are processed.
SEND_JOBS_TO
LSF multicluster capability. List of remote queue names to which the queue forwards jobs.
RECEIVE_JOBS_FROM
LSF multicluster capability. List of remote cluster names from which the queue receives jobs.
IMPT_JOBBKLG
LSF multicluster capability. Specifies the pending job limit for a receive-jobs queue.
IMPT_TASKBKLG
LSF multicluster capability. Specifies the pending job task limit for a receive-jobs queue.
IMPT_JOBLIMIT
LSF multicluster capability. Specifies the number of starting jobs from remote clusters.
IMPT_TASKLIMIT
LSF multicluster capability. Specifies the number of starting job tasks from remote clusters.
PREEMPTION
PREEMPTIVE
The queue is preemptive. Jobs in this queue can preempt running jobs from lower-priority queues, even if the lower-priority queues are not specified as preemptive.
PREEMPTABLE
The queue is preemptable. Running jobs in this queue can be preempted by jobs in higher-priority queues, even if the higher-priority queues are not specified as preemptive.
RC_ACCOUNT
The account name (tag) that is assigned to hosts borrowed through LSF resource connector, so that they cannot be used by other user groups, users, or jobs.
RC_HOSTS
The list of Boolean resources that represent the host resources that LSF resource connector can borrow from a resource provider.
RERUNNABLE
If the RERUNNABLE field displays yes, jobs in the queue are rerunnable. Jobs in the queue are automatically restarted or rerun if the execution host becomes unavailable. However, a job in the queue is not restarted if you remove the rerunnable option from the job.
CHECKPOINT
If the CHKPNTDIR field is displayed, jobs in the queue are checkpointable. Jobs use the default checkpoint directory and period unless you specify other values. A job in the queue is not checkpointed if you remove the checkpoint option from the job.
CHKPNTDIR
Specifies the checkpoint directory by using an absolute or relative path name.
CHKPNTPERIOD
Specifies the checkpoint period in seconds.

Although the output of the bqueues command reports the checkpoint period in seconds, the checkpoint period is defined in minutes. The checkpoint period is defined with the bsub -k "checkpoint_dir [checkpoint_period]" option, or in the lsb.queues file.
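
For example (an illustrative command with placeholder values): bsub -k "/share/ckptdir 10" my_job requests a checkpoint of the job every 10 minutes in the /share/ckptdir directory. A checkpoint period of 10 minutes is displayed by the bqueues command as 600 seconds.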

JOB CONTROLS
The configured actions for job control. See the JOB_CONTROLS parameter in the lsb.queues file.

The configured actions are displayed in the format [action_type, command] where action_type is either SUSPEND, RESUME, or TERMINATE.
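
For example, the illustrative lsb.queues configuration JOB_CONTROLS = SUSPEND[SIGTSTP] RESUME[SIGCONT] TERMINATE[SIGTERM] replaces the default suspend, resume, and terminate actions with the named signals; bqueues -l then lists one [action_type, command] pair for each configured action.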

ADMIN ACTION COMMENT
If the LSF administrator specified an administrator comment with the -C option of the queue control commands (qclose, qopen, qact, qinact, or qhist), the comment text is displayed.
SLOT_SHARE
Share of job slots for queue-based fair share. Represents the percentage of running jobs (job slots) in use from the queue. The SLOT_SHARE value must be greater than zero.

The sum of SLOT_SHARE for all queues in the pool does not need to be 100%. It can be more or less, depending on your needs.

SLOT_POOL
Name of the pool of job slots the queue belongs to for queue-based fair share. A queue can belong to only one pool. All queues in the pool must share hosts.
MAX_SLOTS_IN_POOL
Maximum number of job slots available in the slot pool the queue belongs to for queue-based fair share. Defined in the first queue of the slot pool.
USE_PRIORITY_IN_POOL
Queue-based fair share only. After job scheduling occurs for each queue, this parameter enables LSF to dispatch jobs to any remaining slots in the pool in first-come first-served order across queues.
NO_PREEMPT_INTERVAL
The uninterrupted running time (minutes) that must pass before preemption is permitted. Configured in the lsb.queues file.
MAX_TOTAL_TIME_PREEMPT
The maximum total preemption time (minutes) above which preemption is not permitted. Configured in the lsb.queues file.
SHARE_INFO_FOR
User shares and dynamic priority information based on the scheduling policy in place for the queue.
USER/GROUP
Name of users or user groups who have access to the queue.
SHARES
Number of shares of resources that are assigned to each user or user group in this queue, as configured in the lsb.queues file. The shares affect dynamic user priority when fair share scheduling is configured at the queue level.
PRIORITY
Dynamic user priority for the user or user group. Larger values represent higher priorities. Jobs belonging to the user or user group with the highest priority are considered first for dispatch.
In general, users or user groups with the following properties have higher PRIORITY:
  • Larger SHARES
  • Fewer STARTED and RESERVED jobs
  • Lower CPU_TIME and RUN_TIME
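The dynamic priority is derived from these quantities. As a sketch of the standard LSF fair share calculation (the exact terms that apply depend on which factors are configured for your cluster):
dynamic priority = number_shares / (cpu_time * CPU_TIME_FACTOR + run_time * RUN_TIME_FACTOR + (1 + job_slots) * RUN_JOB_FACTOR + fairshare_adjustment * FAIRSHARE_ADJUSTMENT_FACTOR)
Here, number_shares corresponds to SHARES, job_slots to STARTED plus RESERVED, cpu_time and run_time to the decayed CPU_TIME and RUN_TIME values, and fairshare_adjustment to the ADJUST value.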
STARTED
Number of job slots that are used by running or suspended jobs that are owned by users or user groups in the queue.
RESERVED
Number of job slots that are reserved by the jobs that are owned by users or user groups in the queue.
CPU_TIME
Cumulative CPU time that is used by jobs that are run from the queue. Measured in seconds, to one decimal place.

LSF calculates the cumulative CPU time by using the actual (not normalized) CPU time. LSF uses a decay factor such that 1 hour of recently used CPU time decays to 0.1 hours after an interval of time that is specified by the HIST_HOURS parameter in the lsb.params file. The default for the HIST_HOURS parameter is 5 hours.

RUN_TIME
Wall-clock run time plus historical run time of jobs of users or user groups that are run in the queue. Measured in seconds.

LSF calculates the historical run time by using the actual run time of finished jobs. LSF uses a decay factor such that 1 hour of recently used run time decays to 0.1 hours after an interval of time that is specified by the HIST_HOURS parameter in the lsb.params file. The default for the HIST_HOURS parameter is 5 hours. Wall-clock run time is the run time of running jobs.

ADJUST
Dynamic priority calculation adjustment that is made by the user-defined fair share plug-in (libfairshareadjust.*).

The fair share adjustment is enabled and weighted by the parameter FAIRSHARE_ADJUSTMENT_FACTOR in the lsb.params file.

RUN_TIME_FACTOR
The weighting parameter for run_time within the dynamic priority calculation. If not defined for the queue, the cluster-wide value that is defined in the lsb.params file is used.
CPU_TIME_FACTOR
The dynamic priority calculation weighting parameter for CPU time. If not defined for the queue, the cluster-wide value that is defined in the lsb.params file is used.
ENABLE_HIST_RUN_TIME
Enables the use of historic run time (run time for completed jobs) in the dynamic priority calculation. If not defined for the queue, the cluster-wide value that is defined in the lsb.params file is used.
RUN_TIME_DECAY
Enables the decay of run time in the dynamic priority calculation. The decay rate is set by the parameter HIST_HOURS (set for the queue in the lsb.queues file or set for the cluster in the lsb.params file). If not defined for the queue, the cluster-wide value that is defined in the lsb.params file is used.
STARTED_JOBS
The number of started jobs and suspended jobs as used by the fair share scheduling algorithm. bqueues -l only displays this field if FAIRSHARE_JOB_COUNT=Y is enabled in the lsb.params file.
RESERVED_JOBS
The number of reserved jobs as used by the fair share scheduling algorithm. bqueues -l only displays this field if FAIRSHARE_JOB_COUNT=Y is enabled in the lsb.params file.
HIST_HOURS
Decay parameter for CPU time, run time, and historic run time. If not defined for the queue, the cluster-wide value that is defined in the lsb.params file is used.
FAIRSHARE_ADJUSTMENT_FACTOR
Enables and weights the dynamic priority calculation adjustment that is made by the user-defined fair share plug-in (libfairshareadjust.*). If not defined for the queue, the cluster-wide value that is defined in the lsb.params file is used.
RUN_JOB_FACTOR
The dynamic priority calculation weighting parameter for the number of job slots that are reserved and in use by a user. If not defined for the queue, the cluster-wide value that is defined in the lsb.params file is used.
COMMITTED_RUN_TIME_FACTOR
The dynamic priority calculation weighting parameter for committed run time. If not defined for the queue, the cluster-wide value that is defined in the lsb.params file is used.
JOB_SIZE_LIST
A list of job sizes (number of tasks) allowed on this queue, including the default job size that is assigned if the job submission does not request a job size. Configured in the lsb.queues file.
PEND_TIME_LIMIT
The pending time limit for a job in the queue. If a job remains pending for longer than this specified time limit, LSF sends a notification to IBM® Spectrum LSF RTM. Configured in the lsb.queues file.
ELIGIBLE_PEND_TIME_LIMIT
The eligible pending time limit for a job in the queue. If a job remains in an eligible pending state for longer than this specified time limit, LSF sends a notification to IBM Spectrum LSF RTM. Configured in the lsb.queues file.
RELAX_JOB_DISPATCH_ORDER
If the RELAX_JOB_DISPATCH_ORDER parameter is configured in the lsb.params or lsb.queues file, the allocation reuse duration, in minutes, is displayed.
NORM_FS
Normalized fair share factors, if the factors are not zero.

Recursive share tree output (-r)

In addition to the fields displayed for the -l option, the -r option displays the following fields:
SCHEDULING POLICIES
FAIRSHARE
The bqueues -r command recursively displays the entire share information tree that is associated with the queue.

See also

bugroup, nice, getrlimit, lsb.queues, bsub, bjobs, bhosts, badmin, mbatchd