lsb.resources

The lsb.resources file contains configuration information for resource allocation limits, exports, resource usage limits, and guarantee policies. This file is optional.

The lsb.resources file is stored in the directory LSB_CONFDIR/cluster_name/configdir, where LSB_CONFDIR is defined in the lsf.conf file.

Changing lsb.resources configuration

After changing the lsb.resources file, run badmin reconfig to reconfigure mbatchd.
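For example, a typical sequence (shown here as a sketch) is to check the changed configuration before applying it:

badmin ckconfig -v
badmin reconfig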

#INCLUDE

Syntax

#INCLUDE "path-to-file"

Description

Inserts a configuration setting from another file to the current location. Use this directive to dedicate control of a portion of the configuration to other users or user groups by providing write access for the included file to specific users or user groups, and to ensure consistency of configuration file settings in different clusters (if you are using the LSF multicluster capability).

For more information, see Shared configuration file content.

#INCLUDE can be inserted anywhere in the local configuration file.

Default

Not defined.
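For example, a minimal sketch (the path is illustrative only):

#INCLUDE "/usr/share/lsf_config/common.limits"

The contents of the included file are read as if they appeared at this point in lsb.resources.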

Limit section

The Limit section sets limits for the maximum amount of the specified resources that must be available for different classes of jobs to start, and which resource consumers the limits apply to. Limits are enforced during job resource allocation.
Tip:

For limits to be enforced, jobs must specify rusage resource requirements (bsub -R or RES_REQ in lsb.queues).
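For example, a job submission that specifies an rusage requirement so that memory limits can be enforced (the memory amount and job name are illustrative):

bsub -R "rusage[mem=512]" myjob

The equivalent queue-level setting in lsb.queues is RES_REQ = rusage[mem=512].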

The blimits command displays the current usage of resource allocation limits configured in Limit sections in lsb.resources.

Limit section structure

Each set of limits is defined in a Limit section enclosed by Begin Limit and End Limit.

A Limit section has two formats:
  • Vertical tabular
  • Horizontal

The file can contain sections in both formats. In either format, you must configure a limit for at least one consumer and one resource. The Limit section cannot be empty.

Vertical tabular format

Use the vertical format for simple configuration conditions involving only a few consumers and resource limits.

The first row consists of an optional NAME and the following keywords for:
  • Resource types:
    • Job slots (SLOTS) and per-processor job slots (SLOTS_PER_PROCESSOR).
    • Memory (MEM), in MB or the unit set in the LSF_UNIT_FOR_LIMITS parameter in the lsf.conf file.
    • Swap space (SWP), in MB or the unit set in the LSF_UNIT_FOR_LIMITS parameter in the lsf.conf file.
    • Temp space (TMP), in MB or the unit set in the LSF_UNIT_FOR_LIMITS parameter in the lsf.conf file.
    • Running and suspended (RUN, SSUSP, USUSP) jobs (JOBS).
    • Other shared resources (RESOURCE).
  • Consumer types:
    • Applications (APPS or PER_APP).
    • Queues (QUEUES or PER_QUEUE).
    • Hosts and host groups (HOSTS or PER_HOST).
    • Users and user groups (USERS or PER_USER).
    • Projects (PROJECTS or PER_PROJECT).
    • LSF License Scheduler projects (LIC_PROJECTS or PER_LIC_PROJECT).
Each subsequent row describes the configuration information for resource consumers and the limits that apply to them. Each line must contain an entry for each keyword. Use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.
Tip:

Multiple entries must be enclosed in parentheses. For RESOURCE limits, RESOURCE names must be enclosed in parentheses.

Horizontal format

Use the horizontal format to give a name for your limits and to configure more complicated combinations of consumers and resource limits.

The first line of the Limit section gives the name of the limit configuration.

Each subsequent line in the Limit section consists of keywords identifying the resource limits:
  • Job slots (SLOTS) and per-processor job slots (SLOTS_PER_PROCESSOR).
  • Memory (MEM), in MB or the unit set in the LSF_UNIT_FOR_LIMITS parameter in the lsf.conf file.
  • Swap space (SWP), in MB or the unit set in the LSF_UNIT_FOR_LIMITS parameter in the lsf.conf file.
  • Temp space (TMP), in MB or the unit set in the LSF_UNIT_FOR_LIMITS parameter in the lsf.conf file.
  • Running and suspended (RUN, SSUSP, USUSP) jobs (JOBS).
  • Other shared resources (RESOURCE).
and the resource consumers to which the limits apply:
  • Applications (APPS or PER_APP).
  • Queues (QUEUES or PER_QUEUE).
  • Hosts and host groups (HOSTS or PER_HOST).
  • Users and user groups (USERS or PER_USER).
  • Projects (PROJECTS or PER_PROJECT).
  • LSF License Scheduler projects (LIC_PROJECTS or PER_LIC_PROJECT).

Example: Vertical tabular format

In the following limit configuration:
  • Jobs from user1 and user3 are limited to 2 job slots on hostA
  • Jobs from user2 on queue normal are limited to 20 MB of memory or the unit set in LSF_UNIT_FOR_LIMITS in lsf.conf.
  • The short queue can have at most 200 running and suspended jobs
Begin Limit
NAME     USERS            QUEUES   HOSTS    SLOTS   MEM   SWP   TMP   JOBS
limit1   (user1 user3)    -        hostA    2       -     -     -     -
-        user2            normal   -        -       20    -     -     -
-        -                short    -        -       -     -     -     200
End Limit

Jobs that do not match these limits (that is, all users except user1 and user3 running jobs on hostA, and all users except user2 submitting jobs to queue normal) have no limits.

Example: Horizontal format

All users in user group ugroup1 except user1 using queue1 and queue2 and running jobs on hosts in host group hgroup1 are limited to 2 job slots per processor on each host:
Begin Limit 
# ugroup1 except user1 uses queue1 and queue2 with 2 job slots 
# on each host in hgroup1 
NAME          = limit1 
# Resources 
SLOTS_PER_PROCESSOR = 2 
#Consumers
QUEUES       = queue1 queue2 
USERS        = ugroup1 ~user1 
PER_HOST     = hgroup1 
End Limit

Compatibility with lsb.queues, lsb.users, and lsb.hosts

The Limit section does not support the keywords or format used in lsb.users, lsb.hosts, and lsb.queues. However, your existing job slot limit configuration in these files will continue to apply.

Job slot limits are the only type of limit you can configure in lsb.users, lsb.hosts, and lsb.queues. You cannot configure limits for user groups, host groups and projects in lsb.users, lsb.hosts, and lsb.queues. You should not configure any new resource allocation limits in lsb.users, lsb.hosts, and lsb.queues. Use this section to configure all new resource allocation limits, including job slot limits. Limits on running and suspended jobs can only be set in this section.

Existing limits in lsb.users, lsb.hosts, and lsb.queues with the same scope as a new limit in this section, but with a different value, are ignored. The value of the new limit in this section is used. Similar limits with different scope enforce the most restrictive limit.
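For example, the following sketch (names and values are illustrative) shows two Limit sections with different scope. A job submitted by user1 to the normal queue counts toward both limits and can start only while neither limit is exceeded, so the more restrictive limit effectively governs:

Begin Limit
NAME   = user_limit
USERS  = user1
SLOTS  = 10
End Limit

Begin Limit
NAME   = queue_limit
QUEUES = normal
SLOTS  = 5
End Limit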

Parameters

  • APPS
  • ELIGIBLE_PEND_JOBS
  • HOSTS
  • INELIGIBLE
  • JOBS
  • JOBS_PER_SCHED_CYCLE
  • LIC_PROJECTS
  • MEM
  • NAME
  • PER_APP
  • PER_HOST
  • PER_LIC_PROJECT
  • PER_PROJECT
  • PER_QUEUE
  • PER_USER
  • PROJECTS
  • QUEUES
  • RESOURCE
  • SLOTS
  • SLOTS_PER_PROCESSOR
  • SWP
  • TMP
  • USERS

APPS

Syntax

APPS=all [~]application_profile_name ...

APPS

( [-] | all [~]application_profile_name ... )

Description

A space-separated list of application profile names on which limits are enforced. Limits are enforced on all application profiles listed.

The list must contain valid application profile names defined in lsb.applications.

To specify a per-application limit, use the PER_APP keyword. Do not configure APPS and PER_APP limits in the same Limit section.

In horizontal format, use only one APPS line per Limit section.

Use the keyword all to configure limits that apply to all applications in a cluster.

Use the not operator (~) to exclude applications from the all specification in the limit. This is useful if you have a large number of applications but only want to exclude a few applications from the limit definition.

In vertical tabular format, multiple application profile names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

None. If no limit is specified for PER_APP or APPS, no limit is enforced on any application profile.

Example

APPS=appA appB

ELIGIBLE_PEND_JOBS

Syntax

ELIGIBLE_PEND_JOBS=integer

Description

The maximum number of eligible jobs that are considered for dispatch in a single scheduling cycle. Specify a positive integer or 0. This parameter can only be used with the following consumer types:

  • USERS or PER_USER
  • QUEUES or PER_QUEUE

The all group, or any group containing all, is not supported.

The limit is ignored for any other defined consumer types.

Valid values

Any positive integer or 0.

Default

No limit

Example

ELIGIBLE_PEND_JOBS=10
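A fuller sketch (the queue name and value are illustrative) showing this limit applied to a queue:

Begin Limit
NAME               = sched_cycle_limit
QUEUES             = normal
ELIGIBLE_PEND_JOBS = 10
End Limit

With this configuration, at most 10 eligible pending jobs from the normal queue are considered for dispatch in each scheduling cycle.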

HOSTS

Syntax

HOSTS=all [~]host_name ... | all [~]host_group ...

HOSTS

( [-] | all [~]host_name ... | all [~]host_group ... )

Description

A space-separated list of hosts or host groups defined in lsb.hosts on which limits are enforced. Limits are enforced on all hosts or host groups listed.

If a group contains a subgroup, the limit also applies to each member in the subgroup recursively.

To specify a per-host limit, use the PER_HOST keyword. Do not configure HOSTS and PER_HOST limits in the same Limit section.

If you specify MEM, TMP, or SWP as a percentage, you must specify PER_HOST and list the hosts that the limit is to be enforced on. You cannot specify HOSTS.

In horizontal format, use only one HOSTS line per Limit section.

Use the keyword all to configure limits that apply to all hosts in a cluster.

Use the not operator (~) to exclude hosts from the all specification in the limit. This is useful if you have a large cluster but only want to exclude a few hosts from the limit definition.

In vertical tabular format, multiple host names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

all (limits are enforced on all hosts in the cluster).

Example 1

HOSTS=Group1 ~hostA hostB hostC

Enforces limits on hostB, hostC, and all hosts in Group1 except for hostA.

Example 2

HOSTS=all ~group2 ~hostA

Enforces limits on all hosts in the cluster, except for hostA and the hosts in group2.

Example 3

HOSTS                   SWP
(all ~hostK ~hostM)     10

Enforces a 10 MB (or the unit set in LSF_UNIT_FOR_LIMITS in lsf.conf) swap limit on all hosts in the cluster, except for hostK and hostM.

INELIGIBLE

Syntax

INELIGIBLE=Y|y|N|n

Description

If set to y or Y in a specific Limit section and a job cannot be scheduled because of this limit, the LSF scheduler puts the job into the ineligible pending state. LSF calculates the ineligible pending time for the job, and the job's priority does not increase.
Note: The following Limit types are compatible with the INELIGIBLE parameter: JOBS, USERS, PER_USER, QUEUES, PER_QUEUE, PROJECTS, PER_PROJECT, LIC_PROJECTS, and PER_LIC_PROJECT.

Default

INELIGIBLE=N

Example

INELIGIBLE=Y
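A fuller sketch (names and values are illustrative) that combines INELIGIBLE with a JOBS limit:

Begin Limit
NAME       = proj_job_limit
PROJECTS   = projA
JOBS       = 20
INELIGIBLE = Y
End Limit

Jobs from projA that cannot be scheduled because the JOBS limit is reached are placed in the ineligible pending state.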

JOBS

Syntax

JOBS=integer

JOBS

- | integer

Description

Maximum number of jobs that are available to resource consumers, including running and suspended (RUN, SSUSP, USUSP) jobs, as well as jobs that have reserved slots but are still pending. Specify a positive integer greater than or equal to 0. Job limits can be defined in both vertical and horizontal limit formats.

If preemption is enabled, the JOBS limit does not block the preemption based on slots.

With multicluster resource lease models, this limit applies only to local hosts being used by the local cluster. The job limit for hosts exported to a remote cluster is determined by the host export policy, not by this parameter. The job limit for borrowed hosts is determined by the host export policy of the remote cluster.

If SLOTS are configured in the Limit section, the most restrictive limit is applied.

If HOSTS are configured in the Limit section, JOBS is the number of running and suspended jobs on a host. If preemptive scheduling is used, the suspended jobs are not counted against the job limit.

Use this parameter to prevent a host from being overloaded with too many jobs, and to maximize the throughput of a machine.

If only QUEUES are configured in the Limit section, JOBS is the maximum number of jobs that can run in the listed queues.

If only USERS are configured in the Limit section, JOBS is the maximum number of jobs that the users or user groups can run.

If only HOSTS are configured in the Limit section, JOBS is the maximum number of jobs that can run on the listed hosts.

If only PROJECTS are configured in the Limit section, JOBS is the maximum number of jobs that can run under the listed projects.

Use APPS or PER_APP, QUEUES or PER_QUEUE, USERS or PER_USER, HOSTS or PER_HOST, LIC_PROJECTS or PER_LIC_PROJECT, and PROJECTS or PER_PROJECT in combination to further limit jobs available to resource consumers.

In horizontal format, use only one JOBS line per Limit section.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

Default

No limit

Example

JOBS=20
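A fuller sketch (the host group name and value are illustrative) that caps the number of running and suspended jobs on each host in a host group:

Begin Limit
NAME     = host_job_limit
PER_HOST = hgroup1
JOBS     = 8
End Limit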

JOBS_PER_SCHED_CYCLE

Syntax

JOBS_PER_SCHED_CYCLE=integer

Description

Use ELIGIBLE_PEND_JOBS instead. This parameter is only maintained for backwards compatibility.

Maximum number of jobs that are considered for dispatch in a single scheduling cycle. Specify a positive integer or 0. This parameter can only be used with the following consumer types:

  • USERS or PER_USER
  • QUEUES or PER_QUEUE

The all group, or any group containing all, is not supported.

The limit is ignored for any other defined consumer types.

Valid values

Any positive integer or 0.

Default

No limit

Example

JOBS_PER_SCHED_CYCLE=10

LIC_PROJECTS

Syntax

LIC_PROJECTS=all [~]lic_project_name ...

LIC_PROJECTS

( [-] | all [~]lic_project_name ... )

Description

A space-separated list of LSF License Scheduler project names on which limits are enforced. Limits are enforced on all projects listed. Each project name can be up to 511 characters long.

To specify a per-project limit, use the PER_LIC_PROJECT keyword. Do not configure LIC_PROJECTS and PER_LIC_PROJECT limits in the same Limit section.

In horizontal format, use only one LIC_PROJECTS line per Limit section.

Use the keyword all to configure limits that apply to all projects in a cluster.

Use the not operator (~) to exclude projects from the all specification in the limit. This is useful if you have a large number of projects but only want to exclude a few projects from the limit definition.

In vertical tabular format, multiple project names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

all (limits are enforced on all projects in the cluster)

Example

LIC_PROJECTS=licprojA licprojB

MEM

Syntax

MEM=integer[%]

MEM

- | integer[%]

Description

Maximum amount of memory available to resource consumers. Specify a value in MB (or the unit set in LSF_UNIT_FOR_LIMITS in lsf.conf) as a positive integer greater than or equal to 0.

The Limit section is ignored if MEM is specified as a percentage either without PER_HOST, or with HOSTS.

In horizontal format, use only one MEM line per Limit section.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

If only QUEUES are configured in the Limit section, MEM must be an integer value. MEM is the maximum amount of memory available to the listed queues.

If only USERS are configured in the Limit section, MEM must be an integer value. MEM is the maximum amount of memory that the users or user groups can use.

If only HOSTS are configured in the Limit section, MEM must be an integer value. It cannot be a percentage. MEM is the maximum amount of memory available to the listed hosts.

If only PROJECTS are configured in the Limit section, MEM must be an integer value. MEM is the maximum amount of memory available to the listed projects.

Use APPS or PER_APP, QUEUES or PER_QUEUE, USERS or PER_USER, HOSTS or PER_HOST, LIC_PROJECTS or PER_LIC_PROJECT, and PROJECTS or PER_PROJECT in combination to further limit memory available to resource consumers.

Default

No limit

Example

MEM=20
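Because a percentage value requires PER_HOST, a sketch (the value is illustrative) that limits jobs to half of each host's memory:

Begin Limit
NAME     = host_mem_limit
PER_HOST = all
MEM      = 50%
End Limit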

NAME

Syntax

NAME=limit_name

NAME

- | limit_name

Description

Name of the Limit section

Specify any ASCII string 40 characters or less. You can use letters, digits, underscores (_) or dashes (-). You cannot use blank spaces.

If duplicate limit names are defined, the Limit section is ignored. If the value of NAME is not defined in vertical format, or is defined as a dash (-), blimits displays NONAMEnnn.

Default

None. In horizontal format, you must provide a name for the Limit section. NAME is optional in the vertical format.

Example

NAME=short_limits

PER_APP

Syntax

PER_APP=all [~]application_profile_name ...

PER_APP

( [-] | all [~]application_profile_name ... )

Description

A space-separated list of application profile names on which limits are enforced. Limits are enforced on jobs submitted to each application profile listed.

Do not configure PER_APP and APPS limits in the same Limit section.

In horizontal format, use only one PER_APP line per Limit section.

Use the keyword all to configure limits that apply to each application in a cluster.

Use the not operator (~) to exclude applications from the all specification in the limit.

In vertical tabular format, multiple application profile names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

None. If no limit is specified for PER_APP or APPS, no limit is enforced on any application profile.

Example

PER_APP=app1 app2

PER_HOST

Syntax

PER_HOST=all [~]host_name ... | all [~]host_group ...

PER_HOST

( [-] | all [~]host_name ... | all [~]host_group ... )

Description

A space-separated list of hosts or host groups defined in lsb.hosts on which limits are enforced. Limits are enforced on each host, or individually on each member of the listed host groups. If a group contains a subgroup, the limit also applies to each member of the subgroup recursively.

Do not configure PER_HOST and HOSTS limits in the same Limit section.

In horizontal format, use only one PER_HOST line per Limit section.

If you specify MEM, TMP, or SWP as a percentage, you must specify PER_HOST and list the hosts that the limit is to be enforced on. You cannot specify HOSTS.

Use the keyword all to configure limits that apply to each host in a cluster. If host groups are configured, the limit applies to each member of the host group, not the group as a whole.

Use the not operator (~) to exclude hosts or host groups from the all specification in the limit. This is useful if you have a large cluster but only want to exclude a few hosts from the limit definition.

In vertical tabular format, multiple host names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

None. If no limit is specified for PER_HOST or HOSTS, no limit is enforced on any host or host group.

Example

PER_HOST=hostA hgroup1 ~hostC

PER_LIC_PROJECT

Syntax

PER_LIC_PROJECT=all [~]lic_project_name ...

PER_LIC_PROJECT

( [-] | all [~]lic_project_name ... )

Description

A space-separated list of LSF License Scheduler project names on which limits are enforced. Limits are enforced on each project listed. Each project name can be up to 511 characters long.

Do not configure PER_LIC_PROJECT and LIC_PROJECTS limits in the same Limit section.

In horizontal format, use only one PER_LIC_PROJECT line per Limit section.

Use the keyword all to configure limits that apply to each LSF License Scheduler project in a cluster.

Use the not operator (~) to exclude LSF License Scheduler projects from the all specification in the limit.

In vertical tabular format, multiple project names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

None. If no limit is specified for PER_LIC_PROJECT or LIC_PROJECTS, no limit is enforced on any LSF License Scheduler project.

Example

PER_LIC_PROJECT=licproj1 licproj2

PER_PROJECT

Syntax

PER_PROJECT=all [~]project_name ...

PER_PROJECT

( [-] | all [~]project_name ... )

Description

A space-separated list of project names on which limits are enforced. Limits are enforced on each project listed. Each project name can be up to 511 characters long.

Do not configure PER_PROJECT and PROJECTS limits in the same Limit section.

In horizontal format, use only one PER_PROJECT line per Limit section.

Use the keyword all to configure limits that apply to each project in a cluster.

Use the not operator (~) to exclude projects from the all specification in the limit.

In vertical tabular format, multiple project names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

None. If no limit is specified for PER_PROJECT or PROJECTS, no limit is enforced on any project.

Example

PER_PROJECT=proj1 proj2

PER_QUEUE

Syntax

PER_QUEUE=all [~]queue_name ...

PER_QUEUE

( [-] | all [~]queue_name ... )

Description

A space-separated list of queue names on which limits are enforced. Limits are enforced on jobs submitted to each queue listed.

Do not configure PER_QUEUE and QUEUES limits in the same Limit section.

In horizontal format, use only one PER_QUEUE line per Limit section.

Use the keyword all to configure limits that apply to each queue in a cluster.

Use the not operator (~) to exclude queues from the all specification in the limit. This is useful if you have a large number of queues but only want to exclude a few queues from the limit definition.

In vertical tabular format, multiple queue names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

None. If no limit is specified for PER_QUEUE or QUEUES, no limit is enforced on any queue.

Example

PER_QUEUE=priority night

PER_USER

Syntax

PER_USER=all [~]user_name ... | all [~]user_group ...

PER_USER

( [-] | all [~]user_name ... | all [~]user_group ... )

Description

A space-separated list of user names or user groups on which limits are enforced. Limits are enforced on each user, or individually on each member of the listed user groups. If a user group contains a subgroup, the limit also applies to each member of the subgroup recursively.

User names must be valid login names. User group names can be LSF user groups or UNIX and Windows user groups. Note that for LSF and UNIX user groups, the groups must be specified in a UserGroup section in lsb.users first.

Do not configure PER_USER and USERS limits in the same Limit section.

In horizontal format, use only one PER_USER line per Limit section.

Use the keyword all to configure limits that apply to each user in a cluster. If user groups are configured, the limit applies to each member of the user group, not the group as a whole.

Use the not operator (~) to exclude users or user groups from the all specification in the limit. This is useful if you have a large number of users but only want to exclude a few users from the limit definition.

In vertical tabular format, multiple user names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

None. If no limit is specified for PER_USER or USERS, no limit is enforced on any user or user group.

Example

PER_USER=user1 user2 ugroup1 ~user3
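To contrast PER_USER with USERS, a sketch (names and values are illustrative): the first section gives each user up to 10 slots individually, while the second gives the members of ugroup1 a combined total of 10 slots:

Begin Limit
NAME     = per_user_slots
PER_USER = all
SLOTS    = 10
End Limit

Begin Limit
NAME  = group_slots
USERS = ugroup1
SLOTS = 10
End Limit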

PROJECTS

Syntax

PROJECTS=all [~]project_name ...

PROJECTS

( [-] | all [~]project_name ... )

Description

A space-separated list of project names on which limits are enforced. Limits are enforced on all projects listed. Each project name can be up to 511 characters long.

To specify a per-project limit, use the PER_PROJECT keyword. Do not configure PROJECTS and PER_PROJECT limits in the same Limit section.

In horizontal format, use only one PROJECTS line per Limit section.

Use the keyword all to configure limits that apply to all projects in a cluster.

Use the not operator (~) to exclude projects from the all specification in the limit. This is useful if you have a large number of projects but only want to exclude a few projects from the limit definition.

In vertical tabular format, multiple project names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

all (limits are enforced on all projects in the cluster)

Example

PROJECTS=projA projB

QUEUES

Syntax

QUEUES=all [~]queue_name ...

QUEUES

( [-] | all [~]queue_name ... )

Description

A space-separated list of queue names on which limits are enforced. Limits are enforced on all queues listed.

The list must contain valid queue names defined in lsb.queues.

To specify a per-queue limit, use the PER_QUEUE keyword. Do not configure QUEUES and PER_QUEUE limits in the same Limit section.

In horizontal format, use only one QUEUES line per Limit section.

Use the keyword all to configure limits that apply to all queues in a cluster.

Use the not operator (~) to exclude queues from the all specification in the limit. This is useful if you have a large number of queues but only want to exclude a few queues from the limit definition.

In vertical tabular format, multiple queue names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

all (limits are enforced on all queues in the cluster)

Example

QUEUES=normal night

RESOURCE

Syntax

RESOURCE=[shared_resource,integer] [[shared_resource,integer] ...]

RESOURCE

( [-] | [shared_resource,integer] [[shared_resource,integer] ...] )

Description

Maximum amount of any user-defined shared resource available to consumers.

In horizontal format, use only one RESOURCE line per Limit section.

In vertical tabular format, resource names must be enclosed in parentheses.

In vertical tabular format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

None

Examples

RESOURCE=[stat_shared,4]
Begin Limit
RESOURCE                       PER_HOST
([stat_shared,4])              (all ~hostA)
([dyn_rsrc,1] [stat_rsrc,2])  (hostA)
End Limit

SLOTS

Syntax

SLOTS=integer

SLOTS

- | integer

Description

Maximum number of job slots available to resource consumers. Specify a positive integer greater than or equal to 0.

With multicluster resource lease models, this limit applies only to local hosts being used by the local cluster. The job slot limit for hosts exported to a remote cluster is determined by the host export policy, not by this parameter. The job slot limit for borrowed hosts is determined by the host export policy of the remote cluster.

If JOBS are configured in the Limit section, the most restrictive limit is applied.

If HOSTS are configured in the Limit section, SLOTS is the number of running and suspended jobs on a host. The SLOTS limit blocks preemption from occurring.

To fully use the CPU resource on multiprocessor hosts, make the number of job slots equal to or greater than the number of processors.

Use this parameter to prevent a host from being overloaded with too many jobs, and to maximize the throughput of a machine.

Use an exclamation point (!) to make the number of job slots equal to the number of CPUs on a host.

If the number of CPUs in a host changes dynamically, mbatchd adjusts the maximum number of job slots per host accordingly. Allow the mbatchd up to 10 minutes to get the number of CPUs for a host. During this period the value of SLOTS is 1.

If only QUEUES are configured in the Limit section, SLOTS is the maximum number of job slots available to the listed queues.

If only USERS are configured in the Limit section, SLOTS is the maximum number of job slots that the users or user groups can use.

If only HOSTS are configured in the Limit section, SLOTS is the maximum number of job slots that are available to the listed hosts.

If only PROJECTS are configured in the Limit section, SLOTS is the maximum number of job slots that are available to the listed projects.

Use APPS or PER_APP, QUEUES or PER_QUEUE, USERS or PER_USER, HOSTS or PER_HOST, LIC_PROJECTS or PER_LIC_PROJECT, and PROJECTS or PER_PROJECT in combination to further limit job slots per processor available to resource consumers.

In horizontal format, use only one SLOTS line per Limit section.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

Default

No limit

Example

SLOTS=20
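A fuller sketch (the queue and host group names are illustrative) limiting the slots that a queue can use on a set of hosts:

Begin Limit
NAME   = queue_slot_limit
QUEUES = normal
HOSTS  = hgroup1
SLOTS  = 100
End Limit

Jobs from the normal queue can use at most 100 slots in total on the hosts in hgroup1.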

SLOTS_PER_PROCESSOR

Syntax

SLOTS_PER_PROCESSOR=number

SLOTS_PER_PROCESSOR

- | number

Description

Per processor job slot limit, based on the number of processors on each host affected by the limit.

Maximum number of job slots that each resource consumer can use per processor. This job slot limit is configured per processor so that multiprocessor hosts will automatically run more jobs.

You must also specify PER_HOST and list the hosts on which the limit is to be enforced. The Limit section is ignored if SLOTS_PER_PROCESSOR is specified without PER_HOST, or with HOSTS.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

To fully use the CPU resource on multiprocessor hosts, make the number of job slots equal to or greater than the number of processors.

Use this parameter to prevent a host from being overloaded with too many jobs, and to maximize the throughput of a machine.

This number can be a fraction such as 0.5, so that it can also serve as a per-CPU limit on multiprocessor machines. This number is rounded up to the nearest integer equal to or greater than the total job slot limits for a host. For example, if SLOTS_PER_PROCESSOR is 0.5, on a 4-CPU multiprocessor host, users can only use up to 2 job slots at any time. On a single-processor machine, users can use 1 job slot.

Use an exclamation point (!) to make the number of job slots equal to the number of CPUs on a host.

If the number of CPUs in a host changes dynamically, mbatchd adjusts the maximum number of job slots per host accordingly. Allow the mbatchd up to 10 minutes to get the number of CPUs for a host. During this period the number of CPUs is 1.

If only QUEUES and PER_HOST are configured in the Limit section, SLOTS_PER_PROCESSOR is the maximum number of job slots per processor available to the listed queues for any hosts, users, or projects.

If only USERS and PER_HOST are configured in the Limit section, SLOTS_PER_PROCESSOR is the maximum number of job slots per processor that the users or user groups can use on any hosts, queues, license projects, or projects.

If only PER_HOST is configured in the Limit section, SLOTS_PER_PROCESSOR is the maximum number of job slots per processor available to the listed hosts for any users, queues, or projects.

If only PROJECTS and PER_HOST are configured in the Limit section, SLOTS_PER_PROCESSOR is the maximum number of job slots per processor available to the listed projects for any users, queues, or hosts.

Use APPS or PER_APP, QUEUES or PER_QUEUE, USERS or PER_USER, PER_HOST, LIC_PROJECTS or PER_LIC_PROJECT, and PROJECTS or PER_PROJECT in combination to further limit job slots per processor available to resource consumers.

Default

No limit

Example

SLOTS_PER_PROCESSOR=2

SWP

Syntax

SWP=integer[%]

SWP

- | integer[%]

Description

Maximum amount of swap space available to resource consumers. Specify a value in MB (or the unit set in LSF_UNIT_FOR_LIMITS in lsf.conf) as a positive integer greater than or equal to 0.

The Limit section is ignored if SWP is specified as a percentage without PER_HOST, or with HOSTS.

In horizontal format, use only one SWP line per Limit section.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

If only USERS are configured in the Limit section, SWP must be an integer value. SWP is the maximum amount of swap space that the users or user groups can use on any hosts, queues or projects.

If only HOSTS are configured in the Limit section, SWP must be an integer value. SWP is the maximum amount of swap space available to the listed hosts for any users, queues or projects.

If only PROJECTS are configured in the Limit section, SWP must be an integer value. SWP is the maximum amount of swap space available to the listed projects for any users, queues or hosts.

If only LIC_PROJECTS are configured in the Limit section, SWP must be an integer value. SWP is the maximum amount of swap space available to the listed projects for any users, queues, projects, or hosts.

Use APPS or PER_APP, QUEUES or PER_QUEUE, USERS or PER_USER, HOSTS or PER_HOST, LIC_PROJECTS or PER_LIC_PROJECT, and PROJECTS or PER_PROJECT in combination to further limit swap space available to resource consumers.

Default

No limit

Example

SWP=60

TMP

Syntax

TMP=integer[%]

TMP

- | integer[%]

Description

Maximum amount of tmp space available to resource consumers. Specify a value in MB (or the unit set in LSF_UNIT_FOR_LIMITS in lsf.conf) as a positive integer greater than or equal to 0.

The Limit section is ignored if TMP is specified as a percentage without PER_HOST, or with HOSTS.

In horizontal format, use only one TMP line per Limit section.

In vertical format, use empty parentheses () or a dash (-) to indicate the default value (no limit). Fields cannot be left blank.

If only QUEUES are configured in the Limit section, TMP must be an integer value. TMP is the maximum amount of tmp space available to the listed queues for any hosts, users, or projects.

If only USERS are configured in the Limit section, TMP must be an integer value. TMP is the maximum amount of tmp space that the users or user groups can use on any hosts, queues or projects.

If only HOSTS are configured in the Limit section, TMP must be an integer value. TMP is the maximum amount of tmp space available to the listed hosts for any users, queues or projects.

If only PROJECTS are configured in the Limit section, TMP must be an integer value. TMP is the maximum amount of tmp space available to the listed projects for any users, queues or hosts.

If only LIC_PROJECTS are configured in the Limit section, TMP must be an integer value. TMP is the maximum amount of tmp space available to the listed projects for any users, queues, projects, or hosts.

Use APPS or PER_APP, QUEUES or PER_QUEUE, USERS or PER_USER, HOSTS or PER_HOST, LIC_PROJECTS or PER_LIC_PROJECT, and PROJECTS or PER_PROJECT in combination to further limit tmp space available to resource consumers.

Default

No limit

Example

TMP=20%

USERS

Syntax

USERS=all [~]user_name ... | all [~]user_group ...

USERS

( [-] | all [~]user_name ... | all [~]user_group ... )

Description

A space-separated list of user names or user groups on which limits are enforced. Limits are enforced on all users or groups listed. Limits apply to a group as a whole.

If a group contains a subgroup, the limit also applies to each member in the subgroup recursively.

User names must be valid login names. User group names can be LSF user groups or UNIX and Windows user groups. UNIX user groups must be configured in lsb.users.

To specify a per-user limit, use the PER_USER keyword. Do not configure USERS and PER_USER limits in the same Limit section.

In horizontal format, use only one USERS line per Limit section.

Use the keyword all to configure limits that apply to all users or user groups in a cluster.

Use the not operator (~) to exclude users or user groups from the all specification in the limit. This is useful if you have a large number of users but only want to exclude a few users or groups from the limit definition.

In vertical format, multiple user names must be enclosed in parentheses.

In vertical format, use empty parentheses () or a dash (-) to indicate an empty field. Fields cannot be left blank.

Default

all (limits are enforced on all users in the cluster)

Example

USERS=user1 user2

GuaranteedResourcePool section

Defines a guarantee policy. A guarantee is a commitment to ensure availability of a number of resources to a service class, where a service class is a container for jobs. Each guarantee pool can provide guarantees to multiple service classes, and each service class can have guarantees in multiple pools.

To use guaranteed resources, configure service classes with GOALS=[GUARANTEE] in the lsb.serviceclasses file.
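For example, a minimal service class sketch in lsb.serviceclasses (the name is illustrative):

Begin ServiceClass
NAME  = sc1
GOALS = [GUARANTEE]
End ServiceClass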

Note: Hosts that are not ready for dispatched jobs are not assigned to the guaranteed resource pool. This includes hosts that are in unavail or unreach status, and hosts that are closed by the administrator.

GuaranteedResourcePool section structure

Each resource pool is defined in a GuaranteedResourcePool section and enclosed by Begin GuaranteedResourcePool and End GuaranteedResourcePool.

You must configure a NAME, TYPE and DISTRIBUTION for each GuaranteedResourcePool section.

The order of GuaranteedResourcePool sections is important because the sections are evaluated in the order configured. Each host can belong to only one pool of host-based resources (slots, hosts, or package, each of which can have its own GuaranteedResourcePool section). Ensure that all GuaranteedResourcePool sections (except the last one) define the HOSTS parameter so that they do not contain the default of all hosts.

When LSF starts up, it goes through the hosts and assigns each host to a pool that will accept the host based on the pool's RES_SELECT and HOSTS parameters. If multiple pools will accept the host, the host will be assigned to the first pool according to the configuration order of the pools.

Example GuaranteedResourcePool sections

Begin GuaranteedResourcePool
NAME = linuxGuarantee
TYPE = slots
HOSTS = linux_group
DISTRIBUTION = [sc1, 25] [sc2, 30]
LOAN_POLICIES=QUEUES[all] DURATION[15]
DESCRIPTION = This is the resource pool for the hostgroup linux_group, with\ 
25 slots guaranteed to sc1 and 30 slots guaranteed to sc2. Resources are\ 
loaned to jobs from any queue with run times of up to 15 minutes.
End GuaranteedResourcePool
Begin GuaranteedResourcePool
NAME = x86Guarantee
TYPE = slots
HOSTS = linux_x86
DISTRIBUTION = [sc1, 25]
LOAN_POLICIES=QUEUES[short_jobs] DURATION[15]
DESCRIPTION = This is the resource pool for the hostgroup\ 
linux_x86 using the short_jobs queue, with 25 slots guaranteed\ 
to sc1. Resources are loaned to jobs for up to 15 minutes.
End GuaranteedResourcePool
Begin GuaranteedResourcePool
NAME = resource2pool
TYPE = resource[f2]
DISTRIBUTION = [sc1, 25%] [sc2, 25%]
LOAN_POLICIES=QUEUES[all] DURATION[10]
DESCRIPTION = This is the resource pool for all f2 resources managed by\
LSF License Scheduler, with 25% guaranteed to each of sc1 and sc2. \
Resources are loaned to jobs from any queue with runtimes of up to 10 minutes.
End GuaranteedResourcePool

Parameters

  • NAME
  • TYPE
  • HOSTS
  • RES_SELECT
  • DISTRIBUTION
  • LOAN_POLICIES
  • DESCRIPTION
  • ADMINISTRATORS

NAME

Syntax

NAME=name

Description

The name of the guarantee policy.

Default

None. You must provide a name for the guarantee.

TYPE

Syntax

TYPE = slots | hosts | resource[shared_resource] | package[slots=[slots_per_package][:mem=mem_per_package]]

Description

Defines the type of resources to be guaranteed in this guarantee policy. These can either be slots, whole hosts, packages composed of an amount of slots and memory bundled on a single host, or licenses managed by LSF License Scheduler.

Specify resource[license] to guarantee licenses (which must be managed by LSF License Scheduler) to service class guarantee jobs.

The package keyword specifies the combination of memory and slots that defines the packages that are treated as resources reserved by service class guarantee jobs. For example:

TYPE=package[slots=1:mem=1000]

Each unit guaranteed is for one slot and 1000 MB of memory.

LSF_UNIT_FOR_LIMITS in lsf.conf determines the units of memory in the package definition. The default value of LSF_UNIT_FOR_LIMITS is MB, therefore the guarantee is for 1000 MB of memory.

A package need not have both slots and memory. Setting TYPE=package[slots=1] is the equivalent of slots. In order to provide guarantees for parallel jobs that require multiple CPUs on a single host where memory is not an important resource, you can use packages with multiple slots and not specify mem.

Each host can belong to at most one slot, host, or package guarantee pool.

Default

None. You must specify the type of guarantee.

HOSTS

Syntax

HOSTS=all | allremote | all@cluster_name ... | [~]host_name | [~]host_group

Description

A space-separated list of hosts or host groups defined in lsb.hosts, on which the guarantee is enforced.

Use the keyword all to include all hosts in a cluster. Use the not operator (~) to exclude hosts from the all specification in the guarantee.

Use host groups for greater flexibility, since host groups have additional configuration options.

Ensure all GuaranteedResourcePool sections (except the last one) define the HOSTS or RES_SELECT parameter, so they do not contain the default of all hosts.

Default

all

RES_SELECT

Syntax

RES_SELECT=res_req

Description

Resource requirement string with which all hosts defined by the HOSTS parameter are further filtered. For example, RES_SELECT=type==LINUX86

Only static host attributes can be used in RES_SELECT. Do not use consumable resources or dynamic resources.

Default

None. RES_SELECT is optional.

DISTRIBUTION

Syntax

DISTRIBUTION=([service_class_name, amount[%]]...)

Description

Assigns the amount of resources in the pool to the specified service classes, where amount can be an absolute number or a percentage of the resources in the pool. The outer brackets are optional.

When configured as a percentage, the total can exceed 100% but each assigned percentage cannot exceed 100%. For example:

DISTRIBUTION=[sc1,50%] [sc2,50%] [sc3,50%] is an acceptable configuration even though the total percentages assigned add up to 150%.

DISTRIBUTION=[sc1,120%] is not an acceptable configuration, since the percentage for sc1 is greater than 100%.

Each service class must be configured in lsb.serviceclasses, with GOALS=[GUARANTEE].

When configured as a percentage and there are remaining resources to distribute (because the calculated number of slots is rounded down), LSF distributes the remaining resources using round-robin distribution, starting with the first configured service class. Therefore, the service classes that you define first will receive additional resources regardless of the configured percentage. For example, there are 93 slots in a pool and you configure the following guarantee distribution:

DISTRIBUTION=[sc1,30%] [sc2,10%] [sc3,30%]

The total number of slots assigned to the guarantee policy is floor((30% + 10% + 30%) * 93 slots) = 65 slots.

The slots are distributed to the service classes as follows:

  • sc1_slots = floor(30%*93) = 27
  • sc2_slots = floor(10%*93) = 9
  • sc3_slots = floor(30%*93) = 27

As a result of rounding down, the total number of distributed slots is 27+9+27=63 slots, which means there are two remaining slots to distribute. Using round-robin distribution, LSF distributes one slot each to sc1 and sc2 because these service classes are defined first. Therefore, the final slot distribution to the service classes are as follows:

  • sc1_slots = floor(30%*93) + 1 = 28
  • sc2_slots = floor(10%*93) + 1 = 10
  • sc3_slots = floor(30%*93) = 27
If you configure sc3 before sc2 (DISTRIBUTION=[sc1,30%] [sc3,30%] [sc2,10%]), LSF distributes the two remaining slots to sc1 and sc3. Therefore, the slots are distributed as follows:
  • sc1_slots = floor(30%*93) + 1 = 28
  • sc3_slots = floor(30%*93) + 1 = 28
  • sc2_slots = floor(10%*93) = 9

Default

None. You must provide a distribution for the resource pool.

LOAN_POLICIES

Syntax

LOAN_POLICIES=QUEUES[[!]queue_name ...|all] [CLOSE_ON_DEMAND] [DURATION[minutes]] [IDLE_BUFFER[amount[%]]]

Description

By default, LSF will reserve sufficient resources in each guarantee pool to honor the configured guarantees. To increase utilization, use LOAN_POLICIES to allow any job (with or without guarantees) to use these reserved resources when not needed by jobs with guarantees. When resources are loaned out, jobs with guarantees may have to wait for jobs to finish before they are able to dispatch in the pool.

QUEUES[all | [!]queue_name ...] loans only to jobs from the specified queue or queues. You must specify which queues are permitted to borrow resources reserved for guarantees. Specify an exclamation point (!) before the queue name for that queue to ignore any IDLE_BUFFER and DURATION policies when deciding whether a job in the queue can borrow unused guaranteed resources.

When CLOSE_ON_DEMAND is specified, LSF stops loaning out from a pool whenever there is pending demand from jobs with guarantees in the pool.

DURATION[minutes] only allows jobs to borrow the resources if the job run limit (or estimated run time) is no larger than minutes. Loans limited by job duration make the guaranteed resources available within the time specified by minutes. Jobs running longer than the estimated run time will run to completion regardless of the actual run time.

IDLE_BUFFER[amount[%]] makes LSF attempt to keep the amount of resources specified in IDLE_BUFFER idle as long as there are unused guarantees. These idle resources can only be used to honor guarantees. Whenever the number of free resources in the pool drops below the IDLE_BUFFER amount, LSF stops loaning resources from the pool.

Note: The RETAIN keyword is deprecated in LSF, Version 10.1.0 Fix Pack 10. Use IDLE_BUFFER instead of RETAIN.

Default

None. LOAN_POLICIES is optional.
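A sketch (the queue name and values are illustrative) combining the keywords described above:

LOAN_POLICIES=QUEUES[short] DURATION[30] IDLE_BUFFER[10%]

This loans unused guaranteed resources only to jobs from the short queue with run limits (or estimated run times) of at most 30 minutes, and stops loaning whenever the free resources in the pool drop below 10%.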

DESCRIPTION

Syntax

DESCRIPTION=description

Description

A description of the guarantee policy.

Default

None. DESCRIPTION is optional.

ADMINISTRATORS

Syntax

ADMINISTRATORS=user_name | user_group ...

Description

When this parameter is defined, the specified users or user group members can manage the corresponding GuaranteedResourcePool by using the bconf command.

Note: To specify a Windows user account or user group, include the domain name in uppercase letters: DOMAIN_NAME\user_name or DOMAIN_NAME\user_group.

An LSF administrator can also manage a GuaranteedResourcePool by using the bconf command, even if the LSF administrator is explicitly excluded from the specified user group by using the not operator (~).

Default

None. You must be a cluster administrator to operate on this guaranteed resource pool by using the bconf command.

HostExport section

Note: This section is deprecated and might be removed in a future version of LSF.

Defines an export policy for a host or a group of related hosts. Defines how much of each host’s resources are exported, and how the resources are distributed among the consumers.

Each export policy is defined in a separate HostExport section, so it is normal to have multiple HostExport sections in lsb.resources.

HostExport section structure

Use empty parentheses ( ) or a dash (-) to specify the default value for an entry. Fields cannot be left blank.

Example HostExport section

Begin HostExport
PER_HOST     = hostA hostB
SLOTS        = 4
DISTRIBUTION = [cluster1, 1] [cluster2, 3]
MEM          = 100
SWP          = 100
End HostExport

Parameters

  • PER_HOST
  • RES_SELECT
  • NHOSTS
  • DISTRIBUTION
  • MEM
  • SLOTS
  • SWAP
  • TYPE

PER_HOST

Syntax

PER_HOST=host_name...

Description

Required when exporting special hosts.

Determines which hosts to export. Specify one or more LSF hosts by name. Separate names by space.

RES_SELECT

Syntax

RES_SELECT=res_req

Description

Required when exporting workstations.

Determines which hosts to export. Specify the selection part of the resource requirement string (without quotes or parentheses), and LSF will automatically select hosts that meet the specified criteria. For this parameter, if you do not specify the required host type, the default is type==any.

Resource requirement strings in select sections must conform to a stricter syntax. The strict resource requirement syntax applies only to the select section; it does not apply to the other resource requirement sections (order, rusage, same, span, or cu). LSF rejects resource requirement strings where an rusage section contains a non-consumable resource.

The selection criteria are evaluated only once, when a host is exported.

NHOSTS

Syntax

NHOSTS=integer

Description

Required when exporting workstations.

Maximum number of hosts to export. If there are not this many hosts meeting the selection criteria, LSF exports as many as it can.

DISTRIBUTION

Syntax

DISTRIBUTION=([cluster_name, number_shares]...)

Description

Required. Specifies how the exported resources are distributed among consumer clusters.

The syntax for the distribution list is a series of share assignments. The syntax of each share assignment is the cluster name, a comma, and the number of shares, all enclosed in square brackets, as shown. Use a space to separate multiple share assignments. Enclose the full distribution list in a set of round brackets.

cluster_name
Specify the name of a remote cluster that will be allowed to use the exported resources. If you specify a local cluster, the assignment is ignored.
number_shares
Specify a positive integer representing the number of shares of exported resources assigned to the cluster.

The number of shares assigned to a cluster is only meaningful when you compare it to the number assigned to other clusters, or to the total number. The total number of shares is just the sum of all the shares assigned in each share assignment.

MEM

Syntax

MEM=megabytes

Description

Used when exporting special hosts. Specify the amount of memory to export on each host, in MB or in units set in LSF_UNIT_FOR_LIMITS in lsf.conf.

Default

- (provider and consumer clusters compete for available memory)

SLOTS

Syntax

SLOTS=integer

Description

Required when exporting special hosts. Specify the number of job slots to export on each host.

To avoid overloading a partially exported host, you can reduce the number of job slots in the configuration of the local cluster.

SWAP

Syntax

SWAP=megabytes

Description

Used when exporting special hosts. Specify the amount of swap space to export on each host, in MB or in units set in LSF_UNIT_FOR_LIMITS in lsf.conf.

Default

- (provider and consumer clusters compete for available swap space)

TYPE

Syntax

TYPE=shared

Description

Changes the lease type from exclusive to shared.

If you export special hosts with a shared lease (using PER_HOST), you cannot specify multiple consumer clusters in the distribution policy.

Default

Undefined (the lease type is exclusive; exported resources are never available to the provider cluster)

SharedResourceExport section

Note: This section is deprecated and might be removed in a future version of LSF.

Optional. Requires HostExport section. Defines an export policy for a shared resource. Defines how much of the shared resource is exported, and the distribution among the consumers.

The shared resource must be available on hosts defined in the HostExport sections.

SharedResourceExport section structure

All parameters are required.

Example SharedResourceExport section

Begin SharedResourceExport 
NAME= AppRes
NINSTANCES= 10 
DISTRIBUTION= ([C1, 30] [C2, 70])
End SharedResourceExport

Parameters

  • NAME
  • NINSTANCES
  • DISTRIBUTION

NAME

Syntax

NAME=shared_resource_name

Description

Shared resource to export. This resource must be available on the hosts that are exported to the specified clusters; you cannot export resources without hosts.

NINSTANCES

Syntax

NINSTANCES=integer

Description

Maximum quantity of shared resource to export. If the total number available is less than the requested amount, LSF exports all that are available.

DISTRIBUTION

Syntax

DISTRIBUTION=([cluster_name, number_shares]...)

Description

Specifies how the exported resources are distributed among consumer clusters.

The syntax for the distribution list is a series of share assignments. The syntax of each share assignment is the cluster name, a comma, and the number of shares, all enclosed in square brackets, as shown. Use a space to separate multiple share assignments. Enclose the full distribution list in a set of round brackets.
cluster_name
Specify the name of a cluster allowed to use the exported resources.
number_shares
Specify a positive integer representing the number of shares of exported resources assigned to the cluster.

The number of shares assigned to a cluster is only meaningful when you compare it to the number assigned to other clusters, or to the total number. The total number of shares is the sum of all the shares assigned in each share assignment.

ResourceReservation section

By default, only LSF administrators or root can add or delete advance reservations.

The ResourceReservation section defines an advance reservation policy. It specifies:
  • Users or user groups that can create reservations
  • Hosts that can be used for the reservation
  • Time window when reservations can be created

Each advance reservation policy is defined in a separate ResourceReservation section, so it is normal to have multiple ResourceReservation sections in lsb.resources.

Example ResourceReservation section

Only user1 and user2 can make advance reservations on hostA and hostB. The reservation time window is between 8:00 AM and 6:00 PM every day:
Begin ResourceReservation 
NAME        = dayPolicy 
USERS       = user1 user2     # optional 
HOSTS       = hostA hostB     # optional 
TIME_WINDOW = 8:00-18:00      # weekly recurring reservation 
End ResourceReservation
user1 can add the following reservation for user user2 to use on hostA every Friday between 9:00 AM and 11:00 AM:
% user1@hostB> brsvadd -m "hostA" -n 1 -u "user2" -t "5:9:0-5:11:0" 
Reservation "user2#2" is created

Users can only delete reservations they created themselves. In the example, only user user1 can delete the reservation; user2 cannot. Administrators can delete any reservations created by users.

Parameters

  • HOSTS
  • NAME
  • TIME_WINDOW
  • USERS

HOSTS

Syntax

HOSTS=[~]host_name | [~]host_group | all | allremote | all@cluster_name ...

Description

A space-separated list of hosts or host groups defined in lsb.hosts on which administrators or users specified in the USERS parameter can create advance reservations.

The hosts can be local to the cluster or hosts leased from remote clusters.

If a group contains a subgroup, the reservation configuration applies to each member in the subgroup recursively.

Use the keyword all to configure reservation policies that apply to all local hosts in a cluster not explicitly excluded. This is useful if you have a large cluster but you want to use the not operator (~) to exclude a few hosts from the list of hosts where reservations can be created.

Use the keyword allremote to specify all hosts borrowed from all remote clusters.
Tip:

You cannot specify host groups or host partitions that contain the allremote keyword.

Use all@cluster_name to specify the group of all hosts borrowed from one remote cluster. You cannot specify a host group or partition that includes remote resources.

With multicluster resource leasing models, the not operator (~) can be used to exclude local hosts or host groups. You cannot use the not operator (~) with remote hosts.

Examples

HOSTS=hgroup1 ~hostA hostB hostC

Advance reservations can be created on hostB, hostC, and all hosts in hgroup1 except for hostA.

HOSTS=all ~group2 ~hostA

Advance reservations can be created on all hosts in the cluster, except for hostA and the hosts in group2.

Default

all allremote (users can create reservations on all server hosts in the local cluster, and all leased hosts in a remote cluster).

NAME

Syntax

NAME=text

Description

Required. Name of the ResourceReservation section

Specify any ASCII string 40 characters or less. You can use letters, digits, underscores (_) or dashes (-). You cannot use blank spaces.

Example

NAME=reservation1

Default

None. You must provide a name for the ResourceReservation section.

TIME_WINDOW

Syntax

TIME_WINDOW=time_window ...

Description

Optional. Time window for users to create advance reservations. The time for reservations that users create must fall within this time window.

Use the same format for time_window as the recurring reservation option (-t) of brsvadd. To specify a time window, specify two time values separated by a hyphen (-), with no space in between:
time_window = begin_time-end_time [time_zone]

Time format

Times are specified in the format:
[day:]hour[:minute]
where all fields are numbers with the following ranges:
  • day of the week: 0-6 (0 is Sunday)
  • hour: 0-23
  • minute: 0-59

time_zone, which is optional, is the time zone for the time. LSF supports all standard time zone abbreviations. If you do not specify a time zone, LSF uses the local system time zone.

Specify a time window in one of the following ways:
  • hour-hour [time_zone]
  • hour:minute-hour:minute [time_zone]
  • day:hour:minute-day:hour:minute [time_zone]

The default value for minute is 0 (on the hour); the default value for day is every day of the week.

You must specify at least the hour. Day of the week and minute are optional. Both the start time and end time values must use the same syntax. If you do not specify a minute, LSF assumes the first minute of the hour (:00). If you do not specify a day, LSF assumes every day of the week. If you do specify the day, you must also specify the minute.
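
For example, a window written with the day:hour:minute syntax (the values are illustrative only):

TIME_WINDOW=5:18:00-0:8:00

Users can create advance reservations only between Friday at 6:00 PM and Sunday at 8:00 AM.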

You can specify multiple time windows, but they cannot overlap. For example:
timeWindow(8:00-14:00 18:00-22:00)
is correct, but
timeWindow(8:00-14:00 11:00-15:00)
is not valid.

If you specify a time zone for multiple time windows, all time window entries must be consistent in whether they set the time zones. That is, either all entries must set a time zone, or all entries must not set a time zone. For example:

timeWindow(8:00-14:00 EDT 18:00-22:00 EDT)
is correct, but
timeWindow(8:00-14:00 18:00-22:00 EDT)
is not valid.

Example

TIME_WINDOW=8:00-14:00

Users can create advance reservations with begin time (brsvadd -b), end time (brsvadd -e), or time window (brsvadd -t) on any day between 8:00 AM and 2:00 PM.

Default

Undefined (any time)

USERS

Syntax

USERS=[~]user_name | [~]user_group ... | all

Description

A space-separated list of user names or user groups who are allowed to create advance reservations. Administrators, root, and all users or groups listed can create reservations.

If a group contains a subgroup, the reservation policy applies to each member in the subgroup recursively.

User names must be valid login names. User group names can be LSF user groups or UNIX and Windows user groups.

Use the keyword all to configure reservation policies that apply to all users or user groups in a cluster. This is useful if you have a large number of users but you want to exclude a few users or groups from the reservation policy.

Use the not operator (~) to exclude users or user groups from the list of users who can create reservations.

CAUTION:
The not operator does not exclude LSF administrators from the policy.

Example

USERS=user1 user2
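
As a further illustration of the not operator (user3 is a hypothetical user name):

USERS=all ~user3

All users in the cluster except user3 can create advance reservations.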

Default

all (all users in the cluster can create reservations)

ReservationUsage section

To enable greater flexibility for reserving numeric resources that are reserved by jobs, configure the ReservationUsage section in lsb.resources to reserve resources as PER_JOB, PER_TASK, or PER_HOST, as shown in the following example.

Example ReservationUsage section

Begin ReservationUsage 
RESOURCE      METHOD      RESERVE
resourceX     PER_JOB     Y
resourceY     PER_HOST    N
resourceZ     PER_TASK    N
End ReservationUsage

Parameters

  • RESOURCE
  • METHOD
  • RESERVE

RESOURCE

The name of the resource to be reserved. User-defined numeric resources can be reserved, but only if they are shared (they are not specific to one host).

The following built-in resources can be configured in the ReservationUsage section and reserved:
  • mem
  • tmp
  • swp

Any custom resource can also be reserved if it is shared (defined in the Resource section of lsf.shared) or host based (listed in the Host section of the lsf.cluster file in the resource column).
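
For reference, a shared numeric resource such as the resourceX in the example above might be declared in the Resource section of lsf.shared along the following lines (a sketch only; the column values are illustrative):

Begin Resource
RESOURCENAME  TYPE     INTERVAL  INCREASING  DESCRIPTION      # Keywords
resourceX     Numeric  ()        N           (license units)
End Resource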

METHOD

The resource reservation method. One of:
  • PER_JOB
  • PER_HOST
  • PER_TASK

The cluster-wide RESOURCE_RESERVE_PER_SLOT parameter in lsb.params is obsolete.

The RESOURCE_RESERVE_PER_TASK parameter still controls resources that are not configured in lsb.resources. A resource that is not reserved in lsb.resources is reserved per job if it is a shared resource, or per host if it is a host-based resource.

PER_HOST reservation means that, for a parallel job, LSF reserves one instance of the resource for each host that the job uses. For example, some application licenses are charged only once no matter how many instances of the application are running, provided those instances are running on the same host under the same user.
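
To illustrate (the resource name and job command are hypothetical), if resourceX is configured with METHOD=PER_HOST as in the earlier example, a parallel job that spans two hosts reserves one unit of resourceX on each of those two hosts, rather than one unit per task:

bsub -n 4 -R "span[ptile=2] rusage[resourceX=1]" ./myapp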

Use no method ("-") when setting mem, swp, or tmp as RESERVE=Y.

RESERVE

Reserves the resource for pending jobs that are waiting for another resource to become available.

For example, job A requires resources X, Y, and Z to run, but resource Z is a high-demand or scarce resource. The job pends until Z is available. In the meantime, other jobs that require only X and Y run. If X and Y are set as reservable resources (the RESERVE parameter is set to Y), job A runs as soon as resource Z becomes available. If X and Y are not reservable, job A might never run because all three resources are never available at the same time.

Restriction: Only the following built-in resources can be defined as reservable:
  • mem
  • swp
  • tmp

Use no method (-) when setting mem, swp, or tmp as RESERVE=Y.

The queue to which the job is submitted must have the RESOURCE_RESERVE parameter defined.

Backfill of the reservable resources is also supported when you submit a job with reservable resources to a queue with BACKFILL defined.

Valid values are Y and N. If not specified, resources are not reserved.
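
As a sketch of the queue side of this setup (the queue name and values are illustrative), the submission queue defines RESOURCE_RESERVE in lsb.queues; adding BACKFILL also enables backfilling against the reserved resources:

Begin Queue
QUEUE_NAME       = reservation
PRIORITY         = 40
RESOURCE_RESERVE = MAX_RESERVE_TIME[60]
BACKFILL         = Y
End Queue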

Assumptions and limitations

  • Per-resource configuration defines resource usage for individual resources, but it does not change any existing resource limit behavior (PER_JOB or PER_TASK).
  • In the LSF multicluster capability environment, you should configure resource usage in the scheduling cluster (submission cluster in lease model or receiving cluster in job forward model).

Automatic time-based configuration

Variable configuration is used to automatically change LSF configuration based on time windows. You define automatic configuration changes in lsb.resources by using if-else constructs and time expressions. After you change the files, reconfigure the cluster with the badmin reconfig command.

The expressions are evaluated by LSF every 10 minutes based on mbatchd start time. When an expression evaluates true, LSF dynamically changes the configuration based on the associated configuration statements. Reconfiguration is done in real time without restarting mbatchd, providing continuous system availability.

Example

# limit usage of hosts for group and time 
# based configuration
# - 10 jobs can run from normal queue
# - any number can run from short queue between 18:30 
#   and 19:30
#   all other hours you are limited to 100 slots in the 
#   short queue
# - each other queue can run 30 jobs
Begin Limit
PER_QUEUE               HOSTS       SLOTS     # Example
normal                  Resource1    10
#if time(18:30-19:30 EDT)     
short                   Resource1    -  
#else
short                   Resource1    100
#endif    
(all ~normal ~short)    Resource1    30     
End Limit

Specifying the time zone is optional. If you do not specify a time zone, LSF uses the local system time zone. LSF supports all standard time zone abbreviations.

PowerPolicy section

This section enables and defines a power management policy.

Example PowerPolicy section

Begin PowerPolicy 
NAME          = policy_night
HOSTS         = hostGroup1 host3
TIME_WINDOW   = 23:00-8:00 EDT
MIN_IDLE_TIME = 1800
CYCLE_TIME    = 60
End PowerPolicy 

Parameters

  • NAME
  • HOSTS
  • TIME_WINDOW
  • MIN_IDLE_TIME
  • CYCLE_TIME

NAME

Syntax

NAME=string

Description

Required. Unique name for the power management policy.

Specify any ASCII string 60 characters or less. You can use letters, digits, underscores (_), dashes (-), or periods (.). You cannot use blank spaces.

Example

NAME=policy_night1

Default

None. You must provide a name to define a power policy.

HOSTS

Syntax

HOSTS=host_list

Description

Required. host_list is a space-separated list of host names, host groups, host partitions, or compute units.

The specified hosts should not overlap between power policies.

Example

HOSTS=hostGroup1 host3

Default

If HOSTS is not defined, the default is all hosts that are not included in another power policy (the management host and management candidate hosts are excluded).

TIME_WINDOW

Syntax

TIME_WINDOW=time_window ...

Description

Required. The time window is the time period during which the power policy applies.

To specify a time window, specify two time values separated by a hyphen (-), with no space in between:
time_window = begin_time-end_time

Time format

Times are specified in the format:
[day:]hour[:minute]
where all fields are numbers with the following ranges:
  • day of the week: 0-6 (0 is Sunday)
  • hour: 0-23
  • minute: 0-59
Specify a time window in one of the following ways:
  • hour-hour
  • hour:minute-hour:minute
  • day:hour:minute-day:hour:minute

The default value for minute is 0 (on the hour); the default value for day is every day of the week.

You must specify at least the hour. Day of the week and minute are optional. Both the start time and end time values must use the same syntax. If you do not specify a minute, LSF assumes the first minute of the hour (:00). If you do not specify a day, LSF assumes every day of the week. If you do specify the day, you must also specify the minute.

You can specify multiple time windows, but they cannot overlap. For example:
timeWindow(8:00-14:00 18:00-22:00)
is correct, but
timeWindow(8:00-14:00 11:00-15:00)
is not valid.

Example

TIME_WINDOW=8:00-14:00

Default

Not defined (any time)

MIN_IDLE_TIME

Syntax

MIN_IDLE_TIME=minutes

Description

This parameter takes effect only if TIME_WINDOW is configured and is valid. It defines the time (in minutes) that a host must be idle before power operations are issued for the defined hosts.

Example

MIN_IDLE_TIME=60

Default

0

CYCLE_TIME

Syntax

CYCLE_TIME=minutes

Description

This parameter takes effect only if TIME_WINDOW is configured and is valid. It defines the minimum time (in minutes) between power state changes for the defined hosts.

Example

CYCLE_TIME=15

Default

0