lsf.licensescheduler

The lsf.licensescheduler file contains LSF License Scheduler configuration information. All sections are required except ProjectGroup. In cluster mode, the Projects section is also optional.

Changing lsf.licensescheduler configuration

After making any changes to lsf.licensescheduler, run the following commands:
  • If you added, changed, or deleted the FEATURE_DELTA parameter, reconfigure the bld daemon by running bladmin reconfig.
  • If you have added, changed, or deleted any Feature or Projects sections, you may need to restart the mbatchd daemon. In this case a message is written to the log file prompting the restart.

    To restart the mbatchd daemon on each LSF cluster, run badmin mbdrestart.

Parameters section

Description

Required. Defines License Scheduler configuration parameters. If there is a parameter with the same name in the Feature section, setting these parameters in the Feature section overrides the global setting in the Parameters section.

Parameters section structure

The Parameters section begins and ends with the lines Begin Parameters and End Parameters. Each subsequent line describes one configuration parameter. The following parameters are required:

Begin Parameters
ADMIN=lsadmin 
HOSTS=hostA hostB hostC 
LMSTAT_PATH=/etc/flexlm/bin 
RLMSTAT_PATH=/etc/rlm/bin
LM_STAT_INTERVAL=30 
PORT=9581 
End Parameters 

ADMIN

Syntax

ADMIN=user_name ...

Description

Defines the LSF License Scheduler administrator using a valid UNIX user account. You can specify multiple accounts.

Used for both project mode and cluster mode.

AUTH

Syntax

AUTH=Y | N

Description

Enables LSF License Scheduler user authentication for projects for taskman jobs.

For detailed information about the taskman command, see taskman.

Used for both project mode and cluster mode.

BLC_HEARTBEAT_FACTOR

Syntax

BLC_HEARTBEAT_FACTOR=integer

Description

Enables bld to detect blcollect failure. Defines the number of times that bld receives no response from a license collector daemon (blcollect) before bld resets the values for that collector to zero. Each license usage reported to bld by the collector is treated as a heartbeat.

Used for both project mode and cluster mode.

Default

3

CHECKOUT_FROM_FIRST_HOST_ONLY

Syntax

CHECKOUT_FROM_FIRST_HOST_ONLY=Y | N

Description

If enabled (CHECKOUT_FROM_FIRST_HOST_ONLY=Y), LSF License Scheduler only considers user@host information for the first execution host of a parallel job when merging the license usage data. Setting in individual Feature sections overrides the global setting in the Parameters section.

If disabled, LSF License Scheduler attempts to check out user@host keys in the parallel job constructed using the user name and all execution host names, and merges the corresponding checkout information on the service domain if found. In addition, if MERGE_BY_SERVICE_DOMAIN=Y is defined, LSF License Scheduler merges multiple user@host data for parallel jobs across different service domains.

Default

Undefined (N). LSF License Scheduler attempts to check out user@host keys in the parallel job constructed using the user name and all execution host names, and merges the corresponding checkout information on the service domain if found.

CLUSTER_MODE

Syntax

CLUSTER_MODE=Y | N

Description

Enables cluster mode (instead of project mode) in License Scheduler. Setting in individual Feature sections overrides the global setting in the Parameters section.

Cluster mode emphasizes high utilization of license tokens above other considerations such as ownership. License ownership and sharing can still be configured, but within each cluster instead of across multiple clusters. Preemption of jobs (and licenses) also occurs within each cluster instead of across clusters.

Default

Not defined (N). LSF License Scheduler runs in project mode.

DEMAND_LIMIT

Syntax

DEMAND_LIMIT=integer

Description

Sets a limit on the demand from each project in each cluster that LSF License Scheduler considers when allocating licenses.

Used for project mode only.

When enabled, the demand limit helps prevent LSF License Scheduler from allocating more licenses to a project than can actually be used, which reduces license waste by limiting the demand that LSF License Scheduler considers. This is useful when other resource limits are reached and LSF License Scheduler would otherwise allocate more tokens than IBM Spectrum LSF can actually use, because jobs are still pending due to a lack of other resources.

When disabled (that is, DEMAND_LIMIT=0 is set), LSF License Scheduler takes into account all the demand reported by each cluster when scheduling.

DEMAND_LIMIT does not affect the DEMAND that blstat displays. Instead, blstat displays the entire demand sent for a project from all clusters. For example, one cluster reports a demand of 15 for a project. Another cluster reports a demand of 20 for the same project. When LSF License Scheduler allocates licenses, it takes into account a demand of five from each cluster for the project and the DEMAND that blstat displays is 35.

Periodically, each cluster sends a demand for each project. A cluster calculates this demand for a project by summing the rusage of all of the project's jobs that are pending due to a lack of licenses. Whether a job's rusage is counted in the demand depends on the job's pending reason. In general, the demand reported by a cluster represents only a potential demand from the project. It does not take into account other resources that are required to start a job. For example, a demand for 100 licenses is reported for a project. However, if LSF License Scheduler allocates 100 licenses to the project, the project does not necessarily use all 100 licenses due to slot availability, limits, or other scheduling constraints.

In project mode, mbatchd in each cluster sends a demand for licenses from each project. DEMAND_LIMIT limits the amount of demand from each project in each cluster that is considered when scheduling.
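The per-cluster capping described above can be sketched as follows. This is a minimal illustration, not LSF source code; the function name considered_demand is hypothetical, and the capping is assumed to be a simple per-cluster minimum:

```python
def considered_demand(cluster_demands, demand_limit):
    """Sum each cluster's reported demand, capping each at demand_limit.

    demand_limit=0 disables the cap, so all reported demand counts.
    """
    if demand_limit == 0:
        return sum(cluster_demands)
    return sum(min(d, demand_limit) for d in cluster_demands)

# Example from the text: two clusters report demands of 15 and 20
# for the same project, with the default DEMAND_LIMIT of 5.
print(considered_demand([15, 20], 5))  # 10: at most 5 per cluster is considered
print(considered_demand([15, 20], 0))  # 35: the full demand, as blstat displays
```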

Default

5

DISTRIBUTION_POLICY_VIOLATION_ACTION

Syntax

DISTRIBUTION_POLICY_VIOLATION_ACTION=(PERIOD reporting_period CMD reporting_command)

reporting_period

Specify the keyword PERIOD with a positive integer representing the interval (a multiple of LM_STAT_INTERVAL periods) at which LSF License Scheduler checks for distribution policy violations.

reporting_command

Specify the keyword CMD with the directory path and command that LSF License Scheduler runs when reporting a violation.

Description

Defines how LSF License Scheduler handles distribution policy violations. Distribution policy violations are caused by non-LSF workloads; License Scheduler explicitly follows its distribution policies.

LSF License Scheduler reports a distribution policy violation when the total number of licenses given to the LSF workload, both free and in use, is less than the LSF workload distribution specified in WORKLOAD_DISTRIBUTION. If LSF License Scheduler finds a distribution policy violation, it creates or overwrites the LSF_LOGDIR/bld.violation.service_domain_name.log file and runs the user command specified by the CMD keyword.

Used for project mode only.

Example

The LicenseServer1 service domain has a total of 80 licenses, and its workload distribution and enforcement are configured as follows:

Begin Parameters
...
DISTRIBUTION_POLICY_VIOLATION_ACTION=(PERIOD 5 CMD /bin/mycmd)
...
End Parameters
Begin Feature
NAME=ApplicationX 
DISTRIBUTION=LicenseServer1(Lp1 1 Lp2 2)
WORKLOAD_DISTRIBUTION=LicenseServer1(LSF 8 NON_LSF 2) 
End Feature

According to this configuration, 80% of the available licenses, or 64 licenses, are available to the LSF workload. LSF License Scheduler checks the service domain for a violation every five scheduling cycles, and runs the /bin/mycmd command if it finds a violation.

If the current LSF workload license usage is 50 and the number of free licenses is 10, the total number of licenses assigned to the LSF workload is 60. This is a violation of the workload distribution policy because this is less than the specified LSF workload distribution of 64 licenses.

ENABLE_INTERACTIVE

Syntax

ENABLE_INTERACTIVE=Y | N

Description

Globally enables one share of the licenses for interactive tasks.
Tip: Setting ENABLE_INTERACTIVE also allows taskman to detect and use batch resources.

Used for project mode only.

Default

Not defined. LSF License Scheduler allocates licenses equally to each cluster and does not distribute licenses for interactive tasks.

FAST_DISPATCH

Syntax

FAST_DISPATCH=Y | N

Description

Enables fast dispatch project mode for the license feature, which increases license utilization for project licenses.

Used for project mode only.

When enabled, LSF License Scheduler does not have to run lmutil, lmstat, rlmutil, or rlmstat to verify that a license is free before each job dispatch. As soon as a job finishes, the cluster can reuse its licenses for another job of the same project, which keeps gaps between jobs small. However, because LSF License Scheduler does not run lmutil, lmstat, rlmutil, or rlmstat to verify that the license is free, there is an increased chance of a license checkout failure for jobs if the license is already in use by a job in another project.

The fast dispatch project mode supports the following parameters in the Feature section:

  • ALLOCATION
  • DEMAND_LIMIT
  • DISTRIBUTION
  • GROUP_DISTRIBUTION
  • LM_LICENSE_NAME
  • LS_FEATURE_PERCENTAGE
  • NAME
  • NON_SHARED_DISTRIBUTION
  • SERVICE_DOMAINS
  • WORKLOAD_DISTRIBUTION

The fast dispatch project mode also supports the MBD_HEARTBEAT_INTERVAL parameter in the Parameters section.

Other parameters, including the following parameters that project mode supports, are not supported:

  • ACCINUSE_INCLUDES_OWNERSHIP
  • DYNAMIC
  • GROUP
  • LOCAL_TO
  • LS_ACTIVE_PERCENTAGE

Default

Undefined (N). License Scheduler runs in project mode without fast dispatch.

HEARTBEAT_INTERVAL

Syntax

HEARTBEAT_INTERVAL=seconds

Description

The time interval between heartbeats sent by the bld daemon to indicate that it is still running.

Default

60 seconds

HEARTBEAT_TIMEOUT

Syntax

HEARTBEAT_TIMEOUT=seconds

Description

The time a child bld waits to hear from the parent bld before assuming it has died.

Default

120 seconds

HIST_HOURS

Syntax

HIST_HOURS=hours

Description

Determines the rate of decay of the accumulated use value that is used in fairshare and preemption decisions. When HIST_HOURS=0, accumulated use is not decayed.

Accumulated use is displayed by the blstat command under the heading ACUM_USE.

Used for project mode only.

Default

5 hours. Accumulated use decays to 1/10 of the original value over 5 hours.

HOSTS

Syntax

HOSTS=host_name.domain_name ...

Description

Defines LSF License Scheduler hosts, including candidate hosts.

Specify a fully qualified host name such as hostX.mycompany.com. You can omit the domain name if all your LSF License Scheduler clients run in the same DNS domain.

Used for both project mode and cluster mode.

INUSE_FROM_RUSAGE

Syntax

INUSE_FROM_RUSAGE=Y | N

Description

When not defined or set to N, the INUSE value uses rusage from bsub job submissions merged with license checkout data reported by blcollect (as reported by blstat).

When INUSE_FROM_RUSAGE=Y, the INUSE value uses the rusage from bsub job submissions instead of waiting for the blcollect update. This can result in faster reallocation of tokens when using dynamic allocation (when ALLOC_BUFFER is set).

Used for cluster mode only.

Default

N

LIB_CONNTIMEOUT

Syntax

LIB_CONNTIMEOUT=seconds

Description

Specifies a timeout value in seconds for communication between License Scheduler and LSF APIs. LIB_CONNTIMEOUT=0 indicates no timeout.

Used for both project mode and cluster mode.

Default

5 seconds

LIB_RECVTIMEOUT

Syntax

LIB_RECVTIMEOUT=seconds

Description

Specifies a timeout value in seconds for communication between License Scheduler and LSF. LIB_RECVTIMEOUT=0 indicates no timeout.

Used for both project mode and cluster mode.

Default

5 seconds

LM_REMOVE_INTERVAL

Syntax

LM_REMOVE_INTERVAL=seconds

Description

Specifies the minimum time a job must have a license checked out before lmremove (for FlexNet) or rlmremove (for Reprise License Manager) can remove the license (using preemption). lmremove or rlmremove causes the license manager daemon and vendor daemons to close the TCP connection with the application.

LSF License Scheduler only considers preempting a job after this interval has elapsed.

When using lmremove or rlmremove as part of the preemption action (LM_REMOVE_SUSP_JOBS), define LM_REMOVE_INTERVAL=0 to ensure that LSF License Scheduler can preempt a job immediately after checkout. After suspending the job, LSF License Scheduler then uses lmremove or rlmremove to release licenses from the job.

Used for both project mode and cluster mode.

Default

180 seconds

LM_REMOVE_SUSP_JOBS

Syntax

LM_REMOVE_SUSP_JOBS=seconds

Description

Enables LSF License Scheduler to use lmremove (for FlexNet) or rlmremove (for Reprise License Manager) to remove license features from each recently-suspended job. After enabling this parameter, the preemption action is to suspend the job's processes and use lmremove or rlmremove to remove licences from the application.

LSF License Scheduler continues to try removing the license feature for the specified number of seconds after the job is first suspended. When setting this parameter for an application, specify a value greater than the period after a license checkout during which lmremove or rlmremove fails for the application. This ensures that when a job suspends, its licenses are released. This period depends on the application.

When using lmremove or rlmremove as part of the preemption action, define LM_REMOVE_INTERVAL=0 to ensure that LSF License Scheduler can preempt a job immediately after checkout. After suspending the job, LSF License Scheduler then uses lmremove or rlmremove to release licenses from the job.

This parameter applies to all features in project mode.

Used for project mode only.
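For example, the following sketch combines LM_REMOVE_SUSP_JOBS with LM_REMOVE_INTERVAL=0, as described above, so that licenses can be removed immediately after checkout when a job is suspended. The 60-second value is illustrative only; tune it to your application:

Begin Parameters
...
LM_REMOVE_INTERVAL=0
LM_REMOVE_SUSP_JOBS=60
...
End Parameters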

Default

Undefined. The default preemption action is to send a TSTP signal to the job.

LM_REMOVE_SUSP_JOBS_INTERVAL

Syntax

LM_REMOVE_SUSP_JOBS_INTERVAL=seconds

Description

Specifies the minimum time between the child processes that LSF License Scheduler forks to run lmremove (for FlexNet) or rlmremove (for Reprise License Manager). LSF License Scheduler forks such a child process each time it receives an update from a license collector daemon (blcollect).

Use this parameter when using lmremove or rlmremove as part of the preemption action (LM_REMOVE_SUSP_JOBS).

Used for project mode only.

Default

0. Uses the value of LM_STAT_INTERVAL instead.

LM_RESERVATION

Syntax

LM_RESERVATION=Y | N

Description

Enables LSF License Scheduler to support the FlexNet Manager license reservation keyword (RESERVE).

When LM_RESERVATION=Y is defined, LSF License Scheduler treats the RESERVE value in the FlexNet Manager license option file as OTHERS tokens instead of FREE tokens. The RESERVE value is now included in the OTHERS value in the blstat command output and is no longer included in the FREE value.

This parameter is ignored if it is defined in a time-based configuration, or if the WORKLOAD_DISTRIBUTION parameter is defined in at least one Feature section.

Note: The license tokens that are reserved with FlexNet Manager must be used outside of the LSF License Scheduler cluster.

Default

N. The RESERVE value does not count as a used token.

LM_STAT_INTERVAL

Syntax

LM_STAT_INTERVAL=seconds

Description

Defines a time interval between calls that LSF License Scheduler makes to collect license usage information from the license manager.

Default

60 seconds

LM_STAT_TIMEOUT

Syntax

LM_STAT_TIMEOUT=seconds

Description

Sets the timeout value passed to the lmutil lmstat, lmstat, rlmutil rlmstat, or rlmstat command. The Parameters section setting is overridden by the ServiceDomain section setting, which is overridden by the command-line setting (blcollect -t timeout).

Used for both project mode and cluster mode.

Default

180 seconds

LM_TYPE

Syntax

LM_TYPE=FLEXLM | RLM

Description

Defines the license manager system that is used by the license servers. This determines how LSF License Scheduler communicates with the license servers.

Define LM_TYPE=FLEXLM if the license servers are using FlexNet Manager as the license manager system.

Define LM_TYPE=RLM if the license servers are using Reprise License Manager as the license manager system.

Default

FLEXLM

LMREMOVE_SUSP_JOBS (Obsolete)

Syntax

LMREMOVE_SUSP_JOBS=seconds

Description

Use LM_REMOVE_SUSP_JOBS instead. This parameter is only maintained for backwards compatibility.

LMREMOVE_SUSP_JOBS_INTERVAL (Obsolete)

Syntax

LMREMOVE_SUSP_JOBS_INTERVAL=seconds

Description

Use LM_REMOVE_SUSP_JOBS_INTERVAL instead. LMREMOVE_SUSP_JOBS_INTERVAL is only maintained for backwards compatibility.

LMSTAT_PATH

Syntax

LMSTAT_PATH=path

Description

Defines the full path to the location of the FlexNet command lmutil (or lmstat).

Used for project mode and cluster mode.

LOG_EVENT

Syntax

LOG_EVENT=Y | N

Description

Enables logging of License Scheduler events in the bld.stream file.

Default

Not defined. Information is not logged.

LOG_INTERVAL

Syntax

LOG_INTERVAL=seconds

Description

The interval between token allocation data logs in the data directory.

Default

60 seconds

LS_DEBUG_BLC

Syntax

LS_DEBUG_BLC=log_class

Description

Sets the debugging log class for the LSF License Scheduler blcollect daemon.

Used for both project mode and cluster mode.

Specifies the log class filtering that is applied to blcollect. Only messages belonging to the specified log class are recorded.

LS_DEBUG_BLC sets the log class and is used in combination with LS_LOG_MASK, which sets the log level. For example:
LS_LOG_MASK=LOG_DEBUG LS_DEBUG_BLC="LC_TRACE"
To specify multiple log classes, use a space-separated list enclosed in quotation marks. For example:
LS_DEBUG_BLC="LC_TRACE LC_COMM"

You need to restart the blcollect daemons after setting LS_DEBUG_BLC for your changes to take effect.

Valid values

Valid log classes are:
  • LC_AUTH and LC2_AUTH: Log authentication messages
  • LC_COMM and LC2_COMM: Log communication messages
  • LC_FLEX: Log everything related to FLEX_STAT or FLEX_EXEC Flexera APIs
  • LC_PERFM and LC2_PERFM: Log performance messages
  • LC_PREEMPT: Log license preemption policy messages
  • LC_RESREQ and LC2_RESREQ: Log resource requirement messages
  • LC_SYS and LC2_SYS: Log system call messages
  • LC_TRACE and LC2_TRACE: Log significant program walk steps
  • LC_XDR and LC2_XDR: Log everything transferred by XDR

Default

Not defined.

LS_DEBUG_BLD

Syntax

LS_DEBUG_BLD=log_class

Description

Sets the debugging log class for the LSF License Scheduler bld daemon.

Used for both project mode and cluster mode.

Specifies the log class filtering that is applied to bld. Messages belonging to the specified log class are recorded. Not all debug messages are controlled by log class.

LS_DEBUG_BLD sets the log class and is used in combination with LS_LOG_MASK, which sets the log level. For example:
LS_LOG_MASK=LOG_DEBUG LS_DEBUG_BLD="LC_TRACE"
To specify multiple log classes, use a space-separated list enclosed in quotation marks. For example:
LS_DEBUG_BLD="LC_TRACE LC_COMM"

You need to restart the bld daemon after setting LS_DEBUG_BLD for your changes to take effect.

If you use the command bladmin blddebug to temporarily change this parameter without changing lsf.licensescheduler, you do not need to restart the daemons.

Valid values

Valid log classes are:

  • LC_AUTH and LC2_AUTH: Log authentication messages
  • LC_COMM and LC2_COMM: Log communication messages
  • LC_FLEX: Log everything related to FLEX_STAT or FLEX_EXEC Flexera APIs
  • LC_MEMORY: Log memory use messages
  • LC_PREEMPT: Log license preemption policy messages
  • LC_RESREQ and LC2_RESREQ: Log resource requirement messages
  • LC_TRACE and LC2_TRACE: Log significant program walk steps
  • LC_XDR and LC2_XDR: Log everything transferred by XDR


Default

Not defined.

LS_ENABLE_MAX_PREEMPT

Syntax

LS_ENABLE_MAX_PREEMPT=Y | N

Description

Enables maximum preemption time checking for LSF and taskman jobs.

When LS_ENABLE_MAX_PREEMPT is disabled, preemption times for taskman jobs are not checked regardless of the value of parameters LS_MAX_TASKMAN_PREEMPT in lsf.licensescheduler and MAX_JOB_PREEMPT in lsb.queues, lsb.applications, or lsb.params.

Used for project mode only.

Default

N

LS_LOG_MASK

Syntax

LS_LOG_MASK=message_log_level

Description

Specifies the logging level of error messages for LSF License Scheduler daemons. If LS_LOG_MASK is not defined in lsf.licensescheduler, the value of LSF_LOG_MASK in lsf.conf is used. If neither LS_LOG_MASK nor LSF_LOG_MASK is defined, the default is LOG_WARNING.

Used for both project mode and cluster mode.

For example:
LS_LOG_MASK=LOG_DEBUG
The log levels in order from highest to lowest are:
  • LOG_ERR
  • LOG_WARNING
  • LOG_INFO
  • LOG_DEBUG
  • LOG_DEBUG1
  • LOG_DEBUG2
  • LOG_DEBUG3

The most important LSF License Scheduler log messages are at the LOG_WARNING level. Messages at the LOG_DEBUG level are only useful for debugging.

Although message log level implements similar functionality to UNIX syslog, there is no dependency on UNIX syslog. It works even if messages are being logged to files instead of syslog.

LSF License Scheduler logs error messages in different levels so that you can choose to log all messages, or only log messages that are deemed critical. The level specified by LS_LOG_MASK determines which messages are recorded and which are discarded. All messages logged at the specified level or higher are recorded, while lower level messages are discarded.

For debugging purposes, the level LOG_DEBUG contains the fewest debugging messages and is used for basic debugging. The level LOG_DEBUG3 records all debugging messages and can cause log files to grow very large; it is not often used. Most debugging is done at the level LOG_DEBUG2.

Default

LOG_WARNING

LS_MAX_STREAM_FILE_NUMBER

Syntax

LS_MAX_STREAM_FILE_NUMBER=integer

Description

Sets the number of saved bld.stream.time_stamp log files. When LS_MAX_STREAM_FILE_NUMBER=2, for example, the two most recent files are kept along with the current bld.stream file.

Used for both project mode and cluster mode.

Default

0 (old bld.stream file is not saved)

LS_MAX_STREAM_SIZE

Syntax

LS_MAX_STREAM_SIZE=integer

Description

Defines the maximum size of the bld.stream file in MB. Once this size is reached an EVENT_END_OF_STREAM is logged, a new bld.stream file is created, and the old bld.stream file is renamed bld.stream.time_stamp.

Used for both project mode and cluster mode.
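For example, the following illustrative settings (the values are examples only) roll the event log over at 512 MB and keep the two most recent bld.stream.time_stamp files alongside the current bld.stream file:

Begin Parameters
...
LS_MAX_STREAM_SIZE=512
LS_MAX_STREAM_FILE_NUMBER=2
...
End Parameters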

Default

1024

LS_MAX_TASKMAN_PREEMPT

Syntax

LS_MAX_TASKMAN_PREEMPT=integer

Description

Defines the maximum number of times taskman jobs can be preempted.

Maximum preemption time checking for all jobs is enabled by LS_ENABLE_MAX_PREEMPT.

Used for project mode only.

Default

unlimited

LS_MAX_TASKMAN_SESSIONS

Syntax

LS_MAX_TASKMAN_SESSIONS=integer

Description

Defines the maximum number of taskman jobs that run simultaneously. This prevents system-wide performance issues that occur if there are a large number of taskman jobs running in License Scheduler.

The number of taskman sessions must be a positive integer.

The actual maximum number of taskman jobs is affected by the operating system file descriptor limit. Make sure the operating system file descriptor limit and the maximum concurrent connections are large enough to support all taskman tasks, License Scheduler (bl*) commands, and connections between LSF License Scheduler and LSF.

Used for both project mode and cluster mode.

LS_PREEMPT_PEER

Syntax

LS_PREEMPT_PEER=Y | N

Description

Enables bottom-up license token preemption in hierarchical project group configuration. LSF License Scheduler attempts to preempt tokens from the closest projects in the hierarchy first. This balances token ownership from the bottom up.

Used for project mode only.

Default

Not defined. Token preemption in hierarchical project groups is top down.

LS_ROOT_USER

Syntax

LS_ROOT_USER=Y | y | N | n

Description

UNIX only. Enables the root user to run LSF License Scheduler commands as a valid user from the LSF command line.

If you need to temporarily run LSF License Scheduler commands with root privileges, specify LS_ROOT_USER=Y in the Parameters section of the lsf.licensescheduler file.

This parameter allows the root user to run the taskman command under globauth and the bladmin command as a valid user.

Important: Only enable LS_ROOT_USER=Y as a temporary configuration setting. When you are done, you must disable this parameter to ensure that your cluster remains secure.

Default

N. Root has no permission to execute the taskman command under globauth and the bladmin commands (except bladmin ckconfig).

LS_STREAM_FILE

Syntax

LS_STREAM_FILE=path

Used for both project mode and cluster mode.

Description

Defines the full path and filename of the bld event log file, bld.stream by default.

Default

LSF_TOP/work/db/bld.stream

MBD_HEARTBEAT_INTERVAL

Syntax

MBD_HEARTBEAT_INTERVAL=seconds

Description

Sets the length of time the cluster license allocation remains unchanged after a cluster has disconnected from bld. After MBD_HEARTBEAT_INTERVAL has passed, the allocation is set to zero and licenses are redistributed to other clusters.

Used for cluster mode and fast dispatch project mode only.

Default

900 seconds

MBD_REFRESH_INTERVAL

Syntax

MBD_REFRESH_INTERVAL=seconds

Description

This parameter allows the administrator to independently control the minimum interval between load updates from bld and from LIM. MBD_REFRESH_INTERVAL controls the frequency of scheduling interactive (taskman) jobs and is read by the mbatchd daemon on startup. When MBD_REFRESH_INTERVAL is set or changed, you must restart the bld and mbatchd daemons in each cluster.

Used for both project mode and cluster mode.

Default

15 seconds

MERGE_BY_SERVICE_DOMAIN

Syntax

MERGE_BY_SERVICE_DOMAIN=Y | N

Description

If enabled, LSF License Scheduler correlates job license checkout with the lmutil lmstat, lmstat, rlmutil rlmstat, or rlmstat output across all service domains before reserving licenses.

In project mode (but not fast dispatch project mode), this parameter supports the case where the checkout license number for the application is less than or equal to the job rusage. If the checked out licenses are greater than the job rusage, the ENABLE_DYNAMIC_RUSAGE parameter is still required.

Default

N (Does not correlate job license checkout with the lmutil, lmstat, rlmutil, or rlmstat output across all service domains before reserving licenses)

PEAK_INUSE_PERIOD

Syntax

PEAK_INUSE_PERIOD=seconds

Description

Defines the interval over which a peak INUSE value is determined for dynamic license allocation in cluster mode for all license features over all service domains.

When defining the interval for LSF Advanced Edition submission clusters, the interval is determined for the entire LSF Advanced Edition mega-cluster (the submission cluster and its execution clusters).

Used for cluster mode only.

When defined in both the Parameters section and the Feature section, the Feature section definition is used for that license feature.
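For example, in the following sketch (the feature name featureX and both interval values are illustrative; the Feature sections' other required parameters are elided), all license features use the global 300-second interval except featureX, which overrides it with a 120-second interval:

Begin Parameters
...
PEAK_INUSE_PERIOD=300
...
End Parameters
Begin Feature
NAME=featureX
PEAK_INUSE_PERIOD=120
...
End Feature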

Default

300 seconds

PORT

Syntax

PORT=integer

Description

Defines the TCP listening port used by LSF License Scheduler hosts, including candidate License Scheduler hosts. Specify any non-privileged port number.

Used for both project mode and cluster mode.

PREEMPT_ACTION

Syntax

PREEMPT_ACTION=action

Description

Specifies the action used for taskman job preemption.

By default, if PREEMPT_ACTION is not configured, bld sends a TSTP signal to preempt taskman jobs.

You can specify a script using this parameter. For example, PREEMPT_ACTION = /home/user1/preempt.s runs preempt.s when preempting a taskman job.

Used for project mode only.

Default

Not defined. A TSTP signal is used to preempt taskman jobs.

PROJECT_GROUP_PATH

Syntax

PROJECT_GROUP_PATH=Y | N

Description

Enables hierarchical project group paths for project mode, which enables the following:

  • Features can use hierarchical project groups with project and project group names that are not unique, as long as the projects or project groups do not have the same parent. That is, you can define projects and project groups in more than one hierarchical project group.
  • When specifying -Lp license_project, you can use paths to describe the project hierarchy without specifying the root group.

    For example, if you have root as your root group, which has a child project group named groupA with a project named proj1, you can use -Lp /groupA/proj1 to specify this project.

  • Hierarchical project groups have a default project named others with a default share value of 0. Any projects that do not match the defined projects in a project group are assigned into the others project.

    If there is already a project named others, the preexisting others project specification overrides the default project.

If LSF_LIC_SCHED_STRICT_PROJECT_NAME (in lsf.conf) and PROJECT_GROUP_PATH are both defined, PROJECT_GROUP_PATH takes precedence and overrides the LSF_LIC_SCHED_STRICT_PROJECT_NAME behavior.

Note: To use PROJECT_GROUP_PATH, you need LSF, Version 9.1.1, or later.

Used for project mode only.

Default

Not defined (N).

REMOTE_LMSTAT_PROTOCOL

Syntax

REMOTE_LMSTAT_PROTOCOL=ssh [ssh_command_options] | rsh [rsh_command_options] | lsrun [lsrun_command_options]

Description

Specifies the method that LSF License Scheduler uses to connect to the remote agent host if there are remote license servers that need a remote agent host to collect license information.

LSF License Scheduler uses the specified command (and optional command options) to connect to the agent host, and automatically appends the name of the remote agent host to the command, so there is no need to specify the host with the command.

Note: LSF License Scheduler does not validate the specified command, so you must ensure that you specify the command correctly. The blcollect log file notes that the command failed, but does not give details about the connection error. To determine specific connection errors, manually run the command to connect to the remote server before specifying it in REMOTE_LMSTAT_PROTOCOL.

If using lsrun as the connection method, the remote agent host must be a server host in the LSF cluster and RES must be started on this host. If using ssh or rsh as the connection method, the agent host does not have to be a server host in the LSF cluster.

REMOTE_LMSTAT_PROTOCOL works with REMOTE_LMSTAT_SERVERS, which defines the remote license servers and remote agent hosts. If you do not define REMOTE_LMSTAT_SERVERS, REMOTE_LMSTAT_PROTOCOL is not used.

Used for both project mode and cluster mode.
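For example, to connect to the remote agent host over ssh as a specific user (the user name lsfadmin is illustrative; LSF License Scheduler appends the agent host name to this command automatically):

REMOTE_LMSTAT_PROTOCOL=ssh -l lsfadmin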

Default

ssh

RLMSTAT_PATH

Syntax

RLMSTAT_PATH=path

Description

Defines the full path to the location of the Reprise License Manager commands.

Used for both project mode and cluster mode.

Default

If not defined, this is set to LMSTAT_PATH.

STANDBY_CONNTIMEOUT

Syntax

STANDBY_CONNTIMEOUT=seconds

Description

Sets the connection timeout that the standby bld daemon waits when trying to contact each host before assuming that the host is unavailable.

Used for both project mode and cluster mode.

Default

5 seconds

Clusters section

Description

Required. Lists the clusters that can use License Scheduler.

When configuring clusters for a WAN, the Clusters section of the parent cluster must define its child clusters.

The Clusters section is the same for both project mode and cluster mode.

Clusters section structure

The Clusters section begins and ends with the lines Begin Clusters and End Clusters. The second line is the column heading, CLUSTERS. Subsequent lines list participating clusters, one name per line:

Begin Clusters 
CLUSTERS 
cluster1 
cluster2
End Clusters

CLUSTERS

Defines the name of each participating LSF cluster. Specify using one name per line.

ServiceDomain section

Description

Required. Defines License Scheduler service domains as groups of physical license server hosts that serve a specific network.

The ServiceDomain section is the same for both project mode and cluster mode.

ServiceDomain section structure

Define a section for each License Scheduler service domain.

This example shows the structure of the section:

Begin ServiceDomain 
NAME=DesignCenterB 
LIC_SERVERS=((1888@hostD)(1888@hostE)) 
LIC_COLLECTOR=CenterB 
End ServiceDomain

Parameters

  • LIC_SERVERS
  • LIC_COLLECTOR
  • LM_STAT_INTERVAL
  • LM_STAT_TIMEOUT
  • LM_TYPE
  • NAME
  • REMOTE_LMSTAT_SERVERS

LIC_SERVERS

Syntax

When using FlexNet as the license manager (LM_TYPE=FLEXLM): LIC_SERVERS=([(host_name | port_number@host_name |(port_number@host_name port_number@host_name port_number@host_name))] ...)

When using Reprise License Manager as the license manager (LM_TYPE=RLM): LIC_SERVERS=([( port_number@host_name |(port_number@host_name port_number@host_name port_number@host_name))] ...)

Description

Defines the license server hosts that make up the LSF License Scheduler service domain. Specify one or more license server hosts, and for each license server host, specify the number of the port that the license manager uses, then the at symbol (@), then the name of the host. Put one set of parentheses around the list, and one more set of parentheses around each host, unless you have redundant servers (three hosts sharing one license file). If you have redundant servers, the parentheses enclose all three hosts.

If LSF License Scheduler is using FlexNet as the license manager (that is, LM_TYPE=FLEXLM), and FlexNet uses the default port on a host, you can specify the host name without the port number.

If LSF License Scheduler is using Reprise License Manager as the license manager (that is, LM_TYPE=RLM), you must specify a port number for every license server host.

Used for both project mode and cluster mode.

Examples

  • One FlexNet license server host:
    LIC_SERVERS=((1700@hostA))
    
  • Multiple license server hosts with unique license.dat files:
    LIC_SERVERS=((1700@hostA)(1700@hostB)(1700@hostC))
    
  • Redundant license server hosts sharing the same license.dat file:
    LIC_SERVERS=((1700@hostD 1700@hostE 1700@hostF))
    

LIC_COLLECTOR

Syntax

LIC_COLLECTOR=license_collector_name

Description

Defines a name for the license collector daemon (blcollect) to use in each service domain. blcollect collects license usage information from the license manager and passes it to the License Scheduler daemon (bld). It improves performance by allowing you to distribute license information queries across multiple hosts.

You can only specify one collector per service domain, but you can specify one collector to serve multiple service domains. Each time you run blcollect, you must specify the name of the collector for the service domain. You can use any name you want.

Used for both project mode and cluster mode.

Default

Undefined. The License Scheduler daemon uses one license collector daemon for the entire cluster.
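
For example, one collector can serve multiple service domains. In this sketch, the service domain, host, and collector names are illustrative:

Begin ServiceDomain
NAME=DesignCenterA
LIC_SERVERS=((1700@hostA))
LIC_COLLECTOR=CenterCollector
End ServiceDomain

Begin ServiceDomain
NAME=DesignCenterB
LIC_SERVERS=((1700@hostB))
LIC_COLLECTOR=CenterCollector
End ServiceDomain

Both service domains are served by the single collector named CenterCollector.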

LM_STAT_INTERVAL

Syntax

LM_STAT_INTERVAL=seconds

Description

Defines a time interval between calls that License Scheduler makes to collect license usage information from the license manager.

The value specified for a service domain overrides the global value defined in the Parameters section. Each service domain definition can specify a different value for this parameter.

Used for both project mode and cluster mode.

Default

License Scheduler applies the global value defined in the Parameters section.
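
For example, to poll one service domain every 60 seconds regardless of the global setting. The domain and host names in this sketch are illustrative:

Begin ServiceDomain
NAME=DesignCenterB
LIC_SERVERS=((1888@hostD))
LM_STAT_INTERVAL=60
End ServiceDomain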

LM_STAT_TIMEOUT

Syntax

LM_STAT_TIMEOUT=seconds

Description

Sets the timeout value passed to the lmutil lmstat, lmstat, rlmutil rlmstat, or rlmstat command. The Parameters section setting is overridden by the ServiceDomain setting, which is overridden by the command-line setting (blcollect -t timeout).

When using Reprise License Manager as the license manager (LM_TYPE=RLM), this parameter is ignored.

Used for both project mode and cluster mode.

Default

180 seconds
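
The following sketch sets a global timeout of 120 seconds, which the ServiceDomain section overrides with 60 seconds for one service domain. The values, domain, and host names are illustrative:

Begin Parameters
...
LM_STAT_TIMEOUT=120
...
End Parameters

Begin ServiceDomain
NAME=DesignCenterB
LIC_SERVERS=((1888@hostD))
LM_STAT_TIMEOUT=60
End ServiceDomain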

LM_TYPE

Syntax

LM_TYPE=FLEXLM | RLM

Description

Defines the license manager system that is used by the license servers. This determines how LSF License Scheduler communicates with the license servers that are defined by the LIC_SERVERS parameter.

Define LM_TYPE=FLEXLM if the license servers are using FlexNet Manager as the license manager system.

Define LM_TYPE=RLM if the license servers are using Reprise License Manager as the license manager system. When LM_TYPE=RLM is defined, LIC_SERVERS must define port_number@host_name (that is, LIC_SERVERS must define a port number). Defining just the host name (or @host_name) without the port number is not allowed.

Default

FLEXLM
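
For example, a service domain using Reprise License Manager might look like the following sketch. The domain name, host name, and port are illustrative; with LM_TYPE=RLM, the port number is always required in LIC_SERVERS:

Begin ServiceDomain
NAME=RLMDomain
LIC_SERVERS=((5053@hostR))
LM_TYPE=RLM
End ServiceDomain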

NAME

Defines the name of the service domain.

Used for both project mode and cluster mode.

REMOTE_LMSTAT_SERVERS

Syntax

REMOTE_LMSTAT_SERVERS=host_name[(host_name ...)] [host_name[(host_name ...)] ...]

Description

Defines the remote license servers and, optionally, the remote agent hosts that serve these remote license servers.

A remote license server is a license server that does not run on the same domain as the license collector. A remote agent host serves remote license servers within the same domain, allowing the license collector to get license information on the remote license servers with a single remote connection.

Defining remote agent hosts is useful when there are both local and remote license servers, because it is slower for the license collector to connect to multiple remote license servers to get license information than it is to connect to local license servers. The license collector connects to the remote agent host (using the command specified by the REMOTE_LMSTAT_PROTOCOL parameter) and calls lmutil, lmstat, rlmutil, or rlmstat to collect license information from the license servers that the agent host serves. This allows the license collector to connect to one remote agent host to get license information from all the remote license servers on the same domain as the remote agent host. These license servers should be in the same subnet as the agent host to improve access.

Remote license servers must also be license servers defined in LIC_SERVERS. Any remote license servers defined in REMOTE_LMSTAT_SERVERS that are not also defined in LIC_SERVERS are ignored. Remote agent hosts that serve other license servers do not need to be defined in LIC_SERVERS. Remote agent hosts that are not defined in LIC_SERVERS function only as remote agents and not as license servers.

If you specify a remote agent host without additional servers (that is, the remote agent host does not serve any license servers), the remote agent host is considered to be a remote license server with itself as the remote agent host. That is, the license collector connects to the remote agent host and only gets license information on the remote agent host. Because these hosts are remote license servers, these remote agent hosts must also be defined as license servers in LIC_SERVERS, or they will be ignored.

Used for both project mode and cluster mode.

Examples

  • One local license server (hostA) and one remote license server (hostB):
    LIC_SERVERS=((1700@hostA)(1700@hostB))
    REMOTE_LMSTAT_SERVERS=hostB
    
    • The license collector runs lmutil, lmstat, rlmutil, or rlmstat directly on hostA to get license information on hostA.
    • Because hostB is defined without additional license servers, hostB is a remote agent host that only serves itself. The license collector connects to hostB (using the command specified by the REMOTE_LMSTAT_PROTOCOL parameter) and runs lmutil, lmstat, rlmutil, or rlmstat to get license information on 1700@hostB.
  • One local license server (hostA), one remote agent host (hostB) that serves one remote license server (hostC), and one remote agent host (hostD) that serves two remote license servers (hostE and hostF):
    LIC_SERVERS=((1700@hostA)(1700@hostB)(1700@hostC)(1700@hostD)(1700@hostE)(1700@hostF))
    REMOTE_LMSTAT_SERVERS=hostB(hostC) hostD(hostE hostF)
    
    • The license collector runs lmutil, lmstat, rlmutil, or rlmstat directly to get license information from 1700@hostA, 1700@hostB, and 1700@hostD.
    • The license collector connects to hostB (using the command specified by the REMOTE_LMSTAT_PROTOCOL parameter) and runs lmutil, lmstat, rlmutil, or rlmstat to get license information on 1700@hostC.

      hostB and hostC should be in the same subnet to improve access.

    • The license collector connects to hostD (using the command specified by the REMOTE_LMSTAT_PROTOCOL parameter) and runs lmutil, lmstat, rlmutil, or rlmstat to get license information on 1700@hostE and 1700@hostF.

      hostD, hostE, and hostF should be in the same subnet to improve access.

  • One local license server (hostA), one remote license server (hostB), and one remote agent host (hostC) that serves two remote license servers (hostD and hostE):
    LIC_SERVERS=((1700@hostA)(1700@hostB)(1700@hostC)(1700@hostD)(1700@hostE))
    REMOTE_LMSTAT_SERVERS=hostB hostC(hostD hostE)
    
    • The license collector runs lmutil, lmstat, rlmutil, or rlmstat directly to get license information on 1700@hostA and 1700@hostC.
    • The license collector connects to hostB (using the command specified by the REMOTE_LMSTAT_PROTOCOL parameter) and runs lmutil, lmstat, rlmutil, or rlmstat to get license information on 1700@hostB.
    • The license collector connects to hostC (using the command specified by the REMOTE_LMSTAT_PROTOCOL parameter) and runs lmutil, lmstat, rlmutil, or rlmstat to get license information on 1700@hostD and 1700@hostE.

      hostC, hostD, and hostE should be in the same subnet to improve access.

Feature section

Description

Required. Defines license distribution policies.

Feature section structure

Define a section for each feature managed by License Scheduler. If there is a parameter with the same name in the Parameters section, setting these parameters in the Feature section overrides the global setting in the Parameters section.

Begin Feature 
NAME=vcs 
LM_LICENSE_NAME=vcs 
...
Distribution policy
Parameters
...
End Feature

Parameters

  • ACCINUSE_INCLUDES_OWNERSHIP
  • ALLOC_BUFFER
  • ALLOCATION
  • CHECKOUT_FROM_FIRST_HOST_ONLY
  • CLUSTER_DISTRIBUTION
  • CLUSTER_MODE
  • DEMAND_LIMIT
  • DISABLE_PREEMPTION
  • DISTRIBUTION
  • DYNAMIC
  • ENABLE_DYNAMIC_RUSAGE
  • ENABLE_MINJOB_PREEMPTION
  • FAST_DISPATCH
  • FEATURE_DELTA
  • FLEX_NAME
  • GROUP
  • GROUP_DISTRIBUTION
  • INUSE_FROM_RUSAGE
  • LM_LICENSE_NAME
  • LM_REMOVE_INTERVAL
  • LM_REMOVE_SUSP_JOBS
  • LM_RESERVATION
  • LMREMOVE_SUSP_JOBS
  • LOCAL_TO
  • LS_ACTIVE_PERCENTAGE
  • LS_FEATURE_PERCENTAGE
  • NAME
  • NON_SHARED_DISTRIBUTION
  • PEAK_INUSE_PERIOD
  • PREEMPT_ORDER
  • PREEMPT_RESERVE
  • RETENTION_FACTOR
  • SERVICE_DOMAINS
  • WORKLOAD_DISTRIBUTION

ACCINUSE_INCLUDES_OWNERSHIP

Syntax

ACCINUSE_INCLUDES_OWNERSHIP=Y | N

Description

When disabled, accumulated use is incremented each scheduling cycle by (tokens in use) + (tokens reserved) only if this sum exceeds the number of tokens owned.

When enabled, accumulated use is incremented each scheduling cycle by (tokens in use) + (tokens reserved) regardless of the number of tokens owned.

This is useful for projects whose ownership is set very high relative to the total number of tokens available for the LSF workload. Projects can be starved for tokens when the ownership is set too high and this parameter is not enabled.

Accumulated use is displayed by the blstat command under the heading ACUM_USE.

Used only for project mode. Cluster mode and fast dispatch project mode do not track accumulated use.

Default

N, not enabled.
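
As a sketch, the following feature counts all license use toward accumulated use even though project p1 owns most of the tokens. The feature, service domain, and project names are illustrative:

Begin Feature
NAME=appA
DISTRIBUTION=LanServer(p1 1/90 p2 1)
ACCINUSE_INCLUDES_OWNERSHIP=Y
End Feature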

ALLOC_BUFFER

Syntax

ALLOC_BUFFER = buffer | cluster_name buffer ... default buffer

Description

Enables dynamic distribution of licenses across clusters in cluster mode.

Cluster names must be the names of clusters defined in the Clusters section of lsf.licensescheduler.

Used for cluster mode only.

ALLOC_BUFFER=buffer sets one buffer size for all clusters, while ALLOC_BUFFER=cluster_name buffer ... sets a different buffer size for each cluster.

The buffer size is used during dynamic redistribution of licenses. Increases in allocation are determined by the PEAK value, and increased by DEMAND for license tokens to a maximum increase of BUFFER, the buffer size configured by ALLOC_BUFFER. The license allocation can increase in steps as large as the buffer size, but no larger.

Allocation buffers help determine the maximum rate at which tokens can be transferred to a cluster as demand increases in the cluster. The maximum rate of transfer to a cluster is given by the allocation buffer divided by MBD_REFRESH_INTERVAL. Do not set the allocation buffer too large; otherwise, licenses are wasted because they are allocated to a cluster that cannot use them.

Decreases in license allocation can be larger than the buffer size, but the allocation must remain at PEAK+BUFFER licenses. The license allocation includes up to the buffer size of extra licenses, in case demand increases.

Increasing the buffer size allows the license allocation to grow faster, but also increases the number of licenses that may go unused at any given time. The buffer value must be tuned for each license feature and cluster to balance these two objectives.

When defining the buffer size for LSF Advanced Edition submission clusters, the license allocation for the entire LSF Advanced Edition mega-cluster (the submission cluster and its execution clusters) can increase in steps as large as the buffer size, but no larger.

Detailed license distribution information is shown in the blstat output.

Use the keyword default to apply a buffer size to all remaining clusters. For example:

Begin Feature
NAME = f1
CLUSTER_DISTRIBUTION = WanServers(banff 1 berlin 1 boston 1)
ALLOC_BUFFER = banff 10 default 5
End Feature

In this example, dynamic distribution is enabled. The cluster banff has a buffer size of 10, and all remaining clusters have a buffer size of 5.

To allow a cluster to be able to use licenses only when another cluster does not need them, you can set the cluster distribution for the cluster to 0, and specify an allocation buffer for the number of tokens that the cluster can request.

For example:

Begin Feature
CLUSTER_DISTRIBUTION=Wan(CL1 0 CL2 1)
ALLOC_BUFFER=5
End Feature

When no jobs are running, the token allocation for CL1 is 5. CL1 can get more than 5 tokens if CL2 does not require them.

Default

Not defined. Static distribution of licenses is used in cluster mode.

ALLOCATION

Syntax

ALLOCATION=project_name (cluster_name [number_shares] ... ) ...

cluster_name

Specify LSF cluster names or interactive tasks that licenses are to be allocated to.

project_name

Specify a License Scheduler project (described in the Projects section or as default) that is allowed to use the licenses.

number_shares

Specify a positive integer representing the number of shares assigned to the cluster.

The number of shares assigned to a cluster is only meaningful when you compare it to the number assigned to other clusters. The total number of shares is the sum of the shares assigned to each cluster.

Description

Defines the allocation of license features across clusters and interactive tasks.

Used for project mode only.

ALLOCATION ignores the global setting of the ENABLE_INTERACTIVE parameter because ALLOCATION is configured for the license feature.

You can configure the allocation of license shares to:

  • Change the share number between clusters for a feature
  • Limit the scope of license usage and change the share number between LSF jobs and interactive tasks for a feature

When defining the allocation of license features for LSF Advanced Edition submission clusters, the allocation is for the entire LSF Advanced Edition mega-cluster (the submission cluster and its execution clusters).

Tip: To manage interactive tasks in License Scheduler projects, use the LSF Task Manager, taskman. The Task Manager utility is supported by License Scheduler.

Default

If ENABLE_INTERACTIVE is not set, each cluster receives an equal share, and interactive tasks receive no shares.

Examples

Each example contains two clusters and 12 licenses of a specific feature.

Example 1

ALLOCATION is not configured. The ENABLE_INTERACTIVE parameter is not set.

Begin Parameters
...
ENABLE_INTERACTIVE=n 
...
End Parameters
Begin Feature 
NAME=ApplicationX 
DISTRIBUTION=LicenseServer1 (Lp1 1) 
End Feature

Six licenses are allocated to each cluster. No licenses are allocated to interactive tasks.

Example 2

ALLOCATION is not configured. The ENABLE_INTERACTIVE parameter is set.

Begin Parameters
...
ENABLE_INTERACTIVE=y
...
End Parameters
Begin Feature 
NAME=ApplicationX 
DISTRIBUTION=LicenseServer1 (Lp1 1) 
End Feature

Four licenses are allocated to each cluster. Four licenses are allocated to interactive tasks.

Example 3

In the following example, the ENABLE_INTERACTIVE parameter does not affect the ALLOCATION configuration of the feature.

ALLOCATION is configured. The ENABLE_INTERACTIVE parameter is set.

Begin Parameters
...
ENABLE_INTERACTIVE=y 
...
End Parameters
Begin Feature 
NAME=ApplicationY 
DISTRIBUTION=LicenseServer1 (Lp2 1)
ALLOCATION=Lp2(cluster1 1 cluster2 0 interactive 1) 
End Feature

The ENABLE_INTERACTIVE setting is ignored. Licenses are shared equally between cluster1 and interactive tasks. Six licenses of ApplicationY are allocated to cluster1. Six licenses are allocated to interactive tasks.

Example 4

In the following example, the ENABLE_INTERACTIVE parameter does not affect the ALLOCATION configuration of the feature.

ALLOCATION is configured. The ENABLE_INTERACTIVE parameter is not set.

Begin Parameters
...
ENABLE_INTERACTIVE=n 
...
End Parameters
Begin Feature 
NAME=ApplicationZ 
DISTRIBUTION=LicenseServer1 (Lp1 1) 
ALLOCATION=Lp1(cluster1 0 cluster2 1 interactive 2) 
End Feature

The ENABLE_INTERACTIVE setting is ignored. Four licenses of ApplicationZ are allocated to cluster2. Eight licenses are allocated to interactive tasks.

CHECKOUT_FROM_FIRST_HOST_ONLY

Syntax

CHECKOUT_FROM_FIRST_HOST_ONLY=Y | N

Description

If enabled, LSF License Scheduler considers only the user@host information for the first execution host of a parallel job when merging the license usage data. The setting in an individual Feature section overrides the global setting in the Parameters section.

If a feature has multiple Feature sections (using LOCAL_TO), each section must have the same setting for CHECKOUT_FROM_FIRST_HOST_ONLY.

If disabled, LSF License Scheduler attempts to check out user@host keys in the parallel job constructed using the user name and all execution host names, and merges the corresponding checkout information on the service domain if found. If MERGE_BY_SERVICE_DOMAIN=Y is defined, LSF License Scheduler also merges multiple user@host data for parallel jobs across different service domains.

Default

Undefined (N). LSF License Scheduler attempts to check out user@host keys in the parallel job constructed using the user name and all execution host names, and merges the corresponding checkout information on the service domain if found.
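
For example, to merge license usage for parallel jobs using only the first execution host for one feature. The feature, service domain, and project names in this sketch are illustrative:

Begin Feature
NAME=appB
DISTRIBUTION=LanServer(p1 1)
CHECKOUT_FROM_FIRST_HOST_ONLY=Y
End Feature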

CLUSTER_DISTRIBUTION

Syntax

CLUSTER_DISTRIBUTION=service_domain(cluster shares/min/max ... )...

service_domain

Specify a License Scheduler WAN service domain (described in the ServiceDomain section) that distributes licenses to multiple clusters, and the share for each cluster.

Specify a License Scheduler LAN service domain for a single cluster.

cluster

Specify each LSF cluster that accesses licenses from this service domain.

shares

For each cluster specified for a WAN service domain, specify a positive integer representing the number of shares assigned to the cluster. (Not required for a LAN service domain.)

The number of shares assigned to a cluster is only meaningful when you compare it to the number assigned to other clusters, or to the total number assigned by the service domain. The total number of shares is the sum of the shares assigned to each cluster.

min

The minimum allocation is allocated exclusively to the cluster, and is similar to the non-shared allocation in project mode.

Cluster shares take precedence over minimum allocations configured. If the minimum allocation exceeds the cluster's share of the total tokens, a cluster's allocation as given by bld may be less than the configured minimum allocation.

max

Optionally, specify a positive integer representing the maximum number of license tokens allocated to the cluster when dynamic allocation is enabled for a WAN service domain (that is, when ALLOC_BUFFER is defined for the feature).

Description

CLUSTER_DISTRIBUTION must be defined when using cluster mode.

Defines the cross-cluster distribution policies for the license. The name of each service domain is followed by its distribution policy, in parentheses. The distribution policy determines how the licenses available in each service domain are distributed among the clients.

The distribution policy is a space-separated list with each cluster name followed by its share assignment. The share assignment determines what fraction of available licenses is assigned to each cluster, in the event of competition between clusters.

Examples

CLUSTER_DISTRIBUTION=wanserver(Cl1 1 Cl2 1 Cl3 1 Cl4 1)
CLUSTER_DISTRIBUTION = SD(C1 1 C2 1) SD1(C3 1 C4 1) SD2(C1 1) SD3(C2 1)

In these examples, wanserver, SD, and SD1 are WAN service domains, while SD2 and SD3 are LAN service domains serving a single cluster.
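
The optional min and max fields can be sketched as follows. The feature, service domain, and cluster names are illustrative, and the max value takes effect only because ALLOC_BUFFER enables dynamic allocation:

Begin Feature
NAME=f2
CLUSTER_DISTRIBUTION=SD(C1 1/5 C2 1/0/20)
ALLOC_BUFFER=5
End Feature

Here C1 has a share of 1 with a minimum of 5 tokens allocated exclusively to it, and C2 has a share of 1 with no minimum and a maximum allocation of 20 tokens.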

CLUSTER_MODE

Syntax

CLUSTER_MODE=Y | N

Description

Enables cluster mode (instead of project mode) for the license feature.

Cluster mode emphasizes high utilization of license tokens above other considerations such as ownership. License ownership and sharing can still be configured, but within each cluster instead of across multiple clusters. Preemption of jobs (and licenses) also occurs within each cluster instead of across clusters.

Cluster mode was introduced in License Scheduler 8.0. Before cluster mode was introduced, project mode was the only choice available.

Default

Undefined (N). License Scheduler runs in project mode.
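
A minimal cluster mode feature might look like the following sketch. The feature, service domain, and cluster names are illustrative; CLUSTER_DISTRIBUTION is required in cluster mode:

Begin Feature
NAME=f1
CLUSTER_MODE=Y
CLUSTER_DISTRIBUTION=WanServers(clusterA 1 clusterB 1)
End Feature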

DEMAND_LIMIT

Syntax

DEMAND_LIMIT=integer

Description

Sets a limit to which LSF License Scheduler considers the demand by each project in each cluster when allocating licenses.

Used for project mode only.

When enabled, the demand limit helps prevent LSF License Scheduler from allocating more licenses to a project than can actually be used, which reduces license waste by limiting the demand that LSF License Scheduler considers. This is useful when other resource limits are reached: without a demand limit, LSF License Scheduler allocates more tokens than IBM Spectrum LSF can actually use because jobs are still pending due to lack of other resources.

When disabled (that is, DEMAND_LIMIT=0 is set), LSF License Scheduler takes into account all the demand reported by each cluster when scheduling.

DEMAND_LIMIT does not affect the DEMAND that blstat displays. Instead, blstat displays the entire demand sent for a project from all clusters. For example, one cluster reports a demand of 15 for a project, and another cluster reports a demand of 20 for the same project. With the default DEMAND_LIMIT=5, LSF License Scheduler considers a demand of five from each cluster when allocating licenses, but the DEMAND that blstat displays is 35.

Periodically, each cluster sends a demand for each project. This is calculated in a cluster for a project by summing up the rusage of all jobs of the project pending due to lack of licenses. Whether to count a job's rusage in the demand depends on the job's pending reason. In general, the demand reported by a cluster only represents a potential demand from the project. It does not take into account other resources that are required to start a job. For example, a demand for 100 licenses is reported for a project. However, if LSF License Scheduler allocates 100 licenses to the project, the project does not necessarily use all 100 licenses due to slot availability, limits, or other scheduling constraints.

The mbatchd daemon in each cluster sends a demand for licenses from each project. You can limit the amount of demand from each project in each cluster that is considered when scheduling by setting the demand limit. This helps prevent LSF License Scheduler from allocating more licenses to a project than can actually be used.

Default

5
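
For example, to consider at most 10 tokens of demand per project per cluster. The feature, service domain, and project names in this sketch are illustrative; set DEMAND_LIMIT=0 to disable the limit entirely:

Begin Feature
NAME=appC
DISTRIBUTION=LanServer(p1 1 p2 1)
DEMAND_LIMIT=10
End Feature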

DISABLE_PREEMPTION

Syntax

DISABLE_PREEMPTION=Y | y | N | n

Description

When set to Y or y, DISABLE_PREEMPTION prevents license preemption and resets project ownership to 0. This ensures that license preemption never occurs, regardless of whether projects are configured with ownership or NON_SHARED.

Used for project mode only. Cluster mode does not support this parameter.

Default

N

DISTRIBUTION

Syntax

DISTRIBUTION=[service_domain_name([project_name number_shares[/number_licenses_owned]] ... [default] )] ...

service_domain_name
Specify a License Scheduler service domain (described in the ServiceDomain section) that distributes the licenses.
project_name
Specify a License Scheduler project (described in the Projects section) that is allowed to use the licenses.
number_shares
Specify a positive integer representing the number of shares assigned to the project.

The number of shares assigned to a project is only meaningful when you compare it to the number assigned to other projects, or to the total number assigned by the service domain. The total number of shares is the sum of the shares assigned to each project.

number_licenses_owned
Specify a slash (/) and a positive integer representing the number of licenses that the project owns. When configured, preemption is enabled and owned licenses are reclaimed using preemption when there is unmet demand.
default
A reserved keyword that represents the default project if the job submission does not specify a project (bsub -Lp), or the specified project is not configured in the Projects section of lsf.licensescheduler. Jobs that belong to projects do not get a share of the tokens when the project is not explicitly defined in DISTRIBUTION.

Description

Used for project mode and fast dispatch project mode only.

One of DISTRIBUTION or GROUP_DISTRIBUTION must be defined when using project mode. GROUP_DISTRIBUTION and DISTRIBUTION are mutually exclusive. If defined in the same feature, the License Scheduler daemon returns an error and ignores this feature.

Defines the distribution policies for the license. The name of each service domain is followed by its distribution policy, in parentheses. The distribution policy determines how the licenses available in each service domain are distributed among the clients.

You can only specify one service domain.

The distribution policy is a space-separated list with each project name followed by its share assignment. The share assignment determines what fraction of available licenses is assigned to each project, in the event of competition between projects. Optionally, the share assignment is followed by a slash and the number of licenses owned by that project. License ownership enables a preemption policy. In the event of competition between projects, projects that own licenses preempt jobs. Licenses are returned to the owner immediately.

Examples

DISTRIBUTION=wanserver (Lp1 1 Lp2 1 Lp3 1 Lp4 1)
In this example, the service domain named wanserver shares licenses equally among four projects. If all projects are competing for a total of eight licenses, each project is entitled to two licenses at all times. If all projects are competing for only two licenses in total, each project is entitled to a license half the time.
DISTRIBUTION=lanserver1 (Lp1 1 Lp2 2/6)

In this example, the service domain named lanserver1 allows Lp1 to use one third of the available licenses and Lp2 can use two thirds of the licenses. However, Lp2 is always entitled to six licenses, and can preempt another project to get the licenses immediately if they are needed. If the projects are competing for a total of 12 licenses, Lp2 is entitled to eight licenses (six on demand, and two more as soon as they are free). If the projects are competing for only six licenses in total, Lp2 is entitled to all of them, and Lp1 can only use licenses when Lp2 does not need them.

DYNAMIC

Syntax

DYNAMIC=Y | N

Description

If you specify DYNAMIC=Y, you must specify a duration in an rusage resource requirement for the feature. This enables License Scheduler to treat the license as a dynamic resource and prevents License Scheduler from scheduling tokens for the feature when they are not available, or reserving license tokens when they should actually be free.

Used for project mode only. Cluster mode does not support rusage duration.
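
As a sketch, the following feature enables dynamic handling; the feature, service domain, and project names are illustrative:

Begin Feature
NAME=appD
DISTRIBUTION=LanServer(p1 1)
DYNAMIC=Y
End Feature

Jobs that use the feature must then specify a duration in the rusage resource requirement, for example (the project name and command are illustrative): bsub -Lp p1 -R "rusage[appD=1:duration=5]" mycommand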

ENABLE_DYNAMIC_RUSAGE

Syntax

ENABLE_DYNAMIC_RUSAGE=Y | N

Description

When enabled, ENABLE_DYNAMIC_RUSAGE causes license checkouts in excess of a job's rusage for the feature to be considered managed checkouts, instead of unmanaged (or OTHERS) checkouts.

Used for project mode only. Cluster mode and fast dispatch project mode do not support this parameter.

ENABLE_MINJOB_PREEMPTION

Syntax

ENABLE_MINJOB_PREEMPTION=Y | N

Description

Minimizes the overall number of preempted jobs by enabling job list optimization. For example, for a job that requires 10 licenses, License Scheduler preempts one job that uses 10 or more licenses rather than 10 jobs that each use one license.

Used for project mode only.

Default

Undefined: License Scheduler does not optimize the job list when selecting jobs to preempt.

FAST_DISPATCH

Syntax

FAST_DISPATCH=Y | N

Description

Enables fast dispatch project mode for the license feature, which increases license utilization for project licenses.

Used for project mode only.

When enabled, LSF License Scheduler does not have to run lmutil, lmstat, rlmutil, or rlmstat to verify that a license is free before each job dispatch. As soon as a job finishes, the cluster can reuse its licenses for another job of the same project, which keeps gaps between jobs small. However, because LSF License Scheduler does not run lmutil, lmstat, rlmutil, or rlmstat to verify that the license is free, there is an increased chance of a license checkout failure for jobs if the license is already in use by a job in another project.

The fast dispatch project mode supports the following parameters in the Feature section:

  • ALLOCATION
  • DEMAND_LIMIT
  • DISTRIBUTION
  • GROUP_DISTRIBUTION
  • LM_LICENSE_NAME
  • LS_FEATURE_PERCENTAGE
  • NAME
  • NON_SHARED_DISTRIBUTION
  • SERVICE_DOMAINS
  • WORKLOAD_DISTRIBUTION

The fast dispatch project mode also supports the MBD_HEARTBEAT_INTERVAL parameter in the Parameters section.

Other parameters are not supported, including those that project mode supports, such as the following parameters:

  • ACCINUSE_INCLUDES_OWNERSHIP
  • DYNAMIC
  • GROUP
  • LOCAL_TO
  • LS_ACTIVE_PERCENTAGE

Default

N
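
As a sketch, the following feature enables fast dispatch project mode; the feature, service domain, and project names are illustrative:

Begin Feature
NAME=appE
DISTRIBUTION=LanServer(p1 1 p2 1)
FAST_DISPATCH=Y
End Feature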

FEATURE_DELTA

Syntax

FEATURE_DELTA=service_domain_name(abs | mult number) ...

FEATURE_DELTA=service_domain_name(abs | mult dyn([init([integer])] [max([integer])] [step([integer])] [dynstep([hist_time factor])] [target([number])] [interval([integer])]))

Description

Allows an administrator to specify a number of dispatched jobs greater than the actual number of tokens managed by the license server. Use this parameter when utilization is low due to an external application's slow checkout of licenses. The result is higher average utilization at the cost of running some jobs less efficiently (jobs that are dispatched but must wait for a token).

Used for both project mode and cluster mode.

Each service domain can be listed only once in the FEATURE_DELTA parameter; otherwise, all FEATURE_DELTA settings are ignored.

  • abs: Specifies an absolute number of dispatched jobs greater than tokens managed by the license server. Specify a positive integer.
  • mult: Specifies a multiplication factor of dispatched jobs greater than tokens managed by the license server. Specify a decimal number between 1 and 2. If you specify a number greater than 2, most of your jobs wait for a token and run less efficiently.
  • dyn: Use with the abs or mult keywords (instead of a number) to enable LSF License Scheduler to dynamically adjust the number of dispatched jobs greater than tokens managed by the license server.

    Enable dynamic adjustment for features that have a fluctuating license utilization and therefore require a more flexible value. The following optional parameters control this dynamic adjustment:

    • init: Specifies the initial number of dispatched jobs greater than tokens managed by the license server when the bld daemon starts up or there are no jobs running in the servers that are specified in the FEATURE_DELTA parameter. Specify an integer for abs or a decimal number for mult. The default value is 0 for abs and 1.0 for mult.
    • max: Specifies the adjustment number to calculate the maximum number of dispatched jobs greater than tokens managed by the license server. To obtain the adjusted maximum number, the value of max is added to the init value for abs and multiplied by the init value for mult. Specify an integer for abs or a decimal number for mult.

      The default value for abs is the same as the init value, and the default value for mult is 2.0. This means that the default adjusted maximum number is double the init value for both abs and mult.

    • step: Specifies the amount to adjust (if needed) in each interval. Specify an integer for abs or a decimal number for mult. The default value is either 1 or 5% of the init value, whichever is greater.
    • dynstep(hist_time factor): Use instead of step to specify a dynamic amount to adjust in each interval, where hist_time is the time window (in minutes) over which LSF License Scheduler calculates the average token utilization, and factor adjusts the dynamic amount to be more or less aggressive towards the target value.
    • target: Specifies the target license utilization rate. Specify a decimal number. The default value is 0.9.
    • interval: Specifies the license usage checking interval per collector update. That is, 1 indicates that LSF License Scheduler checks and adjusts every collector update, and 2 indicates that LSF License Scheduler checks and adjusts every 2 collector updates. The default value is 3 (LSF License Scheduler checks and adjusts every 3 collector updates).
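
Combining these keywords, the dynamic form can be sketched as follows (the feature name, service domain, and keyword values here are hypothetical):

Begin Feature
NAME = f2
DISTRIBUTION = LanServer(p1 1)
FEATURE_DELTA = LanServer(abs dyn(init(2) max(6) step(1) target(0.85) interval(2)))
End Feature

In this sketch, LSF License Scheduler starts with 2 extra dispatch slots, can grow the delta to an adjusted maximum of 2+6=8, and adjusts by 1 every 2 collector updates toward an 85% utilization target.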

Example

In lsf.licensescheduler, define the following Service domains and Feature section:

Begin ServiceDomain
NAME = LanServer
LIC_SERVERS = ((38000@bl10e-15) (38000@bl10e-16))
End ServiceDomain
Begin ServiceDomain
NAME = LanServer1
LIC_SERVERS = ((38000@bl10e-17))
End ServiceDomain
Begin Feature
NAME = f1
DISTRIBUTION = LanServer(p1 1  p2 1  p3 1) LanServer1(p1 1  p2 1)
FEATURE_DELTA = LanServer(mult 1.5) LanServer1(abs 4)  
End Feature

Result

If LanServer had 6 licenses and LanServer1 had 5 licenses, the number of tokens are now:

LanServer: (6*1.5)= 9

LanServer1: (5 + 4)=9

The output of blstat displays the total count, as added to or multiplied by the value specified in FEATURE_DELTA.

FLEX_NAME (Obsolete)

Syntax

FLEX_NAME=feature_name1 [feature_name2 ...]

Description

Replace FLEX_NAME with LM_LICENSE_NAME. FLEX_NAME is only maintained for backwards compatibility.

GROUP

Syntax

GROUP=[group_name(project_name... )] ...

group_name

Specify a name for a group of projects. This is different from a ProjectGroup section; groups of projects are not hierarchical.

project_name

Specify a License Scheduler project (described in the Projects section) that is allowed to use the licenses. The project must appear in the DISTRIBUTION parameter and can belong to only one group.

Description

Defines groups of projects and specifies the name of each group. The groups defined here are used for group preemption. The number of licenses owned by the group is the total number of licenses owned by member projects.

Used only for project mode. Cluster mode and fast dispatch project mode do not track accumulated use.

This parameter is ignored if GROUP_DISTRIBUTION is also defined.

Example

For example, without the GROUP configuration in the following Feature section, proj1 owns 4 license tokens and can reclaim them using preemption. After adding the GROUP configuration, proj1 and proj2 together own 8 license tokens. If proj2 is idle, proj1 can reclaim all 8 license tokens using preemption.

Begin Feature
NAME = AppY
DISTRIBUTION = LanServer1(proj1 1/4 proj2 1/4 proj3 2)
GROUP = GroupA(proj1 proj2)
End Feature

GROUP_DISTRIBUTION

Syntax

GROUP_DISTRIBUTION=top_level_hierarchy_name

top_level_hierarchy_name

Specify the name of the top level hierarchical group.

Description

Defines the name of the hierarchical group containing the distribution policy attached to this feature, where the hierarchical distribution policy is defined in a ProjectGroup section.

One of DISTRIBUTION or GROUP_DISTRIBUTION must be defined when using project mode. GROUP_DISTRIBUTION and DISTRIBUTION are mutually exclusive. If defined in the same feature, the License Scheduler daemon returns an error and ignores this feature.

If GROUP is also defined, it is ignored in favor of GROUP_DISTRIBUTION.

Example

The following example shows the GROUP_DISTRIBUTION parameter enabling hierarchical scheduling for the top-level hierarchical group named groups. The SERVICE_DOMAINS parameter defines a list of service domains that provide tokens for the group.

Begin Feature 
NAME = myjob2 
GROUP_DISTRIBUTION = groups 
SERVICE_DOMAINS = LanServer wanServer 
End Feature
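
The top-level group named groups must itself be defined in a ProjectGroup section. A minimal sketch, with hypothetical member project names:

Begin ProjectGroup
GROUP            SHARES  OWNERSHIP LIMITS NON_SHARED
(groups (p1 p2)) (1 1)   ()        ()     ()
End ProjectGroup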

INUSE_FROM_RUSAGE

Syntax

INUSE_FROM_RUSAGE=Y | N

Description

When not defined or set to N, the INUSE value uses rusage from bsub job submissions merged with license checkout data reported by blcollect (as reported by blstat).

When INUSE_FROM_RUSAGE=Y, the INUSE value uses the rusage from bsub job submissions instead of waiting for the blcollect update. This can result in faster reallocation of tokens when using dynamic allocation (when ALLOC_BUFFER is set).

When used for individual license features, the Feature section setting overrides the global Parameters section setting.

Used for cluster mode only.
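
For example, a sketch of a cluster mode feature that takes INUSE from rusage for faster reallocation (the feature, service domain, and cluster names here are hypothetical):

Begin Feature
NAME = f1
CLUSTER_MODE = Y
CLUSTER_DISTRIBUTION = WanServer(clusterA 1 clusterB 1)
ALLOC_BUFFER = 10
INUSE_FROM_RUSAGE = Y
End Feature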

Default

N

LM_LICENSE_NAME

Syntax

LM_LICENSE_NAME=feature_name1 [feature_name2 ...]

Description

Defines the feature name—the name used by the license manager to identify the type of license. You only need to specify this parameter if the License Scheduler token name is not identical to the license manager feature name.

LM_LICENSE_NAME allows the NAME parameter to be an alias of the license manager feature name. For feature names that start with a number or contain a dash (-), you must set both NAME and LM_LICENSE_NAME, where LM_LICENSE_NAME is the actual license manager feature name, and NAME is an arbitrary license token name you choose.

Specify a space-delimited list of feature names in LM_LICENSE_NAME to combine multiple license manager features into one feature name specified under the NAME parameter. This allows you to use the same feature name for multiple license manager features (that are interchangeable for applications). LSF recognizes the alias of the combined feature (specified in NAME) as a feature name instead of the individual license manager feature names specified in LM_LICENSE_NAME. When submitting a job to LSF, users specify the combined feature name in the bsub rusage string, which allows the job to use any token from any of the features specified in LM_LICENSE_NAME.

Example

To specify AppZ201 as an alias for the license manager feature named 201-AppZ:

Begin Feature 
LM_LICENSE_NAME=201-AppZ
NAME=AppZ201 
DISTRIBUTION=LanServer1(Lp1 1 Lp2 1) 
End Feature

To combine two license manager features (201-AppZ and 202-AppZ) into a feature named AppZ201:

Begin Feature 
LM_LICENSE_NAME=201-AppZ 202-AppZ
NAME=AppZ201
DISTRIBUTION=LanServer1(Lp1 1 Lp2 1) 
End Feature

AppZ201 is a combined feature that uses both 201-AppZ and 202-AppZ tokens. Submitting a job with AppZ201 in the rusage string (for example, bsub -Lp Lp1 -R "rusage[AppZ201=2]" myjob) means that the job checks out tokens for either 201-AppZ or 202-AppZ.

LM_REMOVE_INTERVAL

Syntax

LM_REMOVE_INTERVAL=seconds

Description

Specifies the minimum time a job must have a license checked out before lmremove or rlmremove can remove the license. lmremove or rlmremove causes the license manager daemon and vendor daemons to close the TCP connection with the application. They can then retry the license checkout.

When using lmremove or rlmremove as part of the preemption action (LM_REMOVE_SUSP_JOBS), define LM_REMOVE_INTERVAL=0 to ensure that LSF License Scheduler can preempt a job immediately after checkout. After suspending the job, LSF License Scheduler then uses lmremove or rlmremove to release licenses from the job.

Used for both project mode and cluster mode.

The value specified for a feature overrides the global value defined in the Parameters section. Each feature definition can specify a different value for this parameter.
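
For example, to require that a feature's licenses be checked out for at least 180 seconds before lmremove or rlmremove can remove them (the feature, service domain, and project names here are hypothetical):

Begin Feature
NAME = AppA
DISTRIBUTION = LanServer(p1 1 p2 1)
LM_REMOVE_INTERVAL = 180
End Feature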

Default

Undefined: LSF License Scheduler applies the global value.

LM_REMOVE_SUSP_JOBS

Syntax

LM_REMOVE_SUSP_JOBS=seconds

Description

Enables LSF License Scheduler to use lmremove or rlmremove to remove license features from each recently-suspended job. After enabling this parameter, the preemption action is to suspend the job's processes and use lmremove or rlmremove to remove licences from the application. lmremove or rlmremove causes the license manager daemon and vendor daemons to close the TCP connection with the application.

LSF License Scheduler continues to try removing the license feature for the specified number of seconds after the job is first suspended. When setting this parameter for an application, specify a value greater than the period after a license checkout during which lmremove or rlmremove fails for the application. This ensures that when a job is suspended, its licenses are released. This period depends on the application.

When using lmremove or rlmremove as part of the preemption action, define LM_REMOVE_INTERVAL=0 to ensure that LSF License Scheduler can preempt a job immediately after checkout. After suspending the job, LSF License Scheduler then uses lmremove or rlmremove to release licenses from the job.

Used for fast dispatch project mode only.

The value specified for a feature overrides the global value defined in the Parameters section. Each feature definition can specify a different value for this parameter.
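
A sketch combining the two parameters for preemption in fast dispatch project mode (the feature, service domain, and project names here are hypothetical):

Begin Feature
NAME = AppB
DISTRIBUTION = LanServer(p1 1 p2 1)
FAST_DISPATCH = Y
LM_REMOVE_INTERVAL = 0
LM_REMOVE_SUSP_JOBS = 60
End Feature

With this configuration, after suspending a preempted job, LSF License Scheduler keeps trying lmremove or rlmremove for up to 60 seconds to release the job's licenses.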

Default

Undefined. The default preemption action is to send a TSTP signal to the job.

LM_RESERVATION

Syntax

LM_RESERVATION=Y | N

Description

Enables LSF License Scheduler to support the FlexNet Manager license reservation keyword (RESERVE).

When LM_RESERVATION=Y is defined, LSF License Scheduler treats the RESERVE value in the FlexNet Manager license option file as OTHERS tokens instead of FREE tokens. The RESERVE value is now included in the OTHERS value in the blstat command output and is no longer included in the FREE value.

This parameter is ignored if it is defined in a time based configuration, or if the WORKLOAD_DISTRIBUTION parameter is defined in this feature.

The value specified for a feature overrides the global value defined in the Parameters section. Each feature definition can specify a different value for this parameter.

Note: The license tokens that are reserved with FlexNet Manager must be used outside of LSF License Scheduler.
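
For example (the feature, service domain, and project names here are hypothetical):

Begin Feature
NAME = f1
DISTRIBUTION = LanServer(p1 1 p2 1)
LM_RESERVATION = Y
End Feature

With this setting, tokens covered by RESERVE lines in the FlexNet Manager license option file are counted in the OTHERS value rather than the FREE value in blstat output.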

Default

Undefined: LSF License Scheduler applies the global value.

LMREMOVE_SUSP_JOBS (Obsolete)

Syntax

LMREMOVE_SUSP_JOBS=seconds

Description

Replace LMREMOVE_SUSP_JOBS with LM_REMOVE_SUSP_JOBS. LMREMOVE_SUSP_JOBS is only maintained for backwards compatibility.

LOCAL_TO

Syntax

LOCAL_TO=cluster_name | location_name(cluster_name [cluster_name ...])

Description

Used only for project mode. Cluster mode and fast dispatch project mode do not track accumulated use.

Configures token locality for the license feature. You must configure separate Feature sections for the same feature based on locality. By default, if LOCAL_TO is not defined, the feature is available to all clients and is not restricted by geographical location. When LOCAL_TO is configured for a feature, License Scheduler treats license features served to different locations as different token names, and distributes the tokens to projects according to the distribution and allocation policies for the feature.

LOCAL_TO cannot be defined for LSF Advanced Edition submission clusters.

LOCAL_TO allows you to limit features from different service domains to specific clusters, so License Scheduler only grants tokens of a feature to jobs from clusters that are entitled to them.

For example, if your license servers restrict the serving of license tokens to specific geographical locations, use LOCAL_TO to specify the locality of a license token if any feature cannot be shared across all the locations. This avoids having to define different distribution and allocation policies for different service domains, and allows hierarchical group configurations.

License Scheduler manages features with different localities as different resources. Use blinfo and blstat to see the different resource information for the features depending on their cluster locality.

License features with different localities must be defined in different feature sections. The same Service Domain can appear only once in the configuration for a given license feature.

A configuration like LOCAL_TO=Site1(clusterA clusterB) configures the feature for more than one cluster when using project mode.

A configuration like LOCAL_TO=clusterA configures locality for only one cluster. This is the same as LOCAL_TO=clusterA(clusterA).

Cluster names must be the names of clusters defined in the Clusters section of lsf.licensescheduler.

Examples

Begin Feature
NAME = hspice
DISTRIBUTION = SD1 (Lp1 1 Lp2 1)
LOCAL_TO = siteUS(clusterA clusterB)
End Feature
Begin Feature
NAME = hspice
DISTRIBUTION = SD2 (Lp1 1 Lp2 1)
LOCAL_TO = clusterA
End Feature
Begin Feature
NAME = hspice
DISTRIBUTION = SD3 (Lp1 1 Lp2 1) SD4 (Lp1 1 Lp2 1)
End Feature

Or use the hierarchical group configuration (GROUP_DISTRIBUTION):

Begin Feature
NAME = hspice
GROUP_DISTRIBUTION = group1
SERVICE_DOMAINS = SD1
LOCAL_TO = clusterA
End Feature
Begin Feature
NAME = hspice
GROUP_DISTRIBUTION = group1
SERVICE_DOMAINS = SD2
LOCAL_TO = clusterB
End Feature
Begin Feature
NAME = hspice
GROUP_DISTRIBUTION = group1
SERVICE_DOMAINS = SD3 SD4
End Feature

Default

Not defined. The feature is available to all clusters and taskman jobs, and is not restricted by cluster.

LS_ACTIVE_PERCENTAGE

Syntax

LS_ACTIVE_PERCENTAGE=Y | N

Description

Configures license ownership in percentages instead of absolute numbers and adjusts ownership for inactive projects. Sets LS_FEATURE_PERCENTAGE=Y automatically.

Setting LS_ACTIVE_PERCENTAGE=Y dynamically adjusts ownership based on project activity, setting ownership to zero for inactive projects and restoring the configured ownership setting when projects become active. If the total ownership for the license feature is greater than 100%, each ownership value is scaled appropriately for a total ownership of 100%.

Used only for project mode. Cluster mode and fast dispatch project mode do not track accumulated use.
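
For example, a sketch with hypothetical project names, where ownership is specified as percentages in the DISTRIBUTION values:

Begin Feature
NAME = AppX
DISTRIBUTION = LanServer(p1 1/30 p2 1/20 p3 1)
LS_ACTIVE_PERCENTAGE = Y
End Feature

If p1 becomes inactive, its 30% ownership is treated as zero until p1 is active again, at which point the configured 30% is restored.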

Default

N (Ownership values are not changed based on project activity.)

LS_FEATURE_PERCENTAGE

Syntax

LS_FEATURE_PERCENTAGE=Y | N

Description

Configures license ownership in percentages instead of absolute numbers. When not combined with hierarchical projects, affects the owned values in DISTRIBUTION and the NON_SHARED_DISTRIBUTION values only.

When using hierarchical projects, percentage is applied to OWNERSHIP, LIMITS, and NON_SHARED values.

Used for project mode and fast dispatch project mode only.

Example 1

Begin Feature
LS_FEATURE_PERCENTAGE = Y
DISTRIBUTION = LanServer (p1 1 p2 1 p3 1/20)
...
End Feature

The service domain LanServer shares licenses equally among three License Scheduler projects. P3 is always entitled to 20% of the total licenses, and can preempt another project to get the licenses immediately if they are needed.

Example 2

With LS_FEATURE_PERCENTAGE=Y in feature section and using hierarchical project groups:

Begin ProjectGroup
GROUP      SHARES    OWNERSHIP    LIMITS  NON_SHARED
(R (A p4))  (1  1)     ()         ()         ()
(A (B p3))  (1  1)     (- 10)     (- 20)     ()
(B (p1 p2)) (1  1)     (30 -)     ()       (- 5)
End ProjectGroup

Project p1 owns 30% of the total licenses and project p3 owns 10% of the total licenses. P3's LIMITS value is 20% of the total licenses, and p2's NON_SHARED value is 5%.

Default

N (Ownership is not configured with percentages, but with absolute numbers.)

NAME

Syntax

NAME=feature_name

Required. Defines the token name—the name used by License Scheduler and LSF to identify the license feature.

Normally, license token names should be the same as the FlexNet Licensing feature names, as they represent the same license. However, LSF does not support names that start with a number, or names containing a dash or hyphen character (-), which may be used in the FlexNet Licensing feature name.

NON_SHARED_DISTRIBUTION

Syntax

NON_SHARED_DISTRIBUTION=service_domain_name ([project_name number_non_shared_licenses] ... ) ...

service_domain_name

Specify a License Scheduler service domain (described in the ServiceDomain section) that distributes the licenses.

project_name

Specify a License Scheduler project (described in the Projects section) that is allowed to use the licenses.

number_non_shared_licenses

Specify a positive integer representing the number of non-shared licenses that the project owns.

Description

Defines non-shared licenses. Non-shared licenses are privately owned, and are not shared with other license projects. They are available only to one project.

Used for project mode and fast dispatch project mode only.

Use blinfo -a to display NON_SHARED_DISTRIBUTION information.

For projects defined with NON_SHARED_DISTRIBUTION, you must assign the project an OWNERSHIP value equal to or greater than the number of non-shared licenses. If the number of owned licenses is less than the number of non-shared licenses, OWNERSHIP is set to the number of non-shared licenses.

Examples

  • If the number of tokens normally given to a project (to satisfy the DISTRIBUTION share ratio) is larger than its NON_SHARED_DISTRIBUTION value, the DISTRIBUTION share ratio takes effect first.
    Begin Feature
    NAME=f1 # total 15 on LanServer
    LM_LICENSE_NAME=VCS-RUNTIME
    DISTRIBUTION=LanServer(Lp1 4/10 Lp2 1)
    NON_SHARED_DISTRIBUTION=LanServer(Lp1 10)
    End Feature
    

    In this example, 10 non-shared licenses are defined for the Lp1 project on LanServer. The DISTRIBUTION share ratio for Lp1:Lp2 is 4:1. If there are 15 licenses, Lp1 will normally get 12 licenses, which is larger than its NON_SHARED_DISTRIBUTION value of 10. Therefore, the DISTRIBUTION share ratio takes effect, so Lp1 gets 12 licenses and Lp2 gets 3 licenses for the 4:1 share ratio.

  • If the number of tokens normally given to a project (to satisfy the DISTRIBUTION share ratio) is smaller than its NON_SHARED_DISTRIBUTION value, the project will first get the number of tokens equal to NON_SHARED_DISTRIBUTION, then the DISTRIBUTION share ratio for the other projects takes effect for the remaining licenses.
    • For one project with non-shared licenses and one project with no non-shared licenses, the project with no non-shared licenses is given all the remaining licenses, since it would normally be given more according to the DISTRIBUTION share ratio:
      Begin Feature
      NAME=f1 # total 15 on LanServer
      LM_LICENSE_NAME=VCS-RUNTIME
      DISTRIBUTION=LanServer(Lp1 1/10 Lp2 4)
      NON_SHARED_DISTRIBUTION=LanServer(Lp1 10)
      End Feature
      

      In this example, 10 non-shared licenses are defined for the Lp1 project on LanServer. The DISTRIBUTION share ratio for Lp1:Lp2 is 1:4. If there are 15 licenses, Lp1 will normally get three licenses, which is smaller than its NON_SHARED_DISTRIBUTION value of 10. Therefore, Lp1 gets the first 10 licenses, and Lp2 gets the remaining five licenses (since it would normally get more according to the share ratio).

    • For one project with non-shared licenses and two or more projects with no non-shared licenses, the two projects with no non-shared licenses are assigned the remaining licenses according to the DISTRIBUTION share ratio with each other, ignoring the share ratio for the project with non-shared licenses.
      Begin Feature
      NAME=f1 # total 15 on LanServer
      LM_LICENSE_NAME=VCS-RUNTIME
      DISTRIBUTION=LanServer(Lp1 1/10 Lp2 4 Lp3 2)
      NON_SHARED_DISTRIBUTION=LanServer(Lp1 10)
      End Feature
      

      In this example, 10 non-shared licenses are defined for the Lp1 project on LanServer. The DISTRIBUTION share ratio for Lp1:Lp2:Lp3 is 1:4:2. If there are 15 licenses, Lp1 will normally get two licenses, which is smaller than its NON_SHARED_DISTRIBUTION value of 10. Therefore, Lp1 gets the first 10 licenses. The remaining licenses are given to Lp2 and Lp3 to a ratio of 4:2, so Lp2 gets three licenses and Lp3 gets two licenses.

    • For two projects with non-shared licenses and one with no non-shared licenses, the one project with no non-shared licenses is given the remaining licenses after the two projects are given their non-shared licenses:
      Begin Feature
      NAME=f1 # total 15 on LanServer
      LM_LICENSE_NAME=VCS-RUNTIME
      DISTRIBUTION=LanServer(Lp1 1/10 Lp2 4 Lp3 2/5)
      NON_SHARED_DISTRIBUTION=LanServer(Lp1 10 Lp3 5)
      End Feature
      

      In this example, 10 non-shared licenses are defined for the Lp1 project and five non-shared license are defined for the Lp3 project on LanServer. The DISTRIBUTION share ratio for Lp1:Lp2:Lp3 is 1:4:2. If there are 15 licenses, Lp1 will normally get two licenses and Lp3 will normally get four licenses, which are both smaller than their corresponding NON_SHARED_DISTRIBUTION values. Therefore, Lp1 gets 10 licenses and Lp3 gets five licenses. Lp2 gets no licenses even though it normally has the largest share because Lp1 and Lp3 have non-shared licenses.

PEAK_INUSE_PERIOD

Syntax

PEAK_INUSE_PERIOD=seconds | cluster seconds ...

Description

Defines the interval over which a peak INUSE value is determined for dynamic license allocation in cluster mode for this license feature and service domain.

Use the keyword default to set the interval for all clusters not explicitly specified, and the keyword interactive (in place of a cluster name) to set the interval for taskman jobs. For example:

PEAK_INUSE_PERIOD = cluster1 1000 cluster2 700 default 300

When defining the interval for LSF Advanced Edition submission clusters, the interval is determined for the entire LSF Advanced Edition mega-cluster (the submission cluster and its execution clusters).

Used for cluster mode only.

When defined in both the Parameters section and the Feature section, the Feature section definition is used for that license feature.

Default

300 seconds

PREEMPT_ORDER

Syntax

PREEMPT_ORDER=BY_OWNERSHIP

Description

Sets the preemption order based on configured OWNERSHIP.

Used for project mode only.

Default

Not defined.

PREEMPT_RESERVE

Syntax

PREEMPT_RESERVE=Y | N

Description

If PREEMPT_RESERVE=Y, License Scheduler can preempt licenses that are either reserved or already in use by other projects. The number of jobs must be greater than the number of licenses owned.

If PREEMPT_RESERVE=N, License Scheduler does not preempt reserved licenses.

Used for project mode only.

Default

Y. Reserved licenses are preemptable.
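
A sketch combining the two preemption parameters (the feature, service domain, and project names here are hypothetical):

Begin Feature
NAME = f1
DISTRIBUTION = LanServer(p1 1/5 p2 1/10)
PREEMPT_ORDER = BY_OWNERSHIP
PREEMPT_RESERVE = N
End Feature

With this configuration, preemption candidates are ordered by their configured OWNERSHIP values, and reserved licenses are not preempted.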

RETENTION_FACTOR

Syntax

RETENTION_FACTOR=integer%

Description

Ensures that when tokens are reclaimed from an overfed cluster, the overfed cluster still gets to dispatch additional jobs, but at a reduced rate. Specify the retention factor as a percentage of tokens to be retained by the overfed cluster.

For example:

Begin Feature
NAME = f1
CLUSTER_MODE = Y
CLUSTER_DISTRIBUTION = LanServer(LAN1 1 LAN2 1)
ALLOC_BUFFER = 20
RETENTION_FACTOR = 25%
End Feature

With RETENTION_FACTOR set, as jobs finish in the overfed cluster and free up tokens, at least 25% of the tokens can be reused by the cluster to dispatch additional jobs. Tokens not held by the cluster are redistributed to other clusters. In general, a higher value means that the process of reclaiming tokens from an overfed cluster takes longer, and an overfed cluster gets to dispatch more jobs while tokens are being reclaimed from it.

When the entire LSF Advanced Edition mega-cluster (the submission cluster and its execution clusters) is overfed, the number of retained tokens is from the entire LSF Advanced Edition mega-cluster.

Used for cluster mode only.

Default

Not defined

SERVICE_DOMAINS

Syntax

SERVICE_DOMAINS=service_domain_name ...

service_domain_name

Specify the name of the service domain.

Description

Required if GROUP_DISTRIBUTION is defined. Specifies the service domains that provide tokens for this feature.

Only a single service domain can be specified when using cluster mode or fast dispatch project mode.

WORKLOAD_DISTRIBUTION

Syntax

WORKLOAD_DISTRIBUTION=[service_domain_name(LSF lsf_distribution NON_LSF non_lsf_distribution)] ...

service_domain_name

Specify a License Scheduler service domain (described in the ServiceDomain section) that distributes the licenses.

lsf_distribution

Specify the share of licenses dedicated to LSF workloads. The share of licenses dedicated to LSF workloads is a ratio of lsf_distribution:non_lsf_distribution.

non_lsf_distribution

Specify the share of licenses dedicated to non-LSF workloads. The share of licenses dedicated to non-LSF workloads is a ratio of non_lsf_distribution:lsf_distribution.

Description

Defines the distribution given to each LSF and non-LSF workload within the specified service domain.

When running in cluster mode, WORKLOAD_DISTRIBUTION can only be specified for WAN service domains; if defined for a LAN feature, it is ignored.

Use blinfo -a to display WORKLOAD_DISTRIBUTION configuration.

Example

Begin Feature
NAME=ApplicationX
DISTRIBUTION=LicenseServer1(Lp1 1 Lp2 2)
WORKLOAD_DISTRIBUTION=LicenseServer1(LSF 8 NON_LSF 2) 
End Feature

On the LicenseServer1 domain, the available licenses are dedicated in a ratio of 8:2 for LSF and non-LSF workloads. This means that 80% of the available licenses are dedicated to the LSF workload, and 20% of the available licenses are dedicated to the non-LSF workload.

If LicenseServer1 has a total of 80 licenses, this configuration indicates that 64 licenses are dedicated to the LSF workload, and 16 licenses are dedicated to the non-LSF workload.

FeatureGroup section

Description

Collects license features into groups. Put FeatureGroup sections after Feature sections in lsf.licensescheduler.

The FeatureGroup section is supported in both project mode and cluster mode.

FeatureGroup section structure

The FeatureGroup section begins and ends with the lines Begin FeatureGroup and End FeatureGroup. Feature group definition consists of a unique name and a list of features contained in the feature group.

Example

Begin FeatureGroup
NAME = Synopsys
FEATURE_LIST = ASTRO VCS_Runtime_Net Hsim Hspice
End FeatureGroup
Begin FeatureGroup
NAME = Cadence
FEATURE_LIST = Encounter NCSim  NCVerilog
End FeatureGroup

Parameters

  • NAME
  • FEATURE_LIST

NAME

Syntax

NAME=feature_group_name

Required. Defines the name of the feature group. The name must be unique.

FEATURE_LIST

Required. Lists the license features contained in the feature group. The feature names in FEATURE_LIST must already be defined in Feature sections. Feature names cannot be repeated in the FEATURE_LIST of one feature group. The FEATURE_LIST cannot be empty. Different feature groups can have the same features in their FEATURE_LIST.

ProjectGroup section

Description

Defines the hierarchical relationships of projects.

Used for project mode only. When running in cluster mode, any ProjectGroup sections are ignored.

The hierarchical groups can have multiple levels of grouping. You can configure a tree-like scheduling policy, with the leaves being the license projects that jobs can belong to. Each project group in the tree has a set of values, including shares, limits, ownership, and non-shared (exclusive) licenses.

Use blstat -G to view the hierarchical dynamic license information.

Use blinfo -G to view the hierarchical configuration.

ProjectGroup section structure

Define a section for each hierarchical group managed by License Scheduler.

The keywords GROUP, SHARES, OWNERSHIP, LIMITS, and NON_SHARED are required. The keywords PRIORITY and DESCRIPTION are optional. Empty brackets are allowed only for OWNERSHIP, LIMITS, and PRIORITY. SHARES must be specified.

Begin          ProjectGroup
GROUP          SHARES        OWNERSHIP LIMITS  NON_SHARED PRIORITY
(root(A B C))  (1 1 1)       ()        ()         ()      (3 2 -)
(A (P1 D))     (1 1)         ()        ()         ()      (3 5)
(B (P4 P5))    (1 1)         ()        ()         ()      ()
(C (P6 P7 P8)) (1 1 1)       ()        ()         ()      (8 3 0)
(D (P2 P3))    (1 1)         ()        ()         ()      (2 1)
End ProjectGroup

If desired, ProjectGroup sections can be completely independent, without any overlapping projects.

Begin ProjectGroup
GROUP               SHARES  OWNERSHIP LIMITS  NON_SHARED
(digital_sim (sim sim_reg)) (40 60)  (100 0)  ()  ()
End ProjectGroup

Begin ProjectGroup
GROUP               SHARES  OWNERSHIP LIMITS  NON_SHARED
(analog_sim (app1 multitoken app1_reg)) (50 10 40)  (65 25 0) (- 50 -) ()
End ProjectGroup

Parameters

  • DESCRIPTION
  • GROUP
  • LIMITS
  • NON_SHARED
  • OWNERSHIP
  • PRIORITY
  • SHARES

DESCRIPTION

Description of the project group.

The text can include any characters, including white space. The text can be extended to multiple lines by ending the preceding line with a backslash (\). The maximum length for the text is 64 characters. When the DESCRIPTION column is not empty, it should contain one entry for each project group member.

For example:

GROUP       SHARES OWNERSHIP   LIMITS NON_SHARED  DESCRIPTION
(R (A B))   (1 1)  ()          ()     (10 10)     () 
(A (p1 p2)) (1 1)  (40 60)     ()     ()          ("p1 desc." "")
(B (p3 p4)) (1 1)  ()          ()     ()          ("p3 desc." "p4 desc.")

Use blinfo -G to view hierarchical project group descriptions.

GROUP

Defines the project names in the hierarchical grouping and its relationships. Each entry specifies the name of the hierarchical group and its members.

For better readability, specify the projects in the order from the root to the leaves.

Specify the entry as follows:

(group (member ...))

LIMITS

Defines the maximum number of licenses that can be used at any one time by the hierarchical group member projects. Specify the maximum number of licenses for each member, separated by spaces, in the same order as listed in the GROUP column.

A dash (-) is equivalent to INFINIT_INT, which means there is no maximum limit and the project group can use as many licenses as possible.

You can leave the parentheses empty () if desired.
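As an illustration, the following entry (with hypothetical group and project names) caps project p1 at 50 licenses while leaving p2 with no maximum limit:

GROUP        SHARES  OWNERSHIP  LIMITS  NON_SHARED
(G1 (p1 p2)) (1 1)   ()         (50 -)  ()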

NON_SHARED

Defines the number of licenses that the hierarchical group member projects use exclusively. Specify the number of licenses for each group or project, separated by spaces, in the same order as listed in the GROUP column.

A dash (-) is equivalent to a zero, which means there are no licenses that the hierarchical group member projects use exclusively.

For hierarchical project groups in fast dispatch project mode, LSF License Scheduler ignores the NON_SHARED value configured for project groups, and only uses the NON_SHARED value for the child projects. The project group's NON_SHARED value is the sum of the NON_SHARED values of its child projects.

Normally, the total number of non-shared licenses should be less than the total number of license tokens available. License tokens may not be available to project groups if the total non-shared licenses for all groups is greater than the number of shared tokens available.

For example, feature p4_4 is configured as follows, with a total of 4 tokens:
Begin Feature
NAME=p4_4    # total token value is 4
GROUP_DISTRIBUTION=final
SERVICE_DOMAINS=LanServer
End Feature
The correct configuration is:
GROUP           SHARES    OWNERSHIP   LIMITS      NON_SHARED 
(final (G2 G1)) (1 1)     ()          ()          (2 0) 
(G1 (AP2 AP1))  (1 1)     ()          ()          (1 1)

Valid values

Any positive integer up to the LIMITS value defined for the specified hierarchical group.

If defined as greater than LIMITS, NON_SHARED is set to LIMITS.

OWNERSHIP

Defines the level of ownership of the hierarchical group member projects. Specify the ownership for each member, separated by spaces, in the same order as listed in the GROUP column.

You can only define OWNERSHIP for hierarchical group member projects, not hierarchical groups. Do not define OWNERSHIP for the top level (root) project group. Ownership of a given internal node is the sum of the ownership of all child nodes it directly governs.

A dash (-) is equivalent to a zero, which means there are no owners of the projects. You can leave the parentheses empty () if desired.

Valid values

A positive integer between the NON_SHARED and LIMITS values defined for the specified hierarchical group.

  • If defined as less than NON_SHARED, OWNERSHIP is set to NON_SHARED.
  • If defined as greater than LIMITS, OWNERSHIP is set to LIMITS.
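For example, assuming hypothetical member projects p1 and p2 under a group G1, the following entry gives p1 an ownership of 40 tokens and p2 an ownership of 60, with no limits and no non-shared licenses:

GROUP        SHARES  OWNERSHIP  LIMITS  NON_SHARED
(G1 (p1 p2)) (1 1)   (40 60)    ()      ()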

PRIORITY

Defines the priority assigned to the hierarchical group member projects. Specify the priority for each member, separated by spaces, in the same order as listed in the GROUP column.

“0” is the lowest priority, and a higher number specifies a higher priority. This column overrides the default behavior. Instead of preempting based on the accumulated inuse usage of each project, the projects are preempted according to the specified priority from lowest to highest.

By default, priorities are evaluated top down in the project group hierarchy. The priority of a given node is first decided by the priority of the parent groups. When two nodes have the same priority, priority is determined by the accumulated inuse usage of each project at the time the priorities are evaluated. Specify LS_PREEMPT_PEER=Y in the Parameters section to enable bottom-up license token preemption in hierarchical project group configuration.

A dash (-) is equivalent to a zero, which means there is no priority for the project. You can leave the parentheses empty () if desired.

Use blinfo -G to view hierarchical project group priority information.
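For example, assuming a hypothetical group G1 with member projects p1, p2, and p3, the following entry makes p3 the first preemption candidate (a dash is equivalent to priority 0, the lowest) and p1 the last:

GROUP            SHARES   OWNERSHIP  LIMITS  NON_SHARED  PRIORITY
(G1 (p1 p2 p3))  (1 1 1)  ()         ()      ()          (3 2 -)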

Priority of default project

If not explicitly configured, the default project has a priority of 0. You can override this value by explicitly configuring the default project in the Projects section with the chosen priority value.

SHARES

Required. Defines the shares assigned to the hierarchical group member projects. Specify the share for each member, separated by spaces, in the same order as listed in the GROUP column.
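Shares are relative values. For example, in the following entry (hypothetical names), project p1 is entitled to twice as many tokens as p2 when both projects have demand:

GROUP        SHARES  OWNERSHIP  LIMITS  NON_SHARED
(G1 (p1 p2)) (2 1)   ()         ()      ()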

Projects section

Description

Required for project mode only. Ignored in cluster mode. Lists the License Scheduler projects.

Projects section structure

The Projects section begins and ends with the lines Begin Projects and End Projects. The second line consists of the required column heading PROJECTS and the optional column heading PRIORITY. Subsequent lines list participating projects, one name per line.

Examples

The following example lists the projects without defining the priority:
Begin Projects 
PROJECTS 
Lp1 
Lp2 
Lp3 
Lp4 
... 
End Projects

The following example lists the projects and defines the priority of each project:

Begin Projects 
PROJECTS         PRIORITY 
Lp1              3 
Lp2              4 
Lp3              2 
Lp4              1 
default          0
... 
End Projects

Parameters

  • DESCRIPTION
  • PRIORITY
  • PROJECTS

DESCRIPTION

Description of the project.

The text can include any characters, including white space. The text can be extended to multiple lines by ending the preceding line with a backslash (\). The maximum length for the text is 64 characters.

Use blinfo -Lp to view the project description.

PRIORITY

Defines the priority for each project, where “0” is the lowest priority and a higher number specifies a higher priority. This column overrides the default behavior. Instead of preempting projects in the order they are listed under PROJECTS based on the accumulated inuse usage of each project, the projects are preempted according to the specified priority, from lowest to highest.

Used for project mode only.

When two projects are configured with the same priority number, the project listed first has the higher priority, as with LSF queues.

Use blinfo -Lp to view project priority information.

Priority of default project

If not explicitly configured, the default project has a priority of 0. You can override this value by explicitly configuring the default project in the Projects section with the chosen priority value.

PROJECTS

Defines the name of each participating project. Specify using one name per line.

Automatic time-based configuration

Variable configuration is used to automatically change License Scheduler license token distribution policy configuration based on time windows. You define automatic configuration changes in lsf.licensescheduler by using if-else constructs and time expressions in the Feature section. After you change the file, check the configuration with the bladmin ckconfig command, and restart License Scheduler in the cluster with the bladmin reconfig command.

Used for both project mode and cluster mode.

The expressions are evaluated by License Scheduler every 10 minutes based on the bld start time. When an expression evaluates true, License Scheduler dynamically changes the configuration based on the associated configuration statements, restarting bld automatically.

When LSF determines a feature has been added, removed, or changed, mbatchd no longer restarts automatically. Instead a message indicates that a change has been detected, prompting the user to restart manually with badmin mbdrestart.

This affects automatic time-based configuration in the Feature section of lsf.licensescheduler. When mbatchd detects a change in the Feature configuration, you must restart mbatchd for the change to take effect.

Example

Begin Feature
NAME = f1 
#if time(5:16:30-1:8:30 20:00-8:30)
DISTRIBUTION=Lan(P1 2/5  P2 1)
#elif time(3:8:30-3:18:30)
DISTRIBUTION=Lan(P3 1)
#else
DISTRIBUTION=Lan(P1 1 P2 2/5)
#endif
End Feature