Resource management

The following new features affect resource management and allocation.

What's new in resource connector for IBM Spectrum LSF

Extended AWS support

This feature extends the LSF resource connector AWS template so that you can specify an Amazon EBS-Optimized instance. The AWS template also supports the LSF exclusive resource syntax (!resource) in the instance attributes. LSF considers demand on the template only if a job explicitly asks for the resource in its combined resource requirement.
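The following is a minimal sketch of a template entry in the awsprov_templates.json file. The template ID, AMI, subnet, key name, and the resource name awsexcl are hypothetical placeholders, and the exact encoding of the ebsOptimized field and of the exclusive resource ("!1") is an assumption of this sketch rather than a definitive reference:

{
  "templateId": "aws-ebs-template",
  "maxNumber": 10,
  "imageId": "ami-0123456789abcdef0",
  "vmType": "m4.xlarge",
  "subnetId": "subnet-0123456789abcdef0",
  "keyName": "lsf-key",
  "ebsOptimized": true,
  "attributes": {
    "type": ["String", "X86_64"],
    "ncpus": ["Numeric", "4"],
    "awsexcl": ["Boolean", "!1"]
  }
}

Because awsexcl is declared as exclusive, a template such as this one generates demand only for jobs that explicitly request the resource, for example:

bsub -R "awsexcl" ./myjob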

Launch Google Compute Cloud instances

LSF clusters can launch instances from Google Compute Cloud to satisfy pending workload. The instances join the LSF cluster. If instances become idle, LSF resource connector automatically deletes them. Configure Google Compute Cloud as a resource provider with the googleprov_config.json and googleprov_templates.json files.
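The following is a minimal sketch of the googleprov_config.json file. The project ID, region, and credential file path are placeholders, and the exact key names are assumptions of this sketch rather than a definitive reference:

{
  "GCLOUD_PROJECT_ID": "my-gcp-project",
  "GCLOUD_REGION": "us-central1",
  "GCLOUD_CREDENTIAL_FILE": "/shared/lsf/conf/gcloud-credential.json",
  "LogLevel": "INFO"
}

The instance definitions themselves (machine type, image, and the LSF attributes that each instance reports) go in the googleprov_templates.json file, in the same way as templates for other resource providers.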

The bhosts -rc and bhosts -rconly commands show additional information about provider hosts

Use the bhosts -rc and bhosts -rconly commands to see information about resources that are provisioned by LSF resource connector.

The -rc and -rconly options use the third-party mosquitto message queue application to support the additional information that these bhosts options display. The mosquitto binary file is included as part of the LSF distribution. To use the mosquitto daemon that is supplied with LSF, configure the LSF_MQ_BROKER_HOSTS parameter in the lsf.conf file so that LIM starts the mosquitto daemon and ebrokerd sends resource provider information to the MQTT message broker.
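For example, a minimal sketch of the lsf.conf setting, where mqhost1 is a hypothetical broker host in the cluster:

LSF_MQ_BROKER_HOSTS=mqhost1

After the configuration is reloaded and the broker is running, query the provider hosts:

bhosts -rc
bhosts -rconly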

What's new in data manager for IBM Spectrum LSF

Enhanced LSF multicluster job forwarding

This feature enhances the LSF data manager implementation for hybrid cloud environments that use job forwarding with the IBM Spectrum LSF multicluster capability (LSF multicluster capability). In this implementation, the cluster that runs in the public cloud is the execution cluster. This feature enables the submission cluster to push a forwarded job's data requirement to the execution cluster and to receive the output back from the forwarded job. To enable this feature, specify the SNDJOBS_TO parameter for the data transfer queue in the execution cluster's lsb.queues file, and specify the RCVJOBS_FROM parameter in the submission cluster's lsb.queues file. The path of the FILE_TRANSFER_CMD parameter in the lsf.datamanager file for the data manager host must exist in the submission cluster.
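The following is a minimal sketch of the two lsb.queues entries. The queue names (data_transfer, xfer_q) and cluster names (subcluster, execcluster) are hypothetical, and DATA_TRANSFER = Y is the lsb.queues parameter that marks a queue as a data transfer queue:

# lsb.queues in the execution cluster
Begin Queue
QUEUE_NAME    = data_transfer
DATA_TRANSFER = Y
SNDJOBS_TO    = xfer_q@subcluster    # hypothetical queue and cluster names
End Queue

# lsb.queues in the submission cluster
Begin Queue
QUEUE_NAME    = xfer_q
RCVJOBS_FROM  = execcluster          # hypothetical execution cluster name
End Queue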

Specify a folder as the data requirement

When you specify a folder as a data requirement for a job, LSF generates a single signature for the folder as a whole, so only a single transfer job is required. You can now also use symbolically linked files in a job data requirement, and you can use the colon character (:) in the path of a job data requirement.

When you submit a job with a data requirement, a data requirement that ends in a slash and an asterisk (/*) is interpreted as a folder. Only files at the top level of the folder are staged. For example,
bsub -data "[host_name:]abs_folder_path/*" job

When you use the asterisk character (*) at the end of the path, the data requirements string must be in quotation marks.

A data requirement that ends in a slash (/) is also interpreted as a folder, but all files, including files in subfolders, are staged. For example,
bsub -data "[host_name:]abs_folder_path/" job
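For example, assuming a hypothetical folder /proj/inputs on host hostA, the first command stages only the top-level files, and the second stages the entire folder, including subfolders:

bsub -data "hostA:/proj/inputs/*" myjob
bsub -data "hostA:/proj/inputs/" myjob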

To specify a folder as a data requirement for a job, you must have access to the folder and its contents. You must have read and execute permission on folders, and read permission on regular files. If you do not have access to the folder, the job submission is rejected.