IBM Spectrum LSF Suite for Enterprise or Enterprise Plus installation overview
Learn about the IBM Spectrum LSF Suite for Enterprise and Enterprise Plus installation process. The installation does the following tasks:
- Defines the roles of each server in the cluster
- Creates an installation repository
- Validates that the servers in the cluster meet the installation requirements
- Deploys the appropriate software stack on each server in the cluster
- Builds the initial configuration files for the cluster
To install, complete the following steps:
- Make sure that your hosts satisfy the host, operating system, network configuration, hardware, and other prerequisites that are described in Installation requirements.
- Download and extract the installation packages.
- Change directory to /opt/ibm/lsf_installer/playbook
- Edit the lsf-inventory file to list the hosts in the cluster and their roles.
- Edit the lsf-config.yml file to specify a cluster name, a shared directory, a JDBC connection string, and other cluster properties.
- Test the configuration and host access by running the commands ansible-playbook -i lsf-inventory lsf-config-test.yml and ansible-playbook -i lsf-inventory lsf-predeploy-test.yml.
- Run the command ansible-playbook -i lsf-inventory lsf-deploy.yml.

Packages for IBM Spectrum LSF Suite for Enterprise and Enterprise Plus
- Packages the RPM files, Ansible playbooks, license, and any other needed files into a single file for easy download.
- After you accept the IBM Spectrum LSF Suite for Enterprise license, the .bin file does the following:
- Extracts the Ansible playbooks
- Extracts the initial Ansible inventory file, and cluster configuration file
- Tests for Apache, and installs it from the operating system repository if necessary
- Extracts the RPM files into an Apache-hosted directory
- Tests for the createrepo command, and installs it from the operating system repository if necessary
- Runs the createrepo command in the directory that contains the RPM files
- Displays information about each step of the installation
- 64-bit x86 Linux systems
  - lsfsent10.2.0.0-x86_64.bin
  - lsfsentplus10.2.0.0-x86_64.bin
- IBM Power Linux (Little Endian) systems
  - lsfsent10.2.0.0-ppc64le.bin
  - lsfsentplus10.2.0.0-ppc64le.bin
For mixed IBM Power and x86 environments, download both packages.
Check host prerequisites
- Set up the OS repository for the hosts to retrieve OS packages from. During the installation, some packages that are provided in the OS media must be installed. If your machines are registered with Red Hat Network, the OS repository is already set up. If not, you must prepare the OS repository manually to install the software prerequisites.
- Configure password-less SSH. The host where you downloaded the .bin file becomes the deployment host. Set up password-less SSH for root from this host to all other hosts in the cluster.
- Decide on local disk or shared directory installation.
For shared directory installation, use NFS or IBM Spectrum Scale to mount the shared directory.
High-availability configuration applies to both local and shared directory installations.
- Use the HA_shared_dir option in the lsf-config.yml file for local installation.
- Use the NFS_install_dir option in the lsf-config.yml file for shared directory installation.
You can install with no shared directory at all (HA_shared_dir: none and NFS_install_dir: none).
- (Optional) Prepare external database. Use an external database for greater reliability. The SQL code that is needed to initialize the database is in the /opt/ibm/lsf_installer/DBschema directory on the deployment host.
- (Optional) Mount a reliable shared directory on all hosts in the cluster. Do not mount this directory under any directory in /opt/ibm. The shared directory can be used later in the installation for High Availability for the LSF cluster and for holding the binary files and configuration for the servers and clients.
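As a sketch of the password-less SSH setup described above (the key path and host names are examples, not LSF requirements):

```shell
# Generate a password-less SSH key on the deployment host.
# The key path is an example; adjust as needed.
rm -f /tmp/lsf_deploy_key /tmp/lsf_deploy_key.pub
ssh-keygen -t ed25519 -N "" -f /tmp/lsf_deploy_key

# Copy the public key to every other host in the cluster, for example:
#   for h in host1 host2 host3; do
#       ssh-copy-id -i /tmp/lsf_deploy_key.pub root@"$h"
#   done

# Verify non-interactive root access before you run the playbooks, for example:
#   ssh -o BatchMode=yes root@host1 true && echo "password-less SSH OK"
```

In practice, you would generate the key as root and keep it in /root/.ssh so that the Ansible playbooks can use it without prompting.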
Run the downloaded .bin packages
Copy the downloaded package to the deployment host. The deployment host must be able to log in with password-less SSH to all hosts in the cluster. Run the .bin file on your deployment host.
# ./lsfsent10.2.0.0-ppc64le.bin
# ./lsfsent10.2.0.0-x86_64.bin
If you want to deploy on both IBM Power hosts and x86 hosts, run both files.
- Deposits the RPMs in the /var/www/html/lsf_suite_pkgs/<architecture> repository directory on the deployment host, where <architecture> is x86_64 or ppc64le.
- Sets up the host as the cluster deployment host and creates the package repository for IBM Spectrum LSF Suite for Enterprise or Enterprise Plus RPMs.
- Sets up the deployment host as a simple web server with the default settings (port number 80 and document root /var/www/html).
- Extracts Ansible playbooks and puts them in the /opt/ibm/lsf_installer directory.
Decide host roles for your cluster
- How many LSF management hosts? Two or three hosts are recommended.
- Where to run the database? Within the cluster? On an external host?
- How many GUI hosts? Typically GUI hosts are co-located with an LSF candidate.
- (Starting in Fix Pack 15) How many LSF Web Services hosts? Typically, LSF Web Services hosts are co-located with an LSF candidate.
- Enable system monitoring?
- Where is the shared directory for HA?
Configure these decisions in the inventory file for the installer (lsf-inventory).
- lsf-inventory file
The role of each host is controlled by the lsf-inventory file.
A single host can have multiple roles. The LSF_Masters role is a superset of the LSF_Servers role, which is a superset of the LSF_Clients role, so list a host only once in one of these roles.
Edit the lsf-inventory file, and set the roles of each host in the cluster.
- lsf-config.yml file
By default, the lsf-config.yml file defines the following cluster properties:
- Name of cluster (my_cluster_name). Set this name once, and do not change it after you install the cluster.
- System monitoring (Enable_Monitoring, the default is True).
- (Optional) Shared directory for High Availability (HA_shared_dir). Can be
configured after installation.
If you set a directory name, the installation copies the configuration files and work directory contents to the specified directory, and updates the configuration to point to it. Set the HA_shared_dir option to none if you do not use an HA shared directory, or if you use an NFS-shared directory that is defined with the NFS_install_dir option. The NFS_install_dir option specifies a shared directory that contains the LSF server and client binary files, man pages, and configuration files. When you define NFS_install_dir, the directory is also used for HA, so set the HA_shared_dir option to none. For more information, see Installing IBM Spectrum LSF Suite for Enterprise on a shared file system.
- (Optional) External database connection information (JDBC_string). The JDBC_string option is the connection string for an optional external database. If a host has the DB_Host role in the lsf-inventory file, the value of JDBC_string is constructed internally from the value of the DB_Host option. If you do not specify a host in the DB_Host role, you must define a JDBC_string.
- (Optional) Secondary LSF administrator users (Secondary_LSF_ADMINS). LSF administrators have permission to change the LSF configuration and to control batch jobs that are submitted by other users. Secondary administrators typically do not have permission to start LSF daemons; usually, only root has that permission. All secondary LSF administrator accounts must exist on all hosts in the cluster before you install LSF.
- HTTP mode for LSF Web Services (HTTP_MODE). Can be configured after installation.
- HTTP port for LSF Web Services (HTTP_PORT).
- HTTPS port for LSF Web Services (HTTPS_PORT).
- Hosts that will be configured in the SSL certificate if HTTP mode is configured as HTTPS (SSL_VALID_HOSTS).
- LSF Web Services cluster name when high availability is enabled (LWS_CLUSTER_NAME).
- LSF Web Services shared configuration directory (SHARED_CONFIGURATION_DIR). This option specifies a shared directory that contains the LSF Web Services configuration files and runtime files. It is required only when LSF Web Services high availability is enabled and the /opt/ibm/lsfsuite directory is not a shared directory.
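As an illustration, a minimal lsf-config.yml for a shared-directory installation might look like the following. The option names are the ones described above; all values are examples only:

```yaml
# Example values only -- substitute your own cluster name and paths.
my_cluster_name: cluster1        # set once; do not change after installation
Enable_Monitoring: true          # system monitoring (the default)
NFS_install_dir: /nfs/lsfsuite   # shared binaries, man pages, and configuration
HA_shared_dir: none              # NFS_install_dir also serves as the HA directory
# JDBC_string:                   # define only if no host has the DB_Host role
```

Because NFS_install_dir is defined, that directory is also used for high availability, so HA_shared_dir is set to none, as described above.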
Server host roles
Servers can have several roles in small, medium, and large clusters. Server host roles control the functions that are provided by a host in the cluster.
- LSF_Masters
- A list of the LSF host and LSF candidate hosts, one host or host name expression per line. The LSF host runs the processes that are needed by LSF to manage the workload on the cluster. You can configure multiple host candidates for fault tolerance and automatic host failover. For more information, see Fault tolerance and automatic host failover.
- LSF_Servers
- A list of LSF server hosts that run the workload in the cluster, one host or host name expression per line. These servers are the execution hosts for LSF jobs. They do not need access to a shared directory for the LSF working and configuration directories. You can use an expression to represent a number of hosts. For example, host[1:100] configures host1, host2, host3, ... host100, and host[a:f] is equivalent to listing hosta, hostb, hostc, hostd, hoste, and hostf.
- (Optional) LSF_Clients
- A list of hosts that are used only for job submission, one host or host name expression per line. These hosts are also referred to as login nodes. They cannot run work. Users log in to LSF client hosts to submit work to LSF servers and to run commands that interact with LSF. For more information about LSF server and client hosts, see Hosts.
- GUI_Hosts
- A list of GUI hosts, one host per line, that run the IBM Spectrum LSF Suite for Enterprise or Enterprise Plus portal GUI and supporting services for users to submit and monitor their workload. At least one host must be a GUI server. If neither HA_shared_dir nor NFS_install_dir is specified, the GUI host must be the LSF_Masters host. Use a public (external) host name if the host has multiple NICs. Ensure that the host can be reached by the ping command by both its public IP address and the host name that is reported by the hostname command.
- LSF_WebService
- A list of LSF Web Services hosts, one host per line, that run the IBM Spectrum LSF Suite for Enterprise or Enterprise Plus web services for users to submit and monitor their workload. At least one host must be an LSF Web Services host. The LSF Web Services host must be set to the LSF_Masters host or LSF_Servers host. Use a public (external) host name if the host has multiple NICs. Ensure that the host can be reached by the ping command by both its public IP address and the host name that is reported by the hostname command.
- (Optional) DB_Host
- The database host installs and runs the IBM Spectrum LSF Suite for Enterprise or Enterprise Plus database. By default, MariaDB is installed. An Ubuntu host cannot be defined as a database host. To configure an Ubuntu host as a database host role, define a JDBC_string parameter in the lsf-config.yml file after MariaDB is installed and initialized with SQL files in the /opt/ibm/lsf_installer/DBschema/MySQL/ directory. For more information, see Installing IBM Spectrum LSF Suite for Enterprise or Enterprise Plus on Ubuntu hosts.
- Deployer
- The deployer host houses the IBM Spectrum LSF Suite for Enterprise or Enterprise Plus installation repository that contains the RPM packages and the Ansible playbooks for deploying the cluster. The deployment host does not have to be part of the cluster.
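Assuming the standard Ansible INI inventory format, a small cluster that uses the roles above might be described like this. The group names are the roles from this section; the host names are placeholders:

```ini
# Example only -- replace the host names with your own.
# List each host in only one of LSF_Masters, LSF_Servers, or LSF_Clients.
[LSF_Masters]
master1
master2

[LSF_Servers]
server[1:100]

[LSF_Clients]
login1

[GUI_Hosts]
master2

[LSF_WebService]
master2

[DB_Host]
master2
```

Here master2 carries several roles (candidate host, GUI, LSF Web Services, and database), which is the kind of co-location the planning questions above suggest.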
Example servers and role configurations

In this case, everything (deployment, LSF host, GUI server host, LSF Web Services host, database host) is installed on one host. This single-host installation is not a production configuration, but it is suitable to test IBM Spectrum LSF Suite for Enterprise or Enterprise Plus.

This cluster is configured with a host that also serves as the LSF Suite deployment host, and a second host that serves as a failover candidate host, GUI server, LSF Web Services host, and database host. A shared directory that contains the LSF work and configuration directories is mounted on both hosts. One hundred LSF servers are configured to run the workload.

This installation configures three candidate hosts. The deployment host is one of the candidates. Two GUI hosts and two LSF Web Services hosts are configured on the other two candidates. A shared directory is configured for failover with data that is replicated between instances. One thousand LSF servers are configured to run the jobs.

This cluster installs three candidate hosts, several GUI server hosts, and several LSF Web Services hosts that are separate from the LSF hosts. A separate database host is used (the database host is external to the cluster for high availability). A separate deployment host is configured, along with several LSF server hosts and LSF client hosts.
Check the configuration and installation environment
- Check the configuration file, and correct any errors:
ansible-playbook -i lsf-inventory lsf-config-test.yml
- Run the pre-deployment test:
ansible-playbook -i lsf-inventory lsf-predeploy-test.yml
This test runs on each host to check network connectivity, host name resolution, minimum disk space, and available memory. The test takes a few minutes to run.