IBM Cluster Systems Management: An installation guide

A homogeneous environment

Managing a large set of machines in an IT organization, across various activities, is difficult. The IBM® Cluster Systems Management (CSM) tool simplifies this process. Follow along as IBM Linux™ Architect Harish Chauhan provides step-by-step instructions on how to install the CSM tool.


Harish Chauhan, Linux Architect, IBM

Harish has been with IBM since 1998 and has 14 years of experience. During his last eight years with IBM, he has spent five years at the India Research Lab and one year at the IBM T.J. Watson Research Center. Harish has been leading the Linux Center of Competency in Bangalore, India for the past year and a half. You can contact him at

20 May 2005


With CSM, a system administrator can manage a large cluster of machines with ease. Beginners who have little experience setting up a CSM environment can use this article as a guide. The main example assumes a homogeneous environment, meaning that all the machines in the cluster have the same operating system installed.

Hardware, software, and setup

Use the following hardware and software for your setup:

  • xSeries® server model: x345 (8670-IQS) with dual processors
  • IBM CSM Version
  • Red Hat Enterprise Linux (RHEL) Advanced Server Version 3.0 with update 3 or 4 / SUSE Linux Enterprise Server for POWER (SLES) 9.1
  • i386 architecture
Figure 1. Setup diagram
Setup Diagram

Follow these steps to properly install the CSM tool:

  1. Follow the installation and configuration information for your hardware and software.
  2. Set up the network connectivity among the machines, as shown in Figure 1.
  3. Set up the CSM master:
    1. Identify and designate the CSM master machine. In your setup, choose as your CSM master.
    2. Update the /etc/hosts file with the hostname/IP details:
         my	# IP of master
         node1	# IP of 1st node
         node2	# IP of 2nd node
    3. Insert the CSM CD in the DVD/CDROM drive.
      1. Extract the CSM code into the /dump/csm_dump directory.
        #tar xvf /dump/csm_dump/csm-linux-
      2. Copy the Red Hat CDs into the /dump/prereqs directory, as some Red Hat RPMs are required prerequisites for the CSM master.
      3. Download the autoupdate RPMs from
    4. Install the CSM core file.
      #rpm -ivh /dump/csm_dump/csm.core-

      This copies the necessary installation scripts in the /opt/csm/bin folder.

    5. Run installms, where /dump/prereqs/RedHat/RPMS contains the Red Hat RPMs and /dump/csm_dump contains the CSM code.
      #cd /opt/csm/bin
      #./installms -p /dump/csm_dump:/dump/prereqs/RedHat/RPMS


    6. Before you can use this master, you must activate either the "Try and Buy" or the "Permanent license" option.

      Note: Before activation, the output shows "MaxNumNodesInDomain = 0" and "ExpDate = " (blank).

      1. To activate the Try and Buy license option, enter:
        #/opt/csm/bin/csmconfig -L

        When you accept the license message, you are free to test the master for the next 90 days.

      2. To activate the Permanent license option, enter:
        #/opt/csm/bin/csmconfig -L /dump/csm_dump/csmlum.full

        The csmlum.full license file is available on your purchased CD.

    7. To check the status of the license, enter:
      • The Try and Buy option displays the following:

        Note: "ExpDate = Fri Jul 08 05:29:59 2005" and "MaxNumNodesInDomain = -1 (unlimited)".

      • The Permanent license option displays a blank value next to ExpiryDate:
         ExpiryDate =
  4. Now the CSM master is ready and you're all set to define the nodes. The nodes managed by this master are known as ManagedNodes.
    1. To set the PATH variable, enter:
      #export PATH=/opt/csm/bin:$PATH
    2. You can define the nodes in two ways:
      1. #definenode -n node1
        #definenode -n node2
      2. #definenode -f node_def_file

        The node_def_file contains stanzas for each node as follows:
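        The exact stanza attributes vary by CSM release; the following hypothetical sketch only shows the general shape of a stanza (every attribute name and value here is an assumption — check your release's documentation, or the lsnode -l output for an existing node, for the real attributes):

            node1:
                    InstallOSName=Linux
                    InstallDistributionName=RedHatEL-AS
                    InstallDistributionVersion=3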
    3. After you add the node definition on the master, you can see the defined nodes by executing:
      #lsnode -l -n node1
    4. Now it's time to register the nodes.

      Make sure the /etc/hosts file is up to date on all the nodes before executing the next command.

      #updatenode -n node1
      #updatenode -n node2

      Or, try the following:

      #updatenode -a    # updates all the nodes
    5. The monitorinstall command shows the status of all the nodes:
         Node		Mode		Status
         node1		Managed		Installed
         node2		PreManaged	Not Installed
  5. Make sure your CSM master and nodes are successfully installed and configured.

    1. To test, enter:
      #export DSH_LIST=/etc/node_list

      The node_list file contains the short hostnames of your nodes, one per line.
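      For the two-node setup in this article, the file would simply contain:

        node1
        node2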

    2. To display the date output from both nodes, enter:
      #dsh date
    3. To show the status of all the nodes, whether they are "alive" or not, enter:
      #lsnode -p
      node1: 1 (Alive)
      node2: 1 (Alive)
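If you script these health checks, a small helper can count the live nodes in the lsnode -p output. This is a sketch, not a CSM command; it only assumes the "node: 1 (Alive)" line format shown above:

```shell
#!/bin/sh
# count_alive: count the nodes that `lsnode -p` reports as alive.
# Assumes each live node prints a line containing ": 1 (Alive)",
# matching the sample output above.
count_alive() {
    grep -c ': 1 (Alive)'
}

# Typical use on the CSM master:
#   lsnode -p | count_alive
```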
  6. The Distributed Command Execution Manager (DCEM) GUI is included with CSM for managing nodes.
  7. You need to make a backup of the CSM and ERRM data after every major change in the setup.
    1. Use the csmbackup command to perform the backup:

      This command backs up the CSM master data in the /var/opt/csm/csmdata directory.
    2. To back up other files where <my_files> are /etc/dhcpd.conf, /etc/hosts, and so forth, enter:
      #csmbackup  -f  <my_files>
    3. To back up the node data, enter:
      #lsnode  -F  > /csmbackup/
      #nodegrp -L  > /csmbackup/
    4. To back up the ERRM data, enter:
      #lsrsrc -i IBM.Condition > /csmbackup/
      #lsrsrc -i IBM.Response > /csmbackup/
      #lscondresp  -lx > /csmbackup/

      You should always back up the following directories:

      • /csminstall
      • /etc/opt/csm
      • /var/opt/csm
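The three directory backups above can be scripted. The helper below is a sketch, not a CSM command; the function name, the dated archive naming, and the /csmbackup destination are all assumptions:

```shell
#!/bin/sh
# backup_csm_dirs DEST DIR...
# Archive each existing directory DIR into DEST as a dated tarball.
# Hypothetical helper; adjust paths and naming for your site.
backup_csm_dirs() {
    dest=$1; shift
    stamp=$(date +%Y%m%d)
    mkdir -p "$dest"
    for dir in "$@"; do
        [ -d "$dir" ] || continue              # skip directories absent on this master
        name=$(printf '%s' "$dir" | tr / _)    # e.g. /etc/opt/csm -> _etc_opt_csm
        tar czf "$dest/csm${name}-${stamp}.tar.gz" -C / "${dir#/}"
    done
}

# Typical call on the master:
#   backup_csm_dirs /csmbackup /csminstall /etc/opt/csm /var/opt/csm
```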
  8. Perform the following steps to restore from a backup.
    1. To restore, enter:

      The hostname and operating system on the master shouldn't change between the backup and the restore.

    2. To restore node data, enter:
      #definenode -f  /csmbackup/
      #nodegrp -f  /csmbackup/

      The existing IBM.Condition and IBM.Response resources should be deleted before restoring the ERRM data.

    3. To restore ERRM data, enter:
      #mkrsrc -f /csmbackup/  IBM.Condition
      #mkrsrc -f /csmbackup/  IBM.Response


