Adding a host

This topic describes how to add a new IBM Spectrum® Scale node. The IBM Spectrum Scale node can be an IBM Spectrum Scale client, an HDFS Transparency NameNode, or a DataNode.

About this task

See the Preparing the environment section to prepare the new nodes.

Note:
  • Ensure that the IBM Spectrum Scale service is in the integrated state before adding the node.
  • On the new host being added, create a user ID and group ID called anonymous with the same values as on all the other GPFS nodes. For more information, see Create the anonymous user id.
  • If you are adding new nodes to an existing cluster, and the nodes being added already have IBM Spectrum Scale installed on them, ensure that the new nodes are at the same version of IBM Spectrum Scale as the existing cluster. Do not mix GPFS™ nodes with different versions of IBM Spectrum Scale software in a GPFS cluster.

    If you add a new node with an inconsistent IBM Spectrum Scale version to an existing cluster, the installation on the new node fails, although the failed node might still be displayed in the cluster list in Ambari. To delete the failed node from the cluster in Ambari, see Deleting a host.

    The new nodes can then be added to the Ambari cluster by using the Ambari web interface.

    For more information, see Adding GPFS node component.

  • If the IBM Spectrum Scale cluster is configured in admin mode central, the following steps need to be performed as a prerequisite (a worked example follows this note):
    1. On the node to be added, execute:
      1. Install all the GPFS packages.
      2. Build the GPL layer using: /usr/lpp/mmfs/bin/mmbuildgpl.
    2. On the admin node (usually the Ambari server node), execute:
      1. /usr/lpp/mmfs/bin/mmaddnode -N <FQDN-of-new-node>
      2. /usr/lpp/mmfs/bin/mmchlicense server --accept -N <FQDN-of-new-node>
      3. /usr/lpp/mmfs/bin/mmmount all
      4. /usr/lpp/mmfs/bin/mmlsmount all
  • If the host that is to be added already has HDFS Transparency installed and configured, and you want to add this host to an existing HDFS Transparency cluster through Ambari, erase the HDFS Transparency packages and configuration files on that host first. This ensures that the GPFS_Node component install step does not fail because of stale configuration information.
    Perform the following steps to clean up the stale configuration on the host to be added (see the sketch that follows these steps):
    1. Uninstall the existing HDFS Transparency package by running the following command:
      # yum erase gpfs.hdfs-protocol
    2. Remove all the HDFS Transparency configuration XML files under the /var/mmfs/hadoop/etc/hadoop/ directory.
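
    As an illustration only, the cleanup on the host to be added might look like the following. The host name newnode01 is hypothetical, and the rm command is one way to remove the configuration XML files named in step 2:
      # Uninstall the HDFS Transparency package on the host to be added
      [root@newnode01 ~]# yum erase gpfs.hdfs-protocol
      # Remove the stale HDFS Transparency configuration XML files
      [root@newnode01 ~]# rm -f /var/mmfs/hadoop/etc/hadoop/*.xml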

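    For the admin mode central prerequisite above, a minimal sketch of the command sequence is shown below. The host names newnode01.gpfs.net (the node being added) and adminnode are hypothetical, and the GPFS packages are assumed to be installed on the new node with your usual package tooling before the GPL layer is built:
      # On the node to be added, after the GPFS packages are installed
      [root@newnode01 ~]# /usr/lpp/mmfs/bin/mmbuildgpl
      # On the admin node (usually the Ambari server node)
      [root@adminnode ~]# /usr/lpp/mmfs/bin/mmaddnode -N newnode01.gpfs.net
      [root@adminnode ~]# /usr/lpp/mmfs/bin/mmchlicense server --accept -N newnode01.gpfs.net
      [root@adminnode ~]# /usr/lpp/mmfs/bin/mmmount all
      [root@adminnode ~]# /usr/lpp/mmfs/bin/mmlsmount all
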
The new nodes can then be added to the Ambari cluster by using the Ambari web interface.

Procedure

  1. On the Ambari dashboard, click Hosts > Actions > Add New Hosts.

    Add new hosts
  2. Specify the new node information, and click Registration and Confirm.
    Note:
    • The SSH Private Key is the key of the user on the Ambari Server.
    • If the warning is due to a user ID that already exists and these are the user IDs that were predefined for the cluster, the warning can be ignored. Otherwise, if there are other host check failures, check the failure by clicking the link and follow the directions in the pop-up window.

      Registration and confirm
  3. Select the services that you want to install on the new node.
    Note:
    • If HDFS Transparency DataNode is needed on a host, select DataNode, NodeManager, and GPFS Node components for that host.
    • If you want only the IBM Spectrum Scale client and not the HDFS Transparency components on a host, select only the GPFS Node component.

      Assign slaves and clients

      Assign slaves and clients

      For more information, see Adding GPFS node component.

  4. If several configuration groups are created, select one of them for the new node.

    Add host wizard
  5. Review the information and start the deployment by clicking Deploy.

    Review
  6. The Install, Start and Test panel shows the progress of the deployment.

    Install, start and test
  7. After the Install, Start and Test wizard finishes, click Complete.

    Install, start and test

    Install, start and test
  8. The new node is added to the Ambari cluster.
    On the Hosts dashboard, the new node is displayed in the host list.

    Hosts
  9. For any service with the restart required icon, go to the service dashboard and select Restart > Restart All Affected.
    Note: Ambari does not create NSDs on the new nodes. To create IBM Spectrum Scale NSDs and add them to the file system, follow the steps under the Adding disks to a file system topic in the IBM Spectrum Scale: Administration Guide.
  10. Restart the HDFS service in Ambari.
    Note: In the case of FPO, Ambari does not create NSDs on the new nodes. To create IBM Spectrum Scale NSDs and add them to the file system, follow the steps under the Adding disks to a file system topic in the IBM Spectrum Scale: Administration Guide. A hedged sketch follows the command output at the end of this topic.
    Check the cluster information:
    [root@c902f05x01 ~]# /usr/lpp/mmfs/bin/mmlscluster 
     
    GPFS cluster information 
    ======================== 
      GPFS cluster name:         bigpfs.gpfs.net 
     
      GPFS cluster id:           8678991139790049774 
     
      GPFS UID domain:           bigpfs.gpfs.net 
      Remote shell command:      /usr/bin/ssh 
      Remote file copy command:  /usr/bin/scp 
      Repository type:           CCR 
     
    Node  Daemon node name     IP address   Admin node name      Designation 
    -------------------------------------------------------------------------- 
       1   c902f05x01.gpfs.net  192.0.2.11  c902f05x01.gpfs.net  quorum 
       2   c902f05x04.gpfs.net  192.0.2.17  c902f05x04.gpfs.net  quorum 
       3   c902f05x03.gpfs.net  192.0.2.15  c902f05x03.gpfs.net  quorum 
       4   c902f05x02.gpfs.net  192.0.2.13  c902f05x02.gpfs.net 
       5   c902f05x05.gpfs.net  192.0.2.19  c902f05x05.gpfs.net 
     
    [root@c902f05x01 ~]# 
     
    [root@c902f05x01 ~]# /usr/lpp/mmfs/bin/mmgetstate -a 
     
    Node number  Node name        GPFS state 
    ------------------------------------------ 
           1      c902f05x01       active 
           2      c902f05x04       active 
           3      c902f05x03       active 
           4      c902f05x02       active 
           5      c902f05x05       active 
     
    [root@c902f05x01 ~]# 
     
    [root@c902f05x01 ~]# /usr/lpp/mmfs/bin/mmlsnsd 
     
    File system   Disk name    NSD servers 
    --------------------------------------------------------------------------- 
    bigpfs        gpfs1nsd     c902f05x01.gpfs.net 
    bigpfs        gpfs2nsd     c902f05x02.gpfs.net 
    bigpfs        gpfs3nsd     c902f05x03.gpfs.net 
    bigpfs        gpfs4nsd     c902f05x04.gpfs.net 
    bigpfs        gpfs5nsd     c902f05x03.gpfs.net 
    bigpfs        gpfs6nsd     c902f05x02.gpfs.net 
    bigpfs        gpfs7nsd     c902f05x01.gpfs.net 
    bigpfs        gpfs8nsd     c902f05x04.gpfs.net 
    bigpfs        gpfs9nsd     c902f05x02.gpfs.net 
    bigpfs        gpfs10nsd    c902f05x03.gpfs.net 
    bigpfs        gpfs11nsd    c902f05x04.gpfs.net 
    bigpfs        gpfs12nsd    c902f05x01.gpfs.net 
    bigpfs        gpfs13nsd    c902f05x02.gpfs.net 
    bigpfs        gpfs14nsd    c902f05x03.gpfs.net 
    bigpfs        gpfs15nsd    c902f05x04.gpfs.net 
    bigpfs        gpfs16nsd    c902f05x01.gpfs.net 
     
    [root@c902f05x01 ~]#  
     
    [root@c902f05x05 ~]# mount | grep bigpfs 
    bigpfs on /bigpfs type gpfs (rw,relatime) 
    [root@c902f05x05 ~]#  
     
    [root@c902f05x01 ~]# /usr/lpp/mmfs/hadoop/sbin/mmhadoopctl connector getstate 
     
    c902f05x01.gpfs.net: namenode running as process 17599. 
    c902f05x01.gpfs.net: datanode running as process 21978. 
    c902f05x05.gpfs.net: datanode running as process 5869. 
    c902f05x04.gpfs.net: datanode running as process 25002. 
    c902f05x03.gpfs.net: datanode running as process 10908. 
    c902f05x02.gpfs.net: datanode running as process 6264. 
    [root@c902f05x01 ~]#
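
    For reference only, a minimal sketch of creating an NSD on the new node and adding it to the bigpfs file system follows. The disk device /dev/sdx, the NSD name gpfs17nsd, and the stanza values are assumptions for illustration; the authoritative steps are in the Adding disks to a file system topic of the IBM Spectrum Scale: Administration Guide.
    [root@c902f05x01 ~]# cat newdisk.stanza
    # Hypothetical stanza for one free disk attached to the new node c902f05x05
    %nsd: device=/dev/sdx
      nsd=gpfs17nsd
      servers=c902f05x05.gpfs.net
      usage=dataAndMetadata
      failureGroup=5
      pool=system
    [root@c902f05x01 ~]# /usr/lpp/mmfs/bin/mmcrnsd -F newdisk.stanza
    [root@c902f05x01 ~]# /usr/lpp/mmfs/bin/mmadddisk bigpfs -F newdisk.stanza
    After mmadddisk completes, mmlsnsd and mmdf can be used to confirm that the new NSD belongs to bigpfs.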