Adding an IBM Cloud Private cluster node

Add worker, proxy, management, vulnerability advisor, and custom host group nodes to your IBM® Cloud Private cluster.

Preparing the new node for installation

Complete the following steps on the new node:

  1. Ensure that all default ports are open but not in use, and that no firewall rules block these ports. During installation, the installer also confirms that these ports are open. For more information about the IBM Cloud Private default ports, see Default ports.

    To manually check whether a port is open and available, you can run one of the following two commands, where <port_numbers> represents the TCP/UDP port or ports to check:

    • Run the ss command:

       ss -tnlp | awk '{print $4}' | egrep -w "<port_numbers>"
      

      If the port is not in use, the output is empty. If the port is in use, the output displays as it does in the following example:

       # ss -tnlp | awk '{print $4}' | egrep -w "8001|8500|3306"
       :::8001
       :::3306
       :::8500
      
    • Or, if you installed network utilities, run the netstat command:

       netstat -tnlp | awk '{print $4}' | egrep -w "<port_numbers>"
      

      If the port is in use, the output displays as it does in the following example:

       # netstat -tnlp | awk '{print $4}' | egrep -w "8001|8500|3306"
       :::8001
       :::3306
       :::8500
      

      Port numbers must be separated with the | character. See the following example:

      netstat -tnlp | awk '{print $4}' | egrep -w "8101|8500|3306"
      
  2. Configure the DNS utilities to make sure the host name and fully qualified domain name (FQDN) of the newly added node are resolvable within the cluster.
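
    For example, you can confirm that the new node's FQDN resolves from an existing cluster node by using the getent utility, assuming it is available on your operating system (the host name is a placeholder):

       # replace <new_node_fqdn> with the fully qualified domain name of the new node
       getent hosts <new_node_fqdn>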

  3. Ensure network connectivity between the new node and all other nodes in your cluster.
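
    For example, a basic reachability check from an existing cluster node, assuming ICMP traffic is permitted on your network (the IP address is a placeholder):

       # replace <node_ip_address> with the IP address of the new node
       ping -c 3 <node_ip_address>
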
  4. Synchronize the clock on the new node to the rest of the cluster nodes. To synchronize your clocks, you can use network time protocol (NTP). For more information about setting up NTP, see the user documentation for your operating system.
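
    On systemd-based distributions, one way to verify that the clock is synchronized is the timedatectl command; this is an illustration, and your operating system might provide a different tool:

       # check the synchronization status that is reported in the output
       timedatectl status
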
  5. On the new node, confirm that a supported version of Python is installed. Python versions 2.6 to 2.9.x and 3.5 or later are supported.

      python --version
    
  6. Ensure that an SSH client is installed on the new node.
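
    For example, you can confirm that the OpenSSH client is present by checking its version:

       ssh -V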

  7. If you use SSH public key authentication to create the secure connection between your cluster nodes, add the SSH public key to the new node. From the boot node, add the SSH public key to the node by running the following command:

       ssh-copy-id -i ~/.ssh/id_rsa.pub <user>@<node_ip_address>
    

    Where <user> is the user name for the node, and <node_ip_address> is the IP address of that node.

  8. If you manually install Docker on your non-boot nodes, install Docker on the new node. See Installing Docker on IBM Cloud Private.

Preparing a new arch node for installation

If the new node is an arch node, complete the following steps on the boot node before you add the arch node to your cluster. These steps are needed in addition to the preceding steps for preparing a new node. You do not need to complete these additional steps when you add an arch node whose architecture already exists in your cluster.

For instance, if your cluster includes an amd64 arch node and you want to add a ppc64le arch node, you must first complete these steps to prepare the new arch node.

  1. From the boot or master node, copy the offline package of the new arch node to the /<installation_directory>/cluster/images directory.
  2. Change to the cluster directory on the boot node.
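
     For example, steps 1 and 2 might look like the following commands, assuming the default directory layout and a new ppc64le node (the archive name matches the files that are listed later in this topic):

     # copy the offline package for the new architecture into the images directory
     cp ibm-cloud-private-ppc64le-3.2.0.tar.gz /<installation_directory>/cluster/images/
     # change to the cluster directory
     cd /<installation_directory>/cluster
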
  3. Push the new arch node images and build the multi-arch images by running the following command:

    docker run -e LICENSE=accept --net=host \
    -v "$(pwd)":/installer/cluster \
    ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.0-ee multi-arch-image
    
  4. Add the worker node to your cluster by running the following command:

    docker run -e LICENSE=accept --net=host \
    -v "$(pwd)":/installer/cluster \
    ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.0-ee worker -l <IP-AddressOfWorkerNode> -vv
    

Your new arch node is prepared for installation.

Adding nodes

Complete the following steps from the boot node that was used to install your cluster.

  1. Change to the cluster directory within your IBM Cloud Private installation directory.

    cd /<installation_directory>/cluster
    
  2. Ensure that the installer for the platform that the new node runs on is available in your /<installation_directory>/cluster/images directory.

    • For a Linux® node, you need the ibm-cloud-private-x86_64-3.2.0.tar.gz or ibm-cp-app-mod-x86_64-3.2.0.tar.gz file.
    • For a Linux® on Power® (ppc64le) node, you need the ibm-cloud-private-ppc64le-3.2.0.tar.gz or ibm-cp-app-mod-ppc64le-3.2.0.tar.gz file.
    • For an IBM® Z worker node, you need the ibm-cloud-private-s390x-3.2.0.tar.gz file.
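
     You can confirm that the correct archive is in place with a directory listing, for example:

       ls /<installation_directory>/cluster/images
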
  3. Add the new node.

Adding a worker node

Note: To add an IBM Z node to your cluster, add the IP address for the Z worker node to the /<installation_directory>/hosts file.

To add a worker node, run the appropriate command based on the worker nodes to be added:
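
For example, to add worker nodes at specific IP addresses, the command follows the same pattern as the commands for the other node types in this topic (the IP address placeholders are illustrative):

  docker run -e LICENSE=accept --net=host \
  -v "$(pwd)":/installer/cluster \
  ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.0-ee worker -l \
  ip_address_workernode1,ip_address_workernode2

As with the other node types, the IP addresses that you specify are added to the hosts file.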

Adding a management node

To add a management node, run the following command:

  docker run -e LICENSE=accept --net=host \
  -v "$(pwd)":/installer/cluster \
  ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.0-ee management -l \
  ip_address_managementnode1,ip_address_managementnode2

In this command, ip_address_managementnode1 and ip_address_managementnode2 are IP addresses of new management nodes. When you run this command, the IP addresses that you specify are added to the hosts file.

Adding a proxy node

This procedure is supported in proxy HA environments only.

To add a proxy node, run the following command:

  docker run -e LICENSE=accept --net=host \
  -v "$(pwd)":/installer/cluster \
  ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.0-ee proxy -l \
  ip_address_proxynode1,ip_address_proxynode2

In this command, ip_address_proxynode1 and ip_address_proxynode2 are IP addresses of new proxy nodes. When you run this command, the IP addresses that you specify are added to the hosts file.

Adding a vulnerability advisor node

Important: IBM Cloud Private supports adding only a single VA node when the VA is enabled after installation. For more information, see IBM Cloud Private supports a single VA node when added during post installation.

To add a VA node, run the following command:

  docker run -t -e LICENSE=accept --net=host \
  -v "$(pwd)":/installer/cluster \
  ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.0-ee va \
  -l ip_address_vanode

In this command, ip_address_vanode is the IP address of your new VA node. When you run this command, the IP address that you specify is added to the hosts file.

Note: The Vulnerability Advisor service is not enabled on the node after you add it to your cluster. For more information, see Enabling and disabling IBM Cloud Private management services.

Adding a host group

Host group nodes are a set of worker nodes that are reserved for running specific applications or processes.

Note: You cannot change existing worker nodes in a host group to other roles.

  1. Ensure that the host group is defined in the hosts file. For more information, see Setting the node roles in the hosts file.

  2. To set up a host group, run the following command:

     docker run -e LICENSE=accept --net=host \
     -v "$(pwd)":/installer/cluster \
     ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.0-ee hostgroup -l [hostgroup-name]
    

    Note: If you want to install multiple host groups in a single command, omit the -l option.
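
    For example, to install all host groups that are defined in the hosts file, run the same command without the -l option:

     docker run -e LICENSE=accept --net=host \
     -v "$(pwd)":/installer/cluster \
     ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.0-ee hostgroup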

  3. To add a specific host to a host group for new worker nodes, run the following command:

     docker run -e LICENSE=accept --net=host \
     -v "$(pwd)":/installer/cluster \
     ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.2.0-ee hostgroup -l \
     ip_address_hostgroupnode1,ip_address_hostgroupnode2
    

    In this command, ip_address_hostgroupnode1 and ip_address_hostgroupnode2 are IP addresses of the new host group nodes.