Preparing nodes

Before you install IBM® Cloud Private, you must configure a cluster of server nodes.

Prepare your cluster for installation

Before you prepare your nodes to install IBM Cloud Private, you must make some decisions about the cluster.

  1. Review the system requirements. For more information about software and hardware requirements, see System requirements.
  2. Determine your cluster architecture, and obtain the IP address for all nodes in your cluster. For more information about node types, see Architecture. During installation, you specify the IP addresses for each node type. Remember that after you install IBM Cloud Private, you can add or remove worker, proxy, or management nodes from your cluster.
  3. Your environment can include nodes with different network device names, such as enX for RHEL or netX for Ubuntu. For Calico configurations, be sure to use interface=<REGEX> in the network settings so that every device name in your cluster is matched, as shown in the following example. A command that lists the device names on a node follows the note.

    interface="en.*,net.*,eth.*"
    

    Note: You cannot change Calico network settings after the initial setup without causing network disruption. Plan these settings before you install your IBM Cloud Private cluster.
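
    To see which network device names are present on a node, and so decide what the interface regular expression must match, you can list the interfaces with the ip command. This is a quick check, not an IBM Cloud Private requirement:

     # List the device name of every network interface on this node
     ip -o link show | awk -F': ' '{print $2}'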

Prepare each node for installation

  1. Ensure that all required ports are open and not already in use, and that no firewall rules block them. During installation, the installer also confirms that these ports are open. For more information about the required IBM Cloud Private ports, see Required ports.

    To manually check whether a port is open and available, you can run one of the following two commands, where <port_numbers> represents the TCP/UDP port or ports to check:

    • Run the ss command:

       ss -tnlp | awk '{print $4}'| egrep -w "<port_numbers>"
      

      If the port is not in use, the output is empty. If the port is in use, the output resembles the following example:

       # ss -tnlp | awk '{print $4}' | egrep -w "8001|8500|3306"
       :::8001
       :::3306
       :::8500
      
    • Or, if you installed network utilities, run the netstat command:

       netstat -tnlp | awk '{print $4}' | egrep -w "<port_numbers>"
      

      If the port is in use, the output resembles the following example:

       # netstat -tnlp | awk '{print $4}' | egrep -w "8001|8500|3306"
       :::8001
       :::3306
       :::8500
      

      Port numbers must be separated with the | character.
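
    To check each required port one at a time, you can loop over the ss output with a small shell script. The following is a minimal sketch, assuming bash and the example ports 8001, 8500, and 3306; substitute the full list of required ports:

     # Report whether each port is already in use on this node
     for port in 8001 8500 3306; do
       if ss -tnlp | awk '{print $4}' | grep -qE ":${port}$"; then
         echo "Port ${port} is in use"
       else
         echo "Port ${port} is available"
       fi
     done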

  2. Ensure that the directories that are used for the installation are empty and that the node has sufficient disk space. For more information, see Hardware requirements and recommendations.
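
    To check the free disk space on a node, you can use the df command. The directories that are shown here are common locations only; verify the actual directories and sizes against Hardware requirements and recommendations:

     # Show free space for the root, /var, and /opt file systems
     df -h / /var /opt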

  3. Configure the /etc/hosts file on each node in your cluster.

    1. Add the IP addresses and host names for all nodes to the /etc/hosts file on each node.
      • Important: Ensure that each host name is listed next to the node's actual IP address. You cannot map a host name to the loopback address, 127.0.0.1.
      • As a technology preview, if fully qualified domain names (FQDNs) are used to identify the nodes of a non-HA cluster, add the FQDNs for all nodes to the hosts file along with the IP addresses and host names.
      • Host names in the /etc/hosts file cannot contain uppercase letters.
      • If your cluster contains a single node, you must list its IP address and host name.
    2. Comment out the lines of the file that begin with 127.0.1.1 and ::1 localhost.

      The /etc/hosts file for a cluster that contains a master node, a proxy node, and two worker nodes resembles the following code:

       127.0.0.1       localhost
       # 127.0.1.1     <host_name>
       # The following lines are desirable for IPv6 capable hosts
       #::1     localhost ip6-localhost ip6-loopback
       ff02::1 ip6-allnodes
       ff02::2 ip6-allrouters
       <master_node_IP_address> <master_node_host_name>
       <worker_node_1_IP_address> <worker_node_1_host_name>
       <worker_node_2_IP_address> <worker_node_2_host_name>
       <proxy_node_IP_address> <proxy_node_host_name>
      

      Note: While the IBM Cloud Private installation process runs, the /etc/hosts file on all cluster nodes is automatically updated to include an entry for clusterName.icp. This entry maps to cluster_vip or, if cluster_vip is not set, to cluster_lb_address. IBM Cloud Private uses the clustername.icp:8500/xx/xxx entry in the /etc/hosts file to pull the IBM Cloud Private registry Docker images.
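
      To verify that a host name resolves to the node's real IP address rather than a loopback address, you can query the resolver on each node. In this sketch, <node_host_name> is a placeholder for the host name to check:

       # Confirm that the host name maps to the node's cluster IP address,
       # not to 127.0.0.1 or 127.0.1.1
       getent hosts <node_host_name>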

  4. On each cluster node, you must configure either a default gateway or a route to the service_cluster_ip_range.

    For example, if you want to configure a route to the default IPv4 service_cluster_ip_range, run the following command:

     ip route add 10.0.0.0/16 dev eth0
    

    Where eth0 is the Ethernet interface that is assigned to your public IP address. For more information about service_cluster_ip_range, see Network settings.
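
    To confirm which interface and gateway a node uses to reach the service network, you can query the routing table. In this sketch, 10.0.0.1 is simply an address inside the example service_cluster_ip_range:

     # Show the route that the kernel selects for an address in the service range
     ip route get 10.0.0.1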

  5. For OpenStack environments, if the /etc/hosts file is managed by the cloud-init service, you must prevent cloud-init from modifying the file. In the /etc/cloud/cloud.cfg file, ensure that the manage_etc_hosts parameter is set to false:

       manage_etc_hosts: false
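
    To confirm the setting on an existing node, you can search the cloud-init configuration:

     # Check whether cloud-init is configured to manage /etc/hosts
     grep -n 'manage_etc_hosts' /etc/cloud/cloud.cfg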
    
  6. Ensure network connectivity between all nodes in your cluster. Each node must be able to reach every other node in the cluster.
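
    As a quick connectivity check, you can ping every other node from each node. The following sketch reuses the placeholder IP addresses from the earlier /etc/hosts example; replace them with your node IP addresses:

     # Ping each cluster node and report any that do not respond
     for node_ip in <master_node_IP_address> <proxy_node_IP_address> \
                    <worker_node_1_IP_address> <worker_node_2_IP_address>; do
       ping -c 3 "${node_ip}" > /dev/null && echo "${node_ip} reachable" || echo "${node_ip} UNREACHABLE"
     done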

  7. On each node in your cluster, confirm that a supported version of Python is installed. Python 2 (versions 2.6 or 2.7) and Python 3 (version 3.5 or later) are supported.

    • To check whether a version of Python 2 is installed:

      python2 --version
      
    • To check whether Python 2.7 is installed:

      python2.7 --version
      
    • To check whether a version of Python 3 is installed:

      python3 --version
      

      Note: If Python 3 is used, the location of the Python interpreter must be set in the config.yaml file by adding the following line:

      ansible_python_interpreter: /usr/bin/python3
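
      To find the interpreter path to use for this value, you can locate the Python 3 binary on the node:

       # Print the full path of the python3 interpreter
       command -v python3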
      
  8. Synchronize the clocks in each node in the cluster. To synchronize your clocks, you can use network time protocol (NTP). For more information about setting up NTP, see the user documentation for your operating system.
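
    On distributions that use systemd, you can confirm that a node's clock is synchronized by running timedatectl; the exact output wording varies by systemd version:

     # Look for "System clock synchronized: yes" (or "NTP synchronized: yes")
     timedatectl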

  9. Ensure that an SSH client is installed on each node.
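
    To confirm that an SSH client is present on a node, you can print its version:

     # Print the SSH client version; an error means that no client is installed
     ssh -V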

What to do next

To install your cluster, see Installing IBM Cloud Private.