Installing IBM® Cloud Private Cloud Native or Enterprise

You can install a standard or high availability (HA) cluster for either your IBM Cloud Private Cloud Native or Enterprise bundle.

Installation of IBM Cloud Private can be completed in six main steps:

  1. Install Docker for your boot node only
  2. Set up the installation environment
  3. (Optional) Customize your cluster
  4. Set up Docker for your cluster nodes
  5. Deploy the environment
  6. Verify the installation

When the installation completes, access your cluster and complete postinstallation tasks.

Note: If you encounter errors during installation, see Troubleshooting install.

Step 1: Install Docker for your boot node only

The boot node is the node that is used for installation of your cluster. The boot node is usually your master node. For more information about the boot node, see Boot node.

You need a version of Docker that is supported by IBM Cloud Private installed on your boot node. See Supported Docker versions.

To install Docker, see Manually installing Docker.
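
To confirm that Docker is installed and running on the boot node, you can run a quick check. This is a minimal sketch; the systemctl command assumes a systemd-based distribution.

    sudo docker version
    sudo systemctl status docker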

Step 2: Set up the installation environment

  1. Log in to the boot node as a user with root permissions.
  2. Download the installation files for IBM Cloud Private. You must download the correct file or files for the type of nodes in your cluster. You can obtain these files from the IBM Passport Advantage® website.
    • For a Linux® x86_64 cluster, download the ibm-cloud-private-x86_64-3.1.0.tar.gz file.
    • For a Linux® on Power® (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.1.0.tar.gz file.
    • For a cluster that uses IBM® Z worker and proxy nodes, download the ibm-cloud-private-s390x-3.1.0.tar.gz file.
  3. Extract the images and load them into Docker. Extracting the images might take a few minutes.

    • For Linux® x86_64, run this command:

      tar xf ibm-cloud-private-x86_64-3.1.0.tar.gz -O | sudo docker load
      
    • For Linux® on Power® (ppc64le), run this command:

      tar xf ibm-cloud-private-ppc64le-3.1.0.tar.gz -O | sudo docker load
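
      After the images load, you can optionally confirm that the installer image is present. This is a minimal check; it matches the image names that are used in the later steps.

        sudo docker images | grep icp-inception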
      
  4. Create an installation directory to store the IBM Cloud Private configuration files, and change to that directory. Note: The installation directory must have at least 50 GB of available disk space for the installation and image files. For example, to store the configuration files in /opt/ibm-cloud-private-3.1.0, run the following commands:

     sudo mkdir /opt/ibm-cloud-private-3.1.0;
     cd /opt/ibm-cloud-private-3.1.0
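
     To check that enough disk space is available for the directory, you can run a quick df check. This sketch assumes the example path that is shown above.

       df -h /opt/ibm-cloud-private-3.1.0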
    
  5. Extract the configuration files from the installer image.

    • For Linux® x86_64, run this command:

       sudo docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception-amd64:3.1.0-ee cp -r cluster /data
      
    • For Linux® on Power® (ppc64le), run this command:

       sudo docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception-ppc64le:3.1.0-ee cp -r cluster /data
      

      A cluster directory is created inside your installation directory. For example, if your installation directory is /opt/ibm-cloud-private-3.1.0, the /opt/ibm-cloud-private-3.1.0/cluster folder is created. For an overview of the cluster directory structure, see Cluster directory structure.
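
      For a quick check that the extraction succeeded, you can list the new directory from your installation directory:

        sudo ls -l cluster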

  6. (Optional) You can view the license file for IBM Cloud Private.

    • For Linux® x86_64, run this command:

       sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-amd64:3.1.0-ee
      
    • For Linux® on Power® (ppc64le), run this command:

       sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-ppc64le:3.1.0-ee
      

      Where $LANG is a supported language format. For example, to view the license in Simplified Chinese using Linux® x86_64, run the following command:

      sudo docker run -e LICENSE=view -e LANG=zh_CN ibmcom/icp-inception-amd64:3.1.0-ee
      

      For a list of supported language formats, see Supported languages.

  7. Create a secure connection from the boot node to all other nodes in your cluster. For example, you can share SSH keys among the cluster nodes; see Sharing SSH keys among cluster nodes.
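
    For example, a minimal sketch of generating an SSH key on the boot node and copying it to a cluster node. Here, <node_ip> is a placeholder for each node's IP address, and the example assumes that root SSH access with password authentication is available for the initial copy:

      ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
      ssh-copy-id -i ~/.ssh/id_rsa.pub root@<node_ip>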

  8. Add the IP address of each node in the cluster to the /<installation_directory>/cluster/hosts file. See Setting the node roles in the hosts file. You can also define customized host groups; see Defining custom host groups.

    Note: Worker nodes can support mixed architectures. You can add worker nodes that run on Linux® x86_64, Linux® on Power® (ppc64le), and IBM® Z platforms to a single cluster.
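
    For illustration, a minimal hosts file sketch that uses placeholder addresses. The group names follow Setting the node roles in the hosts file; your file might define additional groups, such as management nodes:

      [master]
      192.0.2.11

      [worker]
      192.0.2.12
      192.0.2.13

      [proxy]
      192.0.2.11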

  9. If you use SSH keys to secure your cluster, replace the ssh_key file in the /<installation_directory>/cluster folder with the private key file that is used to communicate with the other cluster nodes. See Sharing SSH keys among cluster nodes. Run this command from your installation directory:

    sudo cp ~/.ssh/id_rsa ./cluster/ssh_key
    

    In this example, ~/.ssh/id_rsa is the location and name of the private key file.

  10. For IBM Cloud Private only: Move the image files for your cluster to the /<installation_directory>/cluster/images folder.

    1. Create an images directory:

      sudo mkdir -p cluster/images
      
    2. If your cluster contains x86_64 nodes, place the x86_64 package in the images directory:

      sudo mv /<path_to_installation_file>/ibm-cloud-private-x86_64-3.1.0.tar.gz  cluster/images/
      
    3. If your cluster contains ppc64le nodes, place the ppc64le package in the images directory:

      sudo mv /<path_to_installation_file>/ibm-cloud-private-ppc64le-3.1.0.tar.gz  cluster/images/
      
    4. If your cluster contains s390x nodes, place the s390x package in the images directory:

      sudo mv /<path_to_installation_file>/ibm-cloud-private-s390x-3.1.0.tar.gz  cluster/images/
      

    In these commands, <path_to_installation_file> is the path to the image files.

  11. (Optional) If you plan to add IBM® Z worker and proxy nodes to your cluster, run this command:

    sudo mv /<path_to_installation_file>/ibm-cloud-private-s390x-3.1.0.tar.gz cluster/images/
    

Step 3: Customize your cluster

  1. Set up resource limits for proxy nodes. See Configuring process resource limit on proxy nodes.
  2. You can also set a variety of optional cluster customizations that are available in the /<installation_directory>/cluster/config.yaml file. See Customizing the cluster with the config.yaml file. For additional customizations, you can also review Customizing your installation.
  3. In an environment that has multiple network interfaces (NICs), such as OpenStack and AWS, you must add the following code to the config.yaml file:

     cluster_lb_address: <external address>
     proxy_lb_address: <external address>
    

    The <external address> value is the IP address, fully qualified domain name, or OpenStack floating IP address that manages communication to external services. Setting the proxy_lb_address parameter is required for proxy HA environments only.

  4. For HA environments, see HA settings.

Step 4: Set up Docker for your cluster nodes

Cluster nodes are the master, worker, proxy, and management nodes. See Architecture.

You need a version of Docker that is supported by IBM Cloud Private installed on your cluster nodes. See Supported Docker versions.

If you do not have a supported version of Docker installed on your cluster nodes, IBM Cloud Private can automatically install Docker on them during the installation.

To prepare your cluster nodes for automatic installation of Docker, see Configuring cluster nodes for automatic Docker installation.
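
To check whether Docker is already present on a cluster node, you can query it from the boot node. This is a minimal sketch that assumes SSH access is configured as described in Step 2; replace <node_ip> with the node's address.

    ssh root@<node_ip> "docker --version"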

Step 5: Deploy the environment

  1. Change to the cluster folder in your installation directory.

     cd ./cluster
    
  2. Deploy your environment. Depending on your options, you might need to add more parameters to the deployment command.

    • If you specified the offline_pkg_copy_path parameter in the config.yaml file, add the -e ANSIBLE_REMOTE_TEMP=<offline_pkg_copy_path> option to the deployment command, where <offline_pkg_copy_path> is the value that you set for the offline_pkg_copy_path parameter in the config.yaml file.
    • By default, the command to deploy your environment deploys 15 nodes at a time. If your cluster has more than 15 nodes, the deployment might take longer to finish. To speed up the deployment, you can specify a higher number of nodes to be deployed at a time by adding the argument -f <number of nodes to deploy> to the command; see the example that follows the deployment commands.

    To deploy your environment:

    • For Linux® x86_64, run the following command:

       sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.0-ee install
      
    • For Linux® on Power® (ppc64le), run the following command:

       sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-ppc64le:3.1.0-ee install
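
      For example, to deploy an x86_64 environment and process more nodes at a time, you can append the -f argument that is described earlier. The value 30 is only an illustration; adjust it to your cluster size.

       sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.0-ee install -f 30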
      

    Note: If you encounter errors during deployment, rerun the deployment command with the -v option to collect more detailed error messages. If you continue to receive errors during the rerun, run the following command to collect the log files:

    • For Linux® x86_64, run the following command:

       sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.0-ee healthcheck
      
    • For Linux® on Power® (ppc64le), run the following command:

       sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-ppc64le:3.1.0-ee healthcheck
      

      The log files are located in the cluster/logs directory.

Verify the status of your installation

If the installation succeeded, the access information for your cluster is displayed.

The URL has the format https://<ip_address>:8443.

If you specified a cluster_lb_address value in your config.yaml file, the <ip_address> is the cluster_lb_address address. If you did not specify that value, in HA clusters, the <ip_address> in this message is the cluster_vip address that you specified, and in standard clusters, it is the IP address of the master node.

Note: If you created your cluster within a private network, use the public IP address of the master node to access the cluster.
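
As an optional quick check, you can confirm from the boot node that the console URL responds. This is a minimal sketch; the -k option skips certificate verification, which is usually necessary because the console certificate might be self-signed.

    curl -k -s -o /dev/null -w "%{http_code}\n" https://<ip_address>:8443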

Access your cluster

From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.

Postinstallation tasks

  1. Restart your firewall.
  2. Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.
  3. Back up the boot node. Copy your /<installation_directory>/cluster directory to a secure location; see the example command after this list. If you use SSH keys to secure your cluster, ensure that the SSH keys in the backup directory remain in sync.
  4. Install other software from your bundle. See Installing bundled products.
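
For example, a minimal backup of the cluster directory. The installation directory /opt/ibm-cloud-private-3.1.0 and the destination path /backup/icp-cluster are only illustrations; substitute your own paths.

    sudo cp -r /opt/ibm-cloud-private-3.1.0/cluster /backup/icp-cluster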