Installing IBM Cloud Private Cloud Native and Enterprise editions

Use the following steps to install either the IBM Cloud Private Cloud Native or Enterprise edition.

For either IBM Cloud Private Cloud Native or Enterprise editions, you can install a standard or high availability (HA) cluster.

You can have an IBM Cloud Private cluster that supports Linux® x86_64, Linux® on Power® (ppc64le), and Linux on IBM Z and LinuxONE systems.

Installation can be completed in six main steps:

  1. Install Docker for your boot node only
  2. Set up the installation environment
  3. Customize your cluster
  4. Set up Docker for your cluster nodes
  5. Deploy the environment
  6. Verify the installation

When the installation completes, you can access your cluster and complete post installation tasks.

If you encounter errors during installation, see Troubleshooting install.

Step 1: Install Docker for your boot node only

The boot node is the node that is used for installation of your cluster. The boot node is usually your master node. For more information about the boot node, see Boot node.

A version of Docker that is supported by IBM Cloud Private must be installed on your boot node. See Supported Docker versions. To install Docker, see Manually installing Docker.

If you are installing the 3.2.1.2203 fix pack, you can upgrade the Docker version from 18.09.7 to 19.03.11 after the installation is complete. For more information, see Upgrading Docker.
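
To confirm which Docker version is already present on the boot node before you continue, you can run a quick, optional check such as the following; this is a sanity check only, not part of the product procedure:

    # Print the Docker server version that is installed on the boot node
    sudo docker version --format '{{.Server.Version}}'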

Step 2: Set up the installation environment

  1. Log in to the boot node as a user with full root permissions.
  2. Download the installation file or image.

    • For IBM Cloud Private Cloud Native or Enterprise: Download the correct file or files for the type of nodes in your cluster from the IBM Passport Advantage® website.

      • For a Linux x86_64 cluster, download the ibm-cloud-private-x86_64-3.2.1.tar.gz file.
      • For a Linux on Power (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.2.1.tar.gz file.
      • For an IBM® Z cluster, download the ibm-cloud-private-s390x-3.2.1.tar.gz file.
    • For IBM Cloud Private fix pack: Download the correct file or files for the type of nodes in your cluster from the IBM® Fix Central website.

      Currently, two fix pack versions are available: 3.2.1.x fix packs and 3.2.2.x fix packs. The 3.2.1.x fix packs are intended for environments that include Kubernetes version 1.13.12. The 3.2.2.x fix packs include fixes that upgrade the supported version of Kubernetes. Each 3.2.2.x fix pack includes all fixes from the equivalent 3.2.1.x fix pack, except for Kubernetes-specific fixes. If you apply a 3.2.2.x fix pack, do not apply the equivalent 3.2.1.x fix pack. The latest 3.2.1.x fix pack is 3.2.1.2203. The latest 3.2.2.x fix pack is 3.2.2.2203, which upgrades Kubernetes to version 1.19.3.

      For downloading the 3.2.2.2203 fix pack:

      • For a Linux® x86_64 cluster, download the ibm-cloud-private-x86_64-3.2.2.2203.tar.gz file.
      • For a Linux® on Power® (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.2.2.2203.tar.gz file.
      • For an IBM® Z cluster, download the ibm-cloud-private-s390x-3.2.2.2203.tar.gz file.

      For downloading the 3.2.1.2203 fix pack:

      • For a Linux® x86_64 cluster, download the ibm-cloud-private-x86_64-3.2.1.2203.tar.gz file.
      • For a Linux® on Power® (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.2.1.2203.tar.gz file.
      • For an IBM® Z cluster, download the ibm-cloud-private-s390x-3.2.1.2203.tar.gz file.

        Important: If you are installing IBM Cloud Private with a fix pack, use the same instructions as for an IBM Cloud Private Cloud Native or Enterprise installation.

  3. For IBM Cloud Private Cloud Native or Enterprise: Extract the images and load them into Docker. Extracting the images can take a few minutes, during which time no output is displayed.

    Note: If you are installing IBM Cloud Private with a fix pack, replace the file name in the following commands with the file name for the fix pack installation file that you downloaded.

    • For an IBM Cloud Private Cloud Native or Enterprise installation, run the command for your platform:

      • For Linux x86_64, use the command:

        tar xf ibm-cloud-private-x86_64-3.2.1.tar.gz -O | sudo docker load
        
      • For Linux on Power (ppc64le), use the command:

        tar xf ibm-cloud-private-ppc64le-3.2.1.tar.gz -O | sudo docker load
        
      • For Linux on IBM Z and LinuxONE, use the command:

        tar xf ibm-cloud-private-s390x-3.2.1.tar.gz -O | sudo docker load
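
    After the images load, you can optionally confirm that the inception installer image is present in the local Docker image cache. This is a hedged sanity check, not a documented step; the exact image name depends on your platform:

      # List the loaded inception installer image(s); adjust the filter for your platform
      sudo docker images | grep icp-inception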
        
  4. Create an installation directory (/<installation_directory>/) to store the IBM Cloud Private configuration files, and change to that directory.

    Note: The installation directory must have at least 50 GB of available disk space for the installation and image files. For example, to store the IBM Cloud Private configuration files in /opt/ibm-cloud-private-3.2.1, run the following commands:

     sudo mkdir /opt/ibm-cloud-private-3.2.1;
     cd /opt/ibm-cloud-private-3.2.1
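
    Optional: to confirm that the file system that backs the installation directory has the required 50 GB of free space, you can run a simple check like the following; the path matches the example directory above:

     # Show available space on the file system that holds the installation directory
     df -h /opt/ibm-cloud-private-3.2.1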
    
  5. Extract the configuration files from the installer image.

    • For Linux x86_64, run this command:

      sudo docker run -v $(pwd):/data -e LICENSE=accept \
      ibmcom/icp-inception-amd64:3.2.1-ee \
      cp -r cluster /data
      
    • For Linux on Power (ppc64le), run this command:

      sudo docker run -v $(pwd):/data -e LICENSE=accept \
      ibmcom/icp-inception-ppc64le:3.2.1-ee \
      cp -r cluster /data
      
    • For Linux on IBM Z and LinuxONE, run this command:

      sudo docker run -v $(pwd):/data -e LICENSE=accept \
      ibmcom/icp-inception-s390x:3.2.1-ee \
      cp -r cluster /data
      

      A cluster directory is created inside your installation directory. For example, if your installation directory is /opt/ibm-cloud-private-3.2.1, the /opt/ibm-cloud-private-3.2.1/cluster folder is created. For an overview of the cluster directory structure, see Cluster directory structure.

      Note: By default, the cluster directory is owned by root. If you require the directory to be owned by a different user, run chown -R on the directory.
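
      For example, to give ownership of the cluster directory to a hypothetical non-root user named icpadmin, you might run a command like the following; the user name and path are illustrative only:

        sudo chown -R icpadmin:icpadmin /opt/ibm-cloud-private-3.2.1/cluster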

  6. Optional: You can view the license file for IBM Cloud Private. For a list of supported language formats, see Supported languages.

    • For Linux x86_64, run this command:

      sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-amd64:3.2.1-ee
      
    • For Linux on Power (ppc64le), run this command:

      sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-ppc64le:3.2.1-ee
      
    • For Linux on IBM Z and LinuxONE, run this command:

      sudo docker run -e LICENSE=view -e LANG=$LANG ibmcom/icp-inception-s390x:3.2.1-ee
      

    Note: The $LANG value must be a supported language. For example, to view the license in Simplified Chinese on Linux x86_64 for an IBM Cloud Private Cloud Native or Enterprise installation, run the following command:

       sudo docker run -e LICENSE=view -e LANG=zh_CN ibmcom/icp-inception-amd64:3.2.1-ee
    
  7. Create a secure connection from the boot node to all other nodes in your cluster by using either SSH keys or password authentication.

  8. Add the IP address of each node in the cluster to the /<installation_directory>/cluster/hosts file. See Setting the node roles in the hosts file. You can also define customized host groups; see Defining custom host groups.

    Note: Worker nodes can support mixed architectures. You can add worker nodes into a single cluster that run on Linux x86_64, Linux on Power (ppc64le), and IBM Z platforms. Non-worker nodes support only one type of architecture.
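
    As an illustration only, a minimal hosts file for a small cluster might look like the following sketch; the roles and example addresses are placeholders, and your own layout depends on your topology (see Setting the node roles in the hosts file):

      [master]
      192.0.2.10

      [worker]
      192.0.2.21
      192.0.2.22

      [proxy]
      192.0.2.10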

  9. Move the image files for your cluster to the /<installation_directory>/cluster/images folder.

    1. Create an images directory by running the following command:

       sudo mkdir -p /<installation_directory>/cluster/images
      
    2. If your cluster contains any Linux x86_64 nodes, place the x86 package in the images directory. Add the path to your installation image file to the following command and then run the command:

       sudo mv <installation_image_directory>/ibm-cloud-private-x86_64-3.2.1.tar.gz cluster/images/
      
    3. If your cluster contains any Linux on Power (ppc64le) nodes, place the ppc64le package in the images directory. Add the path to your installation image file to the following command and then run the command:

       sudo mv <installation_image_directory>/ibm-cloud-private-ppc64le-3.2.1.tar.gz cluster/images/
      
    4. If your cluster contains any Linux on IBM Z and LinuxONE nodes, place the s390x package in the images directory. Add the path to your installation image file to the following command and then run the command:

       sudo mv <installation_image_directory>/ibm-cloud-private-s390x-3.2.1.tar.gz cluster/images/
      

Step 3: Customize your cluster

The config.yaml file, which is located in the /<installation_directory>/cluster/ directory, contains all of the configuration settings that are needed to deploy your cluster.

  1. Optional: Replace the config.yaml file for an IBM Power environment or a Linux® on IBM® Z and LinuxONE environment

    • For IBM Power environment only: Replace the config.yaml file with the power.config.yaml file. If you are deploying your cluster into an IBM Power environment, you must use the settings in the power.config.yaml file. Complete the following steps to replace the file:

      1. Enter the following command to rename the existing config.yaml file to config.yaml.orig:

        sudo mv /<installation_directory>/cluster/config.yaml  /<installation_directory>/cluster/config.yaml.orig
        

        Replace installation_directory with the path to your installation directory.

      2. Enter the following command to copy the power.config.yaml file to config.yaml:

        sudo cp /<installation_directory>/cluster/power.config.yaml /<installation_directory>/cluster/config.yaml
        

        Replace installation_directory with the path to your installation directory.

    • For Linux® on IBM® Z and LinuxONE environment only: Replace the config.yaml file with the z.config.yaml file. If you are deploying your cluster into an IBM Z environment, you must use the settings in the z.config.yaml file. Complete the following steps to replace the file:

      1. Run the following command to rename the existing config.yaml file to config.yaml.orig:

        sudo mv /<installation_directory>/cluster/config.yaml  /<installation_directory>/cluster/config.yaml.orig
        

        Replace installation_directory with the path to your installation directory.

      2. Enter the following command to copy the z.config.yaml file to config.yaml:

        sudo cp /<installation_directory>/cluster/z.config.yaml /<installation_directory>/cluster/config.yaml
        

        Replace installation_directory with the path to your installation directory.

  2. Set up a default password in the config.yaml file that meets the default password enforcement rule '^([a-zA-Z0-9\-]{32,})$'. This rule specifies that a password must meet the following conditions:

    • The password must have a minimum length of 32 characters.
    • The password can include lowercase letters, uppercase letters, numbers, and hyphens.

    To define a custom set of password rules:

    1. Open the /<installation_directory>/cluster/config.yaml file, and set the default_admin_password. The password must satisfy all regular expressions that are specified in password_rules.

    2. Optional: You can define one or more rules as regular expressions in an array list that the password must pass. For example, a rule can state that the password must be longer than a specified number of characters or contain at least one special character, or both. The rules are written as regular expressions that are supported by the Go programming language.

      To define your custom set of password rules, add the password_rules parameter and rule values to the config.yaml file:

      password_rules:
      - '<rule value>'
      - '<rule value>'
      

      For example, the following settings define two password rules:

      password_rules:
      - '^.{10,}'             # The password must have a minimum length of 10 characters.
      - '.*[!@#\$%\^&\*].*'   # The password must include at least one of the listed special characters.
      

      To disable the password rules, add '(.*)':

      password_rules:
      - '(.*)'
      

      Note: The default_admin_password must match all rules that are defined. If password_rules is not defined, the default_admin_password must meet the default password enforcement rule '^([a-zA-Z0-9\-]{32,})$'.
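
      If you want to sanity-check a candidate default_admin_password against the default enforcement rule before you edit config.yaml, a quick, hedged shell check such as the following can help; the password value shown is an example only:

        # Exit status 0 (match) means the candidate satisfies the default rule: 32 or more letters, digits, or hyphens
        echo -n 'example-admin-password-with-32-or-more-chars' | grep -Eq '^[A-Za-z0-9-]{32,}$' && echo match || echo no-match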

  3. Optional: Customize your cluster. To review the full list of parameters that are available to customize, see Customizing the cluster with the config.yaml file. For other types of customizations that must be configured during installation, such as configuring the monitoring service or GlusterFS, review Customizing your installation.

  4. Enable python3 and configure your cluster name and cluster certificate authority (CA) domain within your config.yaml file. Define these settings as the values for the following parameters:

    • General settings
      • ansible_python_interpreter - Set the value to /usr/bin/python3 if you use python3 in your cluster nodes.
    • Cluster access settings
      • cluster_name - The name of your cluster.
      • cluster_CA_domain - The certificate authority (CA) domain to use in your cluster.

    For more information about these configuration settings and other settings that you can configure for your cluster, see Customizing the cluster with the config.yaml file.
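
    As an illustration, the relevant lines in config.yaml might look like the following sketch; the cluster name and CA domain values are placeholders, not values you must use:

      ansible_python_interpreter: /usr/bin/python3
      cluster_name: mycluster
      cluster_CA_domain: mycluster.icp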

  5. Optional: Enable IBM Multicloud Manager from your config.yaml file. By default, the multicluster-hub option is enabled and the single_cluster_mode option is true, which means IBM Multicloud Manager is not configured. You cannot use IBM Multicloud Manager with the single_cluster_mode default true setting.

    For more information and other configuration scenarios for IBM Multicloud Manager, see Configuration options for IBM Multicloud Manager with IBM Cloud Private installation.

  6. In an environment that has multiple network interfaces (NICs), such as OpenStack and AWS, you must add the following code to the config.yaml file:

       cluster_lb_address: <external address>
       proxy_lb_address: <external address>
    

    The <external address> value is the IP address, fully qualified domain name, or OpenStack floating IP address that manages communication to external services. Setting the proxy_lb_address parameter is required for proxy HA environments only.

  7. For HA environments, there are several HA installation options. See HA settings.

Step 4: Set up Docker for your cluster nodes

Cluster nodes are the master, worker, proxy, and management nodes. To learn more, see Architecture.

A version of Docker that is supported by IBM Cloud Private must be installed on your cluster nodes. See Supported Docker versions. If a supported version of Docker is not installed on your cluster nodes, IBM Cloud Private can automatically install Docker on them during the installation.

To prepare your cluster nodes for automatic installation of Docker, see Configuring cluster nodes for automatic Docker installation.

Step 5: Deploy the environment

  1. Change to the cluster folder in your installation directory by running the following command:

     cd /<installation_directory>/cluster
    
  2. Optional: Depending on your configuration, you might need to add more parameters to the deployment command. If you specified the offline_pkg_copy_path parameter in the config.yaml file, add the -e ANSIBLE_REMOTE_TEMP=<offline_pkg_copy_path> option to the deployment command, where <offline_pkg_copy_path> is the value of the offline_pkg_copy_path parameter that you set in the config.yaml file.

    Note: By default, the command to deploy your environment deploys 15 nodes at a time. If your cluster has more than 15 nodes, the deployment might take longer to finish. To speed up the deployment, you can specify a higher number of nodes to be deployed at a time by using the argument -f <number of nodes to deploy> with the install command.
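
    For example, on Linux x86_64, a deployment command that sets both a remote temporary path and a larger node batch size might look like the following sketch; the path and node count are illustrative, and you must use the inception image that matches your platform:

      sudo docker run --net=host -t -e LICENSE=accept \
      -e ANSIBLE_REMOTE_TEMP=/tmp/icp-offline-pkg \
      -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.2.1-ee install -f 30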

  3. Deploy your environment:

    • For Linux x86_64, run this command:

      sudo docker run --net=host -t -e LICENSE=accept \
      -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.2.1-ee install
      
    • For Linux on Power (ppc64le), run this command:

      sudo docker run --net=host -t -e LICENSE=accept \
      -v "$(pwd)":/installer/cluster ibmcom/icp-inception-ppc64le:3.2.1-ee install
      
    • For Linux on IBM Z and LinuxONE, run this command:

      sudo docker run --net=host -t -e LICENSE=accept \
        -v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.2.1-ee install
      
  4. Optional: If you encounter errors during deployment, rerun the deployment command with the -v option to collect more detailed error messages; a sketch of the verbose rerun follows the healthcheck commands. If you continue to receive errors during the rerun, run the following command to collect the log files:

    • For Linux x86_64, run this command:

       sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.2.1-ee healthcheck
      
    • For Linux on Power (ppc64le), run this command:

       sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-ppc64le:3.2.1-ee healthcheck
      
    • For Linux on IBM Z and LinuxONE, run this command:

       sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-s390x:3.2.1-ee healthcheck
      

      The log files are located in the cluster/logs directory.
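
      As a sketch of the verbose rerun mentioned earlier, on Linux x86_64 you might run the following; use the inception image that matches your platform:

        sudo docker run --net=host -t -e LICENSE=accept \
        -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.2.1-ee install -v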

Step 6: Verify the status of your installation

If the installation succeeded, the access information for your cluster is displayed. The URL is https://<Cluster Master Host>:<Cluster Master API Port>, where <Cluster Master Host>:<Cluster Master API Port> is defined in Master endpoint.
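
If you want a quick, scriptable check from the boot node that the console endpoint responds, a hedged example follows; replace the host and port with your own master endpoint values (8443 is only a commonly used example port):

    curl -k -I https://<Cluster_Master_Host>:8443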

Access your cluster

Now you can access your cluster. From a web browser, browse to the URL of your cluster. For a list of supported browsers, see Supported browsers.

Post installation tasks

  1. Restart your firewall.

  2. Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.

  3. Back up the boot node. Copy your /<installation_directory>/cluster directory to a secure location. If you use SSH keys to secure your cluster, ensure that the SSH keys in the backup directory remain in sync.
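
    For example, a simple way to capture the directory is to archive it to a backup location; the destination path and file name here are illustrative only:

      sudo tar czf /backup/icp-cluster-config-backup.tar.gz /opt/ibm-cloud-private-3.2.1/cluster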

  4. Maintain proper boot node access. The boot node contains authentication information that is used for the initial day-1 deployment of IBM Cloud Private and for day-2 updates. Limit access to the boot node to users with an actual business need, and govern that access by using enterprise identity management tools for approval, periodic recertification, and access revocation on employee termination or job role changes. Only users who have the clusteradmin role for IBM Cloud Private should have access to the boot node.

  5. Install other software from your bundle. See Installing IBM software onto IBM Cloud Private.

  6. Optional: Review the International Program License Agreement (IPLA) for IBM Cloud Private and IBM Multicloud Manager:

    1. Open the following link: https://www-03.ibm.com/software/sla/sladb.nsf/search?OpenForm
    2. Search for one of the following License Information numbers:
      • L-TKAO-BA3Q8F - IBM Cloud Private 3.2.1
      • L-TKAO-BA3Q3J - IBM Cloud Private Foundation 3.2.1
      • L-ECUN-BALP9Z - IBM Multicloud Manager Enterprise Edition 3.2.1
  7. Optional: Review the Notice file and non-IBM license file for IBM Cloud Private and IBM Multicloud Manager:

    1. Go to the <installation_directory>/cfc/license directory.
    2. Open the Stacked_License_for_ICP_ICP_Foundation_MCMEE_3.2.1.zip file.
    3. Go to the RTF directory and open the notices.rtf and non_ibm_license.rtf files to review the notices and non-IBM license information.