Installing IBM® Cloud Private-CE

Set up IBM® Cloud Private-CE (Community Edition) master, worker, and proxy nodes in your cluster.

Before you install IBM® Cloud Private-CE, prepare your cluster. See Configuring your cluster.

Follow these steps to install IBM® Cloud Private-CE master, worker, and proxy nodes. Run these steps from your boot node. For more information about node types, see the IBM® Cloud Private-CE Architecture.

You must log in to the boot node as a user with root permissions to install an IBM® Cloud Private-CE cluster.

Set up the installation environment

  1. Log in to the boot node as a user with root permissions. The boot node is usually your master node. For more information about node types, see Architecture. During installation, you specify the IP addresses for each node type.
  2. Download the IBM® Cloud Private-CE installer image.

    • For Linux™ 64-bit, run this command:

      sudo docker pull ibmcom/icp-inception:2.1.0
      
    • For Linux™ on Power® 64-bit LE, run this command:

      sudo docker pull ibmcom/icp-inception-ppc64le:2.1.0
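
    (Optional) To confirm that the installer image is available locally, you can list the downloaded images. This check is not part of the documented procedure:

      sudo docker images | grep icp-inception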
      
  3. Change to an installation directory, such as /opt/ibm-cloud-private-ce-2.1.0.

    cd /<installation_directory>
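
    If the installation directory does not exist yet, create it first. For example, using the directory name from this step:

      sudo mkdir -p /opt/ibm-cloud-private-ce-2.1.0
      cd /opt/ibm-cloud-private-ce-2.1.0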
    
  4. Extract the configuration files.

    • For Linux™ 64-bit, run this command:

      sudo docker run -e LICENSE=accept \
      -v "$(pwd)":/data ibmcom/icp-inception:2.1.0 cp -r cluster /data
      
    • For Linux™ on Power® 64-bit LE, run this command:

      sudo docker run -e LICENSE=accept \
      -v "$(pwd)":/data ibmcom/icp-inception-ppc64le:2.1.0 cp -r cluster /data
      

    A cluster directory is created inside your installation directory. For example, if your installation directory is /opt, the /opt/cluster folder is created. The cluster directory contains the following files:

    • config.yaml: The configuration settings that are used to install IBM® Cloud Private to your cluster.

    • hosts: The definition of the nodes in your cluster.

    • misc/storage_class: A folder that contains the dynamic storage class definitions for your cluster.

    • ssh_key: A placeholder file for the SSH private key that is used to communicate with other nodes in the cluster.
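
    (Optional) You can verify the extraction by listing the new directory:

      ls -l cluster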

  5. Create a secure connection from the boot node to all other nodes in your cluster. Complete one of the following processes:

    • Share SSH keys among the cluster nodes. See Sharing SSH keys among cluster nodes.
    • Configure password authentication for the cluster nodes in the hosts file.

  6. Modify the /<installation_directory>/cluster/hosts file.

    1. Add the IP address of each node in the cluster to the /<installation_directory>/cluster/hosts file. See Hosts file.

      Note: Worker nodes support mixed architectures. You can add worker nodes that run on Linux™ 64-bit, Linux™ on Power® 64-bit LE, and IBM® Z platforms to a single cluster.
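
      For illustration, the hosts file for a small cluster in which the master node also serves as the proxy node might look like the following example. The IP addresses are placeholders:

        [master]
        203.0.113.10

        [worker]
        203.0.113.11
        203.0.113.12

        [proxy]
        203.0.113.10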

    2. Set the pod eviction and deletion policies for the master, proxy, and management nodes. You must modify the Kubernetes settings on these nodes to prevent Kubernetes from deleting required system components. See Configure nodes to avoid pod eviction.

  7. If you use SSH keys to secure your cluster, replace the ssh_key placeholder file in the /<installation_directory>/cluster folder with the private key that is used to communicate with the other cluster nodes. See Sharing SSH keys among cluster nodes. Run this command:

    sudo cp ~/.ssh/id_rsa /<installation_directory>/cluster/ssh_key
    

    In this example, ~/.ssh/id_rsa is the location and name of the private key file.
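
    If you do not have a key pair yet, one common way to create and distribute it is shown in the following sketch. The commands assume that password login to the nodes is still enabled; <node_ip> is a placeholder. Repeat the ssh-copy-id command for each node in the cluster:

      ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
      ssh-copy-id -i ~/.ssh/id_rsa.pub root@<node_ip>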

Customize your cluster

You can complete most of your cluster customization in the /<installation_directory>/cluster/config.yaml file. To review a full list of parameters that are available to customize, see Cluster configuration settings.

You can also set node-specific parameter values in the /<installation_directory>/cluster/hosts file. However, parameter values that are set in the config.yaml file take priority during an installation. To set a parameter value in the hosts file, you must remove the parameter from the config.yaml file. For more information about setting node-specific parameter values in the hosts file, see Hosts file.
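
Because the hosts file is an Ansible inventory file, a node-specific value follows the node's IP address on the same line. The following sketch uses a hypothetical parameter name and placeholder addresses; the value applies only to the second worker node:

    [worker]
    203.0.113.11
    203.0.113.12 <parameter_name>=<value>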

  1. In an environment that has multiple network interfaces (NICs), such as OpenStack or AWS, ensure that you add the following line to the config.yaml file:

    cluster_access_ip: <external IP address>
    

    For more information about network settings, see Table 4: Network settings.

  2. (Optional) Enable the Kibana dashboard for logging. See Enabling Kibana during installation.
  3. (Optional) Specify a certificate authority (CA) for your cluster. See Specifying your own certificate authority (CA) for IBM® Cloud Private services.
  4. (Optional) Set up a federation. See Table 8: Federation settings. This feature is available as a technology preview only.
  5. (Optional) Provision GlusterFS storage on worker nodes. See GlusterFS storage.
  6. (Optional) Configure a vSphere cloud provider. See Configuring a vSphere cloud provider.
  7. (Optional) Create one or more storage classes for the storage provisioners in your environment. See Dynamic storage provisioning.
  8. (Optional) Encrypt cluster data network traffic with IPsec. See Encrypting cluster data network traffic with IPsec.
  9. (Optional) Encrypt the volumes that contain your cluster data, such as the etcd key value store. See Encrypting volumes by using eCryptfs. Note: In IBM® Cloud Private Version 2.1.0, volume encryption is not supported on Linux™ on Power® 64-bit LE.

Deploy the environment

  1. Change to the cluster folder in your installation directory.

    cd /<installation_directory>/cluster
    
  2. Deploy your environment.

    • For Linux™ 64-bit, run this command:

      sudo docker run -e LICENSE=accept --net=host \
      -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception:2.1.0 install
      
    • For Linux™ on Power® 64-bit LE, run this command:

      sudo docker run -e LICENSE=accept --net=host \
      -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception-ppc64le:2.1.0 install
      
  3. Verify the status of your installation.

    • If the installation succeeded, the access information for your cluster is displayed:

      UI URL is https://master_ip:8443 , default username/password is admin/admin
      

      In this message, master_ip is the IP address of the master node for your IBM® Cloud Private-CE cluster.

      Note: If you created your cluster within a private network, use the public IP address for the master node to access the cluster.
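
      (Optional) Before you open a browser, you can check that the management console port answers. This check is not part of the documented procedure, and master_ip is a placeholder:

        curl -k -s -o /dev/null -w "%{http_code}\n" https://<master_ip>:8443

      A status code such as 200 or 302 indicates that the console is responding.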

    • If you encounter errors during installation, see Troubleshooting.
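
      The installer runs Ansible under the covers, so rerunning the deployment with more verbose output is often a useful first diagnostic step. A sketch for Linux™ 64-bit, assuming that the installer passes the -v option through to Ansible; repeat the flag (-vv, -vvv) for more detail:

        sudo docker run -e LICENSE=accept --net=host \
        -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception:2.1.0 install -v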

Access your cluster

  1. From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.

    • For more information about accessing your cluster by using the IBM® Cloud Private-CE management console from a web browser, see Accessing your IBM® Cloud Private cluster by using the management console.
    • For more information about accessing your cluster by using the Kubernetes command line (kubectl), see Accessing your IBM® Cloud Private cluster by using the kubectl CLI.

      Note: If you are unable to log in immediately after the installation completes, it might be because the management services are not ready. Wait for a few minutes and try again.

      Note: You might see a 502 Bad Gateway message when you open a page in the management console shortly after installation. If you see this message, the nginx service has not yet started all components. The pages load after all components start.
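
      Before kubectl can reach the cluster, it must be configured with your cluster's API server address and an authentication token. The linked topic has the exact steps; the following sketch shows only the general pattern, assuming that you copied a token from the management console and that the Kubernetes API server listens on port 8001. The cluster and context names are arbitrary:

        kubectl config set-cluster my-cluster --server=https://<master_ip>:8001 --insecure-skip-tls-verify=true
        kubectl config set-credentials admin --token=<token_from_console>
        kubectl config set-context my-context --cluster=my-cluster --user=admin
        kubectl config use-context my-context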

Post-installation tasks

  1. Ensure that all the IBM® Cloud Private-CE default ports are open. For more information about the default IBM® Cloud Private-CE ports, see Default ports.
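
    (Optional) One quick way to check whether a port is listening on a node is the ss utility. This check is illustrative only; 8443 is the default management console port:

      sudo ss -tlnp | grep 8443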
  2. Back up the boot node. Copy your /<installation_directory>/cluster directory to a more secure location.
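
    For example, to archive the cluster directory and copy it to another host, you might run commands like the following. The user name, host, and target path are placeholders:

      sudo tar -czf cluster-backup.tar.gz -C /<installation_directory> cluster
      scp cluster-backup.tar.gz <user>@<backup_host>:/<secure_location>/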