Installing IBM Cloud Private with OpenShift

You can install IBM Cloud Private with OpenShift by using the IBM Cloud Private installer.

Installation can be completed in five main steps:

  1. Configure the boot node
  2. Set up the installation environment
  3. Configure your cluster
  4. Run the IBM Cloud Private installer
  5. Post installation tasks

Configure the boot node

The IBM Cloud Private with OpenShift installer can run from either a dedicated boot node or an OpenShift master node. If the boot node is not an OpenShift node, install Docker for your boot node only.

The boot node is the node that is used for the installation of your cluster. The boot node is usually your master node. For more information about the boot node, see Boot node.

You need a version of Docker that is supported by IBM Cloud Private installed on your boot node. See Supported Docker versions.

For the procedure to install Docker, see Manually installing Docker.
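
Before you move on, you can confirm that a supported Docker version is installed and that the daemon is running on the boot node; this is a minimal check that assumes Docker is managed by systemd:

  # Print the installed Docker version and confirm that the daemon is active
  docker --version
  sudo systemctl is-active docker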

Set up the installation environment

  1. Log in to the boot node as a user with root permissions or as a user with sudo privileges.
  2. Download the installation files for IBM Cloud Private 3.1.0. You must download the correct file or files for the type of nodes in your cluster. You can obtain these files from the IBM Passport Advantage® website.

    • For a Linux® x86_64 cluster, download the ibm-cloud-private-rhos-3.1.0.tar.gz file.
  3. Extract the images and load them into Docker. Extracting the images might take a few minutes.

     tar xf ibm-cloud-private-rhos-3.1.0.tar.gz -O | sudo docker load
    
  4. Create an installation directory to store the IBM Cloud Private configuration files, and change to that directory.

    For example, to store the configuration files in /opt/ibm-cloud-private-rhos-3.1.0, run the following commands:

     mkdir /opt/ibm-cloud-private-rhos-3.1.0; \
     cd /opt/ibm-cloud-private-rhos-3.1.0
    
  5. Extract the cluster directory:

    sudo docker run --rm -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception-amd64:3.1.0-rhel-ee cp -r cluster /data
    
  6. Copy the ibm-cloud-private-rhos-3.1.0.tar.gz file to the cluster/images directory:

    sudo cp ibm-cloud-private-rhos-3.1.0.tar.gz cluster/images
    
  7. Create the cluster configuration files. The OpenShift configuration files are found on the OpenShift master node.

    1. Copy the OpenShift admin.kubeconfig file to the cluster directory. The admin.kubeconfig file is located at /etc/origin/master/admin.kubeconfig on the master node:

      sudo cp /etc/origin/master/admin.kubeconfig cluster/kubeconfig
      
    2. Copy the OpenShift SSH key to the cluster directory:

      sudo cp ~/.ssh/id_rsa cluster/ssh_key
      
    3. Copy the OpenShift inventory file to the cluster directory:

      sudo cp openshift-ansible/inventory/hosts cluster/
      

If your boot node is different from the OpenShift master node, copy the preceding files to the boot node.
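
The following is a minimal sketch of copying those files to the boot node with scp; it assumes root SSH access to the master node and uses <openshift-master> as a placeholder for your master host name:

  # Run from the installation directory on the boot node
  scp root@<openshift-master>:/etc/origin/master/admin.kubeconfig cluster/kubeconfig
  scp root@<openshift-master>:.ssh/id_rsa cluster/ssh_key
  scp root@<openshift-master>:openshift-ansible/inventory/hosts cluster/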

Configure your cluster

Update the config.yaml file in the cluster directory that you extracted in step 5 with the following configurations:

  cluster_CA_domain: <your-openshift-master-fqdn>
  tiller_service_ip: "None"
  mariadb_privileged: "false"
  install_on_openshift: true
  storage_class: <storage class available in OpenShift>

  ## Cluster Router settings
  router_http_port: 5080
  router_https_port: 5443

  ## Nginx Ingress settings
  ingress_http_port: 3080
  ingress_https_port: 3443

In the preceding sample, you can set the ports to any free port number.
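
To confirm that the ports you choose are not already in use, you can list the listening sockets on the node; a quick check, assuming the ss utility is available:

  # Any output means that one of the sample ports is already taken
  sudo ss -ltn | grep -E ':(5080|5443|3080|3443)\b'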

Update the label on the master node with the compute role:

oc label node <master node host name> node-role.kubernetes.io/compute=true
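
You can confirm that the label was applied by listing the node labels, for example:

  oc get node <master node host name> --show-labels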

Run the IBM Cloud Private installer

Run the install-on-openshift command:

sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster ibmcom/icp-inception-amd64:3.1.0-rhel-ee install-on-openshift

If SELinux is enabled on the boot node, run the following command instead:

  sudo docker run --rm -v $(pwd):/installer/cluster -e LICENSE=accept --security-opt label:disable ibmcom/icp-inception-amd64:3.1.0-rhel-ee install-on-openshift
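
When the installer finishes, you can check that the IBM Cloud Private pods are starting; a quick check, assuming the kubeconfig that you copied into the cluster directory is still valid:

  # List the IBM Cloud Private pods in the kube-system namespace
  kubectl --kubeconfig cluster/kubeconfig -n kube-system get pods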

Access your cluster

Access your cluster by using a different port than the one that was used for standalone IBM Cloud Private. From a web browser, browse to the URL of your cluster. For a list of supported browsers, see Supported browsers.
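
For example, assuming the sample router_https_port value of 5443 from the Configure your cluster section, you can check that the port responds before you open it in a browser:

  # Hypothetical URL; substitute your cluster_CA_domain and configured port
  curl -k https://<your-openshift-master-fqdn>:5443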

Post installation tasks

Open firewall ports 44134 and 44135 for the tiller-deploy service on the master node

To add the firewall rules on the master node that allow TCP traffic on ports 44134 and 44135, run the following commands:

sudo iptables -A OS_FIREWALL_ALLOW -m state --state NEW -p tcp --dport 44134 -j ACCEPT 
sudo iptables -A OS_FIREWALL_ALLOW -m state --state NEW -p tcp --dport 44135 -j ACCEPT
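
You can confirm that the rules are in place by listing the chain, for example:

  # Show the OS_FIREWALL_ALLOW rules for the Tiller ports
  sudo iptables -L OS_FIREWALL_ALLOW -n | grep -E '4413[45]'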

Recreate the Tiller service

For a deployed environment, you must re-create the Tiller service to assign it a service IP by using the following procedure:

  1. Export the existing tiller-deploy service definition to a file named tiller.yaml:

    kubectl -n kube-system get service tiller-deploy -o yaml > tiller.yaml
    
  2. Edit the tiller.yaml file:

    vim tiller.yaml

  3. Remove the clusterIP: None line from the tiller.yaml file and save it.

  4. Remove the existing tiller-deploy service:

    kubectl -n kube-system delete service tiller-deploy
    
  5. Apply the changes that you made in the tiller.yaml file to create the new service:

    kubectl -n kube-system apply -f tiller.yaml
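
After the new service is created, you can confirm that it was assigned a cluster IP; for example:

  kubectl -n kube-system get service tiller-deploy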
    

New installation: update the config.yaml file

For a new installation, you must add the following configurations, which are listed under the Configure your cluster section, to the config.yaml file that you previously extracted in step 5:

  mariadb_privileged: "false"
  install_on_openshift: true
  storage_class: <storage class available in OpenShift>

  ## Cluster Router settings
  router_http_port: 5080
  router_https_port: 5443

  ## Nginx Ingress settings
  ingress_http_port: 3080
  ingress_https_port: 3443

Correct the Security Context Constraints

To correct the Security Context Constraints, run the following command:

  kubectl --kubeconfig /etc/origin/master/admin.kubeconfig  patch scc icp-scc -p '{"allowPrivilegedContainer": true}'

The output should resemble the following text:

  # kubectl --kubeconfig /etc/origin/master/admin.kubeconfig  patch scc icp-scc -p '{"allowPrivilegedContainer": true}'
  securitycontextconstraints "icp-scc" patched

After you apply the new Security Context Constraints, you will see the following update:

  # kubectl --kubeconfig /etc/origin/master/admin.kubeconfig get scc icp-scc
  NAME      PRIV      CAPS      SELINUX     RUNASUSER   FSGROUP    SUPGROUP   PRIORITY   READONLYROOTFS   VOLUMES
  icp-scc   true      []        MustRunAs   RunAsAny    RunAsAny   RunAsAny   1          false            [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret]

Fix file permissions

If SELinux is enabled on the master node and the helm-repo and mgmt-repo pods are in an error state, run the following commands to fix the file permissions on the master node:

  sudo mkdir -p /var/lib/icp/helmrepo
  sudo mkdir -p /var/lib/icp/mgmtrepo
  sudo chcon -Rt svirt_sandbox_file_t /var/lib/icp/helmrepo
  sudo chcon -Rt svirt_sandbox_file_t /var/lib/icp/mgmtrepo
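
You can verify that the new SELinux context was applied to both directories; for example:

  # Display the SELinux context of the repository directories
  ls -Zd /var/lib/icp/helmrepo /var/lib/icp/mgmtrepo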