Upgrading IBM Cloud Private

You can upgrade IBM Cloud Private from specific previous versions.

Note: If you followed the instructions in Authentication onboarding and single sign-on to customize your console URL, then you must customize your console URL again after you upgrade to a fix pack version.

Supported upgrade paths

IBM Cloud Private supports upgrades from versions 3.2.1.2008, 3.2.1.2006, and 3.2.1.2003 to 3.2.2.2105. However, if you are on one of these 3.2.1.x fix pack versions, you cannot upgrade directly to the 3.2.2.2105 fix pack. You must first upgrade to the 3.2.2.2008 fix pack (Kubernetes version 1.16.7). Then, you can upgrade to the 3.2.2.2105 fix pack, which upgrades Kubernetes to version 1.19.3. IBM Cloud Private does not support direct upgrades from 3.2.1.2012, 3.2.1.2105, or 3.2.1.2203 to 3.2.2.2105 or 3.2.2.2203.

Version you are upgrading from | Version you are upgrading to | Procedure
3.2.2.2105 | 3.2.2.2203 | Applying fix pack
3.2.2.2012 | 3.2.2.2105 or 3.2.2.2203 | Applying fix pack
3.2.2.2008 | 3.2.2.2012, 3.2.2.2105, or 3.2.2.2203 | Upgrade
3.2.2.2006 | 3.2.2.2008 | Applying fix pack
3.2.1.2003, 3.2.1.2006, or 3.2.1.2008 | 3.2.2.2008 | Applying fix pack
A 3.2.1.x fix pack prior to 3.2.1.2003, or version 3.2.1 | 3.2.1.2003 | Applying fix pack
Any 3.2.0 fix pack version | 3.2.1, or 3.2.1.2203 (fix pack) | Upgrade
Version 3.1.2 | 3.2.1, or 3.2.1.2203 (fix pack) | Upgrade
Version 3.1.1 | 3.2.1, or 3.2.1.2203 (fix pack) | Upgrade
Version 3.1.0 | 3.2.1, or 3.2.1.2203 (fix pack) | Upgrade

Important:

Upgrading to a newer fix pack

Currently, two fix pack streams are available: 3.2.1.x fix packs and 3.2.2.x fix packs.

The latest 3.2.1.x fix pack is 3.2.1.2203. The latest 3.2.2.x fix pack is 3.2.2.2203 and upgrades Kubernetes to version 1.19.3.

Upgrading to a newer 3.2.1 fix pack

If you need to apply a newer 3.2.1 fix pack to version 3.2.1 or to an earlier 3.2.1.x fix pack version, you can use the apply fix pack command to upgrade to the newer 3.2.1.x fix pack. For more information, see Applying fix packs to your cluster.

Upgrading to 3.2.2.x fix pack versions

You can upgrade to a 3.2.2.x fix pack version using the apply fix pack command or the upgrade procedure depending on what version you are starting from and the 3.2.2.x version to which you are upgrading. Depending on your current version, you might need to complete multiple upgrades to reach the latest fix pack version.

The following table shows the upgrade paths and procedures for upgrading to 3.2.2.x fix packs:

Version you are upgrading from | Version you are upgrading to | Procedure
3.2.2.2105 | 3.2.2.2203 | Applying fix pack
3.2.2.2012 | 3.2.2.2105 or 3.2.2.2203 | Applying fix pack
3.2.2.2008 | 3.2.2.2012, 3.2.2.2105, or 3.2.2.2203 | Upgrade
3.2.2.2006 | 3.2.2.2008 | Applying fix pack
3.2.1.2003, 3.2.1.2006, or 3.2.1.2008 | 3.2.2.2008 | Applying fix pack
A 3.2.1.x fix pack prior to 3.2.1.2003, or version 3.2.1 | 3.2.1.2003 | Applying fix pack
Any 3.2.0 fix pack version | 3.2.1, or 3.2.1.2203 (fix pack) | Upgrade
Version 3.1.2 | 3.2.1, or 3.2.1.2203 (fix pack) | Upgrade
Version 3.1.1 | 3.2.1, or 3.2.1.2203 (fix pack) | Upgrade
Version 3.1.0 | 3.2.1, or 3.2.1.2203 (fix pack) | Upgrade

Note: To upgrade to the 3.2.2.2203 or an earlier 3.2.2.x fix pack, you must first upgrade to the 3.2.1.2003 fix pack version. You cannot directly upgrade to a 3.2.2.x fix pack version, which includes an updated version of Kubernetes, from an earlier version of IBM Cloud Private. After you upgrade to the 3.2.1.2003 or a newer 3.2.1.x fix pack version, you can apply the 3.2.2.2006 or 3.2.2.2008 fix pack to upgrade Kubernetes to version 1.16.7. Then, you can upgrade to the 3.2.2.2012 fix pack, which further updates Kubernetes to version 1.19.3.

About this task

Bring your own certificate

If you used your own certificate in the cluster that you are upgrading, verify whether the 127.0.0.1 IP address is included in the certificate Subject Alternative Name (SAN) list. If the IP address is not included, regenerate the certificate by adding the following IP addresses and host names to the SAN list:

   127.0.0.1
   <cluster_CA_domain>
   <cluster_lb_address>
   <cluster_vip>
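
For example, you can inspect the SAN list of an existing certificate with openssl. This sketch assumes that your certificate file is named icp-router.crt; replace the file name with your own:

   openssl x509 -in icp-router.crt -noout -text | grep -A1 "Subject Alternative Name"

If you need to regenerate a self-signed certificate that includes the required entries, the following is a minimal sketch. It assumes OpenSSL 1.1.1 or later for the -addext option; replace the placeholder values, and adjust the IP: and DNS: prefixes to match whether each value is an IP address or a host name:

   openssl req -x509 -nodes -newkey rsa:4096 -days 365 \
     -keyout icp-router.key -out icp-router.crt \
     -subj "/CN=<cluster_CA_domain>" \
     -addext "subjectAltName=IP:127.0.0.1,DNS:<cluster_CA_domain>,IP:<cluster_lb_address>,IP:<cluster_vip>"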

To support the use of your own certificate, complete the following steps before you upgrade your cluster:

  1. Create the cfc-certs/router directory for your certificates by running the following command:

    mkdir -p <installation_directory>/cluster/cfc-certs/router
    
  2. Move your certificate authority (CA) certificate and other certificates to the cfc-certs/router directory by running the following commands:

    mv <your-key> <installation_directory>/cluster/cfc-certs/router/icp-router.key
    mv <your-crt> <installation_directory>/cluster/cfc-certs/router/icp-router.crt
    mv <your-ca-cert> <installation_directory>/cluster/cfc-certs/router/icp-router-ca.crt
    
  3. Update the <installation_directory>/cluster/config.yaml file to enable bringing your own (BYO) certificates. Set the value for the use_byo_certs configuration to true. The following example shows this configuration:

    ## Whether to use BYO certs for ICP management services
    use_byo_certs: true
    

If you need to add extra charts to a local or management repository, you need to add the repository to the Helm CLI as an external repository. For more information, see Adding the internal Helm repository to Helm CLI. When you are following the procedure, use your own certificates in the commands that require you to specify certificates as parameters.
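
For example, the following is a minimal sketch of adding the internal repository with your own certificate files passed as parameters. The repository name and URL are illustrative; use the values from Adding the internal Helm repository to Helm CLI:

   helm repo add internal-repo https://<cluster_CA_domain>:8443/helm-repo/charts \
     --ca-file <installation_directory>/cluster/cfc-certs/router/icp-router-ca.crt \
     --cert-file <installation_directory>/cluster/cfc-certs/router/icp-router.crt \
     --key-file <installation_directory>/cluster/cfc-certs/router/icp-router.key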

Upgrading

  1. Log in to the boot node as a user with root permissions. The boot node is usually your master node. For more information about node types, see Architecture. During installation, you specify the IP addresses for each node type.

  2. Download the installation files for your upgrade:

    • If you are upgrading to IBM Cloud Private version 3.2.1, these files are available for download from the IBM Passport Advantage® website.

      • For a Linux® x86_64 cluster, download the ibm-cloud-private-x86_64-3.2.1.tar.gz file.
      • For a Linux® on Power® (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.2.1.tar.gz file.
      • For an IBM® Z cluster, download the ibm-cloud-private-s390x-3.2.1.tar.gz file.
    • If you are upgrading to an IBM Cloud Private fix pack version, these files are available for download from the IBM® Fix Central website.

    For downloading the 3.2.1.2203 fix pack:

    • For a Linux® x86_64 cluster, download the ibm-cloud-private-x86_64-3.2.1.2203.tar.gz file.
    • For a Linux® on Power® (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.2.1.2203.tar.gz file.
    • For an IBM® Z cluster, download the ibm-cloud-private-s390x-3.2.1.2203.tar.gz file.

    For downloading the 3.2.1.2003 fix pack to prepare for applying a 3.2.2.x fix pack:

    • For a Linux® x86_64 cluster, download the ibm-cloud-private-x86_64-3.2.1.2003.tar.gz file.
    • For a Linux® on Power® (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.2.1.2003.tar.gz file.
    • For an IBM® Z cluster, download the ibm-cloud-private-s390x-3.2.1.2003.tar.gz file.

    For downloading the 3.2.2.2105 fix pack to upgrade from the 3.2.2.2008 or 3.2.2.2006 fix pack:

    • For a Linux® x86_64 cluster, download the ibm-cloud-private-x86_64-3.2.2.2105.tar.gz file.
    • For a Linux® on Power® (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.2.2.2105.tar.gz file.
    • For an IBM® Z cluster, download the ibm-cloud-private-s390x-3.2.2.2105.tar.gz file.

    For downloading the 3.2.2.2203 fix pack to upgrade from the 3.2.2.2105 or 3.2.2.2008 fix pack:

    • For a Linux® x86_64 cluster, download the ibm-cloud-private-x86_64-3.2.2.2203.tar.gz file.
    • For a Linux® on Power® (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.2.2.2203.tar.gz file.
    • For an IBM® Z cluster, download the ibm-cloud-private-s390x-3.2.2.2203.tar.gz file.
  3. Extract the images and load them into Docker. Extracting the images might take a few minutes.

    Note: If you are upgrading to an IBM Cloud Private fix pack version, replace the file name in the following commands with the file name of the fix pack installation file that you downloaded.

    • Run the command for your architecture. These commands use the file name for upgrading to IBM Cloud Private version 3.2.1 as an example:

      • For a Linux x86_64 cluster, use the command:

        tar xf ibm-cloud-private-x86_64-3.2.1.tar.gz -O | sudo docker load
        
      • For a Linux on Power (ppc64le) cluster, use the command:

        tar xf ibm-cloud-private-ppc64le-3.2.1.tar.gz -O | sudo docker load
        
      • For a Linux on IBM Z and LinuxONE cluster, use the command:

        tar xf ibm-cloud-private-s390x-3.2.1.tar.gz -O | sudo docker load
        
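
    After the load completes, you can optionally confirm that the images are available locally, for example:

      sudo docker images | grep ibmcom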
  4. Create an installation directory and copy the cluster directories from the previous installation directory to the new IBM Cloud Private cluster folder. Use a different installation directory than you used for the previous version. For example, to store the configuration files in /opt/ibm-cloud-private-3.2.1, run the following commands:

     sudo mkdir -p /opt/ibm-cloud-private-3.2.1
     cd /opt/ibm-cloud-private-3.2.1
     sudo cp -r /<installation_directory>/cluster .
     sudo rm -rf cluster/.upgrade
    

    Note: /<installation_directory> is the full path to the installation directory for your previous version, and /<new_installation_directory> is the full path to the installation directory for the new version. You do not need to copy the entire image installation package from the previous version.

  5. Check the calico_ipip_enabled parameter value in the version that you are upgrading from.

    • If the parameter was set as calico_ipip_enabled: true, replace the parameter in the /<new_installation_directory>/cluster/config.yaml with calico_ipip_mode: Always.
    • If the parameter was set as calico_ipip_enabled: false, replace the parameter in the /<new_installation_directory>/cluster/config.yaml with calico_ipip_mode: Never.
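
    For example, the following is a minimal sketch that makes this replacement with sed, assuming the parameter appears on its own line in config.yaml; adjust the true/Always pair to false/Never if that matches your previous setting:

      sudo sed -i 's/^calico_ipip_enabled: true$/calico_ipip_mode: Always/' \
        /<new_installation_directory>/cluster/config.yaml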
  6. Move the image files for your cluster to the /<new_installation_directory>/cluster/images folder.

    • For Linux x86_64, run the following command:

      sudo mv /<path_to_images_file>/ibm-cloud-private-x86_64-3.2.1.tar.gz cluster/images/
      
    • For Linux on Power (ppc64le), run the following command:

      sudo mv /<path_to_images_file>/ibm-cloud-private-ppc64le-3.2.1.tar.gz cluster/images/
      
    • For a Linux on IBM Z and LinuxONE cluster, run the following command:

      sudo mv /<path_to_images_file>/ibm-cloud-private-s390x-3.2.1.tar.gz cluster/images/
      

      In these commands, <path_to_images_file> is the path to the images file.

  7. Check the management_services section in your config.yaml file and disable multicluster-hub if you did not previously configure IBM Multicloud Manager. See the following example:

     management_services:
       multicluster-hub: disabled
    

    Note: With IBM Multicloud Manager disabled, some of the Search features are also disabled.

  8. For the Linux® on Power® (ppc64le) environment only: Disable the cluster-api-provider-iks service and the cluster-api-provider-aks service in the config.yaml file. They are not supported in the Linux® on Power® (ppc64le) environment. They are disabled during an installation when you use the power.config.yaml file for configuration, but they must be manually disabled before an upgrade.

    In the management_services section of the config.yaml file, set the following services to disabled.

    management_services:
      cluster-api-provider-iks: disabled
      cluster-api-provider-aks: disabled
    
  9. For upgrades from an IBM Cloud Private 3.2.0 fix pack level: Delete any existing platform-header-kubectl job in the cluster. The platform-header-kubectl job is a hook for the platform-ui chart and can potentially block an upgrade by preventing a new job, which is required for the upgrade, from being generated. Run the following command to delete the job:

    kubectl delete job platform-header-kubectl -n kube-system --ignore-not-found
    
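
    To confirm that the job is gone before you continue, you can optionally run the following check; it prints nothing when no matching job remains:

      kubectl get jobs -n kube-system | grep platform-header-kubectl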
  10. Deploy your environment by completing the following steps:

    1. If you enabled single sign-on (SSO) in a previous release, you must disable it before you upgrade your cluster. For more information about disabling SSO, see Configuring single sign-on. If you do not disable SSO before upgrade, or if the platform-auth container fails to start after you upgrade your cluster, see platform-auth container fails to start.

    2. Change to the cluster folder in your installation directory.

      cd /<new_installation_directory>/cluster
      
    3. Prepare the cluster for upgrade:

      For upgrading to IBM Cloud Private version 3.2.1, run the following command:

      • For Linux x86_64:
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-amd64:3.2.1-ee upgrade-prepare
        
      • For Linux on Power (ppc64le):
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-ppc64le:3.2.1-ee upgrade-prepare
        
      • For Linux on IBM Z and LinuxONE:
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-s390x:3.2.1-ee upgrade-prepare
        

      For upgrading to an IBM Cloud Private fix pack version, run the following command. These commands use the 3.2.1.2203 fix pack as an example.

      If you are upgrading to the 3.2.1.2003 fix pack to prepare for applying a 3.2.2.x fix pack, replace 3.2.1.2203 with 3.2.1.2003 in the command that you run. If you are upgrading to the 3.2.2.2203 fix pack from the 3.2.2.2105 or 3.2.2.2008 fix pack, replace the value with 3.2.2.2203.

      • For Linux x86_64:
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-amd64:3.2.1.2203-ee upgrade-prepare
        
      • For Linux on Power (ppc64le):
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-ppc64le:3.2.1.2203-ee upgrade-prepare
        
      • For Linux on IBM Z and LinuxONE:
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-s390x:3.2.1.2203-ee upgrade-prepare
        

      If the cluster preparation fails, review the error message and resolve any issues. Then, run the upgrade-prepare command again.

    4. Upgrade Kubernetes:

      For upgrading to IBM Cloud Private version 3.2.1, run the following command:

      • For Linux x86_64:
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-amd64:3.2.1-ee upgrade-k8s

      • For Linux on Power (ppc64le):
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-ppc64le:3.2.1-ee upgrade-k8s

      • For Linux on IBM Z and LinuxONE:
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-s390x:3.2.1-ee upgrade-k8s

      For upgrading to an IBM Cloud Private fix pack version, run the following command. These commands use the 3.2.1.2203 fix pack as an example.

      If you are upgrading to the 3.2.1.2003 fix pack to prepare for applying a 3.2.2.x fix pack, replace 3.2.1.2203 with 3.2.1.2003 in the command that you run. If you are upgrading to the 3.2.2.2203 fix pack from the 3.2.2.2105 or 3.2.2.2008 fix pack, replace the value with 3.2.2.2203.

      • For Linux x86_64:
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-amd64:3.2.1.2203-ee upgrade-k8s

      • For Linux on Power (ppc64le):
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-ppc64le:3.2.1.2203-ee upgrade-k8s

      • For Linux on IBM Z and LinuxONE:
        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-s390x:3.2.1.2203-ee upgrade-k8s

      If the Kubernetes upgrade fails, review the error message and resolve any issues. Then, run the upgrade-k8s command again.

    5. Upgrade the charts:

      For upgrading to IBM Cloud Private version 3.2.1, run the following command:

      • For Linux x86_64:

        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-amd64:3.2.1-ee upgrade-chart
        
      • For Linux on Power (ppc64le):

        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-ppc64le:3.2.1-ee upgrade-chart
        
      • For Linux on IBM Z and LinuxONE:

        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-s390x:3.2.1-ee upgrade-chart
        

      For upgrading to an IBM Cloud Private fix pack version, run the following command. These commands use the 3.2.1.2203 fix pack as an example.

      If you are upgrading to the 3.2.1.2003 fix pack to prepare for applying a 3.2.2.x fix pack, replace 3.2.1.2203 with 3.2.1.2003 in the command that you run. If you are upgrading to the 3.2.2.2203 fix pack from the 3.2.2.2105 or 3.2.2.2008 fix pack, replace the value with 3.2.2.2203.

      • For Linux x86_64:

        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-amd64:3.2.1.2203-ee upgrade-chart
        
      • For Linux on Power (ppc64le):

        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-ppc64le:3.2.1.2203-ee upgrade-chart
        
      • For Linux on IBM Z and LinuxONE:

        sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
        ibmcom/icp-inception-s390x:3.2.1.2203-ee upgrade-chart
        

      If the chart upgrade fails, review the error message and resolve any issues. Then, run the upgrade-chart command again.

    6. If GlusterFS is installed in your cluster, you must upgrade the GlusterFS client to version 4.1.5.

    7. Update the NGINX ingress rewrite-target annotation. For more information, see NGINX ingress rewrite-target annotation fails when you upgrade to IBM Cloud Private Version 3.2.1.

  11. Verify the status of your upgrade.

    • If the upgrade succeeded, the access information for your cluster is displayed:

      UI URL is https://<Cluster Master Host>:<Cluster Master API Port>
      

      The <Cluster Master Host>:<Cluster Master API Port> value is defined in Master endpoint.

    • If you encounter errors, see Troubleshooting.
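
    You can also confirm the Kubernetes level from the boot node. For example, after a fix pack upgrade that includes a Kubernetes update, verify that each node reports the expected version:

      kubectl get nodes -o wide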

  12. Clear your browser cache.

Post-upgrade

  1. If you customized the oidc-issuer-url before upgrade, you must add the customized oidc-issuer-url value to the redirect_uris list in the platform-oidc-registration.json file and reregister the OIDC client after you upgrade your cluster. For more information, see Customizing the cluster access URL.
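
    For example, the following is a hedged sketch of what the updated redirect_uris entry in platform-oidc-registration.json might look like; the host name and callback path are illustrative and must match your customized console URL:

      "redirect_uris": [
        "https://<custom_console_url>/auth/liberty/callback"
      ],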

  2. If you have either applications that use GPU resources or a resource quota for GPU resources, you need to manually update the application or resource quota with the new GPU resource name nvidia.com/gpu.

    • For applications that use GPU resources, follow the steps in Creating a deployment with attached GPU resources to run a sample GPU application. For your own GPU application, you need to update the application to use the new GPU resource name nvidia.com/gpu. For example, to update the deployment properties, you can use either the management console (see Modifying a deployment) or the kubectl CLI.
    • To update the resource quota for GPU resources, follow the steps in Setting resource quota to set a resource quota for your namespace. For upgrading, you need to update the resource quota to use the GPU resource name nvidia.com/gpu. For example, you can set the GPU quota to requests.nvidia.com/gpu: "2".
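
    For example, the following is a minimal sketch of a namespace resource quota that uses the new GPU resource name. The quota name, namespace, and GPU count are illustrative:

      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: gpu-quota              # illustrative name
        namespace: <your_namespace>  # replace with your namespace
      spec:
        hard:
          requests.nvidia.com/gpu: "2"

    You can apply the file with kubectl apply -f <file_name>.yaml and verify the result with kubectl describe quota -n <your_namespace>.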
  3. Access your cluster. From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.

  4. Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.

  5. Back up the boot node. Copy your /<new_installation_directory>/cluster directory to a secure location.

  6. Clean up the obsolete charts. In the IBM Cloud Private 3.2.1 release, some charts, such as mariadb, unified-router, and heapster, are removed, but they are not automatically deleted in case you want to revert to the previous release. Only after you verify the upgrade and confirm that the cluster is operating well, run the following commands to remove the obsolete charts:

     helm delete --purge --tls --timeout=600 mariadb
     helm delete --purge --tls --timeout=600 heapster
     helm delete --purge --tls --timeout=600 unified-router
    

    Note: Manually delete all persistent volumes and persistent volume claims for mariadb after you remove its chart:

     kubectl delete pvc -n kube-system <mariadb_pvc>
     kubectl delete pv <mariadb_pv>
    
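
    To find the names to delete, you can first list the mariadb volumes, for example:

      kubectl get pvc -n kube-system | grep mariadb
      kubectl get pv | grep mariadb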
  7. If you disabled single sign-on (SSO), re-enable it. For more information, see Configuring single sign-on.

  8. If you upgraded to the 3.2.2.2203 or an earlier 3.2.2.x fix pack, which includes an updated version of Kubernetes, you might need to complete some additional steps:

    • If your environment includes applications, the application deployables might now have a Failed status. To resolve this issue, restart any appmgr (multicluster-endpoint/endpoint-appmgr-<string>) pods on the managed clusters that include failed deployables to refresh the status.

    • After you upgrade to the fix pack, the vulnerability-advisor-cos-indexer pod might be in CrashLoopBackOff status. This status can occur due to an inconsistent pod restart of Kafka and MinIO. To resolve this issue and refresh the pod status, restart the following pods:

      • vulnerability-advisor-cos-indexer
      • vulnerability-advisor-minio
      • vulnerability-advisor-kafka

      To restart and reconfigure the pods, run the following command for each pod. Replace pod_name with the name of the pod to delete and restart.

      kubectl delete pod pod_name -n kube-system
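
      After the pods restart, you can confirm that they return to Running status, for example:

        kubectl get pods -n kube-system | grep vulnerability-advisor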