Upgrading from IBM Cloud Private Version 2.1.0.3 to 3.1.0

You can upgrade IBM® Cloud Private from version 2.1.0.3 to 3.1.0.

You can upgrade from version 2.1.0.3 only. If you use an earlier version of IBM Cloud Private, you must upgrade to version 2.1.0.3 first. See Upgrading and reverting in the IBM Cloud Private Version 2.1.0.3 documentation.

Note:

During the upgrade process, you cannot access applications or the IBM Cloud Private management console. You also cannot set cloud provider options, such as configuring a vSphere cloud provider, or choose to use NSX-T.

  1. Log in to the boot node as a user with root permissions. The boot node is usually your master node. For more information about node types, see Architecture. During installation, you specify the IP addresses for each node type.

  2. Download the installation files for IBM Cloud Private. These files are available for download from the IBM Passport Advantage® website.

    • For a Linux® x86_64 cluster, download the ibm-cloud-private-x86_64-3.1.0.tar.gz file.
    • For a Linux® on Power® (ppc64le) cluster, download the ibm-cloud-private-ppc64le-3.1.0.tar.gz file.
  3. Extract the images and load them into Docker. Extracting the images might take a few minutes.

    • For Linux® x86_64, run this command:

      tar xf ibm-cloud-private-x86_64-3.1.0.tar.gz -O | sudo docker load
      
    • For Linux® on Power® (ppc64le), run this command:

      tar xf ibm-cloud-private-ppc64le-3.1.0.tar.gz -O | sudo docker load
      
  4. Create an installation directory and copy the cluster directories from the previous installation directory to the new IBM Cloud Private cluster folder. Use a different installation directory than you used for the previous version. For example, to store the configuration files in /opt/ibm-cloud-private-3.1.0, run the following commands:

    mkdir -p /opt/ibm-cloud-private-3.1.0
    cd /opt/ibm-cloud-private-3.1.0
    cp -r /<installation_directory>/cluster .
    

    Note: /<installation_directory> is the full path to your version 2.1.0.3 installation directory, and /<new_installation_directory> is the full path to your version 3.1.0 installation directory.

    Important: If your version 2.1.0.3 installation was itself upgraded from version 2.1.0.2, you must delete the upgrade_version file and the .upgrade directory in the /<new_installation_directory> directory.
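
    For example, you can remove them with the following commands (a sketch; substitute your actual version 3.1.0 installation path for /<new_installation_directory>):

    rm -f /<new_installation_directory>/upgrade_version
    rm -rf /<new_installation_directory>/.upgrade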

  5. Manually update the /<new_installation_directory>/cluster/config.yaml file.

    • In the 3.1.0 release, disabled_management_services is converted into a dict parameter, management_services, to allow fine-grained control of management services. If you have disabled_management_services in your cluster/config.yaml file, update the entry from the old format to the new format, and then delete the old disabled_management_services entry from your cluster/config.yaml file. See the following example:

      Old format:

      disabled_management_services: ["istio", "vulnerability-advisor", "custom-metrics-adapter"]
      

      New format:

      management_services:
       istio: disabled
       vulnerability-advisor: disabled
       custom-metrics-adapter: disabled
      

      Note: The vulnerability-advisor parameter is disabled by default in the 3.1.0 release. If you enabled vulnerability-advisor in the 2.1.0.3 release, you must change the value to enabled, as shown in the following management_services example:

      management_services:
       vulnerability-advisor: enabled
      
    • In the 3.1.0 release, upgrading audit-logging charts and storage-glusterfs charts is not supported. You must disable the audit-logging and storage-glusterfs charts in management_services. Your config.yaml file might resemble the following information:

      management_services:
       audit-logging: disabled
       storage-glusterfs: disabled
      

      The audit logging chart can be installed after capacity planning for auditing is done. It also requires that logging is deployed to the kube-system namespace with security enabled. In 2.1.0.3, logging is deployed to the kube-system namespace without security enabled. Therefore, to deploy the audit-logging chart, you must uninstall logging and then reinstall it in 3.1.0.

      After you upgrade to 3.1.0, complete the following steps to deploy an audit-logging chart:

      1. Log in to the IBM Cloud Private CLI, or cloudctl:

        cloudctl login -a https://mycluster.icp:8443 -u admin --skip-ssl-validation
        
      2. Get the logging values by running the following command:

        helm get --tls values logging > old-values.yaml
        
      3. Create a text file named new-values.yaml with the following contents:

        logstash:
          heapSize: "512m"
          memoryLimit: "1024Mi"
        elasticsearch:
          client:
            heapSize: "1024m"
            memoryLimit: "1536Mi"
          data:
            heapSize: "1536m"
            memoryLimit: "3072Mi"
            storage:
              persistent: true
          master:
            heapSize: "1024m"
            memoryLimit: "1536Mi"
        kibana:
          install: true
        security:
          enabled: true
          ca:
            origin: external
            external:
              secretName: cluster-ca-cert
              certFieldName: tls.crt
              keyFieldName: tls.key
        general:
          mode: managed
          nameOverride: elk
        
      4. Run the following commands to delete and reinstall logging:

        helm delete --tls --purge logging
        helm install --namespace kube-system --tls --name logging -f ./old-values.yaml -f ./new-values.yaml stable/ibm-icplogging
        

        Note: Ensure you deploy Helm chart version 2.0.0.
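
        To confirm the chart version that deployed, you can list the release (an optional check; the CHART column shows the version, for example ibm-icplogging-2.0.0):

        helm list --tls logging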

        Run the following command to install audit-logging:

        helm install audit-logging --name audit-logging --namespace kube-system --tls
        

        Note: If you cannot find the logging-2.0.0 and audit-logging-3.1.0 charts in the mgmt-repo, or if you receive an error during the logging or audit logging chart installation process, you can download the charts and install them by using the tar files.

        • You can run the following command to download all charts in the addon directory:

          docker run -e LICENSE=accept --net=host -v "$(pwd)":/data ibmcom/icp-inception-amd64:3.1.0-ee cp -r /addon /data/
          
        • You can use the downloaded ibm-icplogging-2.0.0.tgz and the audit-logging-3.1.0.tgz charts in the Helm installation command.

        • To install the logging chart, run the following command:

          helm install ibm-icplogging-2.0.0.tgz --tls --namespace kube-system --name logging -f ./old-values.yaml -f ./new-values.yaml
          

          Note that the old-values.yaml and the new-values.yaml are described in Step 5.

        • To install the audit-logging chart, run the following command:

          helm install audit-logging-3.1.0.tgz --tls --namespace kube-system --name audit-logging
          

          Note that the logging chart must be installed before the audit-logging chart.

    • If you enabled Istio for IBM Cloud Private 2.1.0.3, run the following commands to ensure that the istio chart and the related CustomResourceDefinitions are deleted completely before you upgrade:

       helm delete istio --purge --tls
       crds_exist="$(kubectl get customresourcedefinition | awk '{print $1}' | grep 'istio')"
       kubectl delete customresourcedefinition $crds_exist
      

      Note: Istio is not upgraded with IBM Cloud Private from the 2.1.0.3 release to the 3.1.0 release, and must be reinstalled separately.
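
      After the deletion completes, you can confirm that no Istio CustomResourceDefinitions remain (an optional check; the command prints nothing when the cleanup succeeded):

       kubectl get customresourcedefinition | grep istio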

    • For IBM Cloud Private version 3.1.0, the list of management services that are disabled by default changed. For the new default list, see General settings. You can add any additional services that you want to disable to this new default list.

    • For high availability clusters, the vip_manager option is etcd by default in 3.1.0. If you change it to either keepalived or ucarp, the cluster experiences a brief outage for several seconds while the new virtual IP manager takes over the assignment of the address.

    • In the 3.1.0 release, a new format is used for the settings that configure the Docker runtime. If you customized the configuration options for the Docker runtime in version 2.1.0.3, you might need to migrate your settings to the new format. For example:

      # Docker configuration option, more options see
      # https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file
      docker_config:
       log-opts:
         max-size: "100m"
         max-file: "10"
      
      # Docker environment setup
      docker_env:
       - HTTP_PROXY=http://1.2.3.4:3128
       - HTTPS_PROXY=http://1.2.3.4:3128
       - NO_PROXY=localhost,127.0.0.1,{{ cluster_CA_domain }}
      
      # Install/upgrade docker version
      docker_version: 18.03.1
      
      # Install Docker automatically or not
      install_docker: true
      
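      For reference, docker_config uses the Docker daemon configuration-file format that the comment links to; the log-opts values above correspond to the following /etc/docker/daemon.json fragment (an illustration only; the installer manages this file for you):

      {
        "log-opts": {
          "max-size": "100m",
          "max-file": "10"
        }
      }
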
  6. Move the image files for your cluster to the /<new_installation_directory>/cluster/images folder.

    • For Linux® x86_64, run this command:

      sudo mv /<path_to_images_file>/ibm-cloud-private-x86_64-3.1.0.tar.gz cluster/images/
      
    • For Linux® on Power® (ppc64le), run this command:

      sudo mv /<path_to_images_file>/ibm-cloud-private-ppc64le-3.1.0.tar.gz cluster/images/
      

      If you have IBM® Z worker nodes in your cluster, run this command:

      sudo mv /<path_to_images_file>/ibm-cloud-private-s390x-3.1.0.tar.gz cluster/images/
      

    In these commands, /<path_to_images_file> is the path to the images file.
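
    You can confirm that the archive is in place before you continue (an optional check):

    ls -l /<new_installation_directory>/cluster/images/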

  7. Deploy your environment by completing the following steps:

    1. Change to the cluster folder in your installation directory.

      cd /<new_installation_directory>/cluster
      
    2. Prepare the cluster for upgrade.

      • Prepare the cluster for upgrade for Linux® x86_64. Run the following command:
      sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception-amd64:3.1.0-ee upgrade-prepare
      
      • Prepare the cluster for upgrade for Linux® on Power® (ppc64le). Run the following command:
      sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception-ppc64le:3.1.0-ee upgrade-prepare
      

      If the cluster preparation fails, review the error message and resolve any issues. Then, remove the cluster/.install.lock file, and run the upgrade-prepare command again.
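
      For example, you can remove the lock file with the following command (a sketch; substitute your version 3.1.0 installation path):

      rm -f /<new_installation_directory>/cluster/.install.lock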

    3. Upgrade Kubernetes.

      • Upgrade Kubernetes for Linux® x86_64. Run the following command:
      sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception-amd64:3.1.0-ee upgrade-k8s
      
      • Upgrade Kubernetes for Linux® on Power® (ppc64le). Run the following command:
      sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception-ppc64le:3.1.0-ee upgrade-k8s
      
      • If the Kubernetes upgrade fails, review the error message and resolve any issues. Then, roll back the Kubernetes services and run the upgrade-k8s command again.
    4. Upgrade the charts.

      • Upgrade the charts for Linux® x86_64. Run the following command:
      sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception-amd64:3.1.0-ee upgrade-chart
      
      • Upgrade the charts for Linux® on Power® (ppc64le). Run the following command:
      sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception-ppc64le:3.1.0-ee upgrade-chart
      
      • If the chart upgrade fails, review the error message and resolve any issues. Then, run the upgrade-chart command again.
    5. If you want to enable Istio for IBM Cloud Private 3.1.0, change the value to enabled, as shown in the following management_services example:

      management_services:
       istio: enabled
      
    6. Deploy the istio chart by running the following command, replacing ARCH with amd64 or ppc64le according to your CPU architecture:

      sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception-ARCH:3.1.0-ee addon
      
  8. Verify the status of your upgrade.

    • If the upgrade succeeded, the access information for your cluster is displayed. You can also verify the cluster directly, as shown in the check after this list.

      In the URL https://master_ip:8443, master_ip is the IP address of the master node for your IBM Cloud Private cluster.

      Note: If you created your cluster within a private network, use the public IP address of the master node to access the cluster.

    • If you encounter errors, see Troubleshooting.
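
    To verify the cluster directly, check that all nodes are ready and that the management pods are running (an optional check; the exact pod list varies by configuration):

    kubectl get nodes
    kubectl get pods --namespace kube-system
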
  9. Security might be disabled due to selections at installation time or if you upgraded from an earlier release where security was not available. If security is not enabled, then you must redeploy the logging stack. For more information, see Enabling security for logging services.

  10. Clear your browser cache.

  11. If you have applications that use either GPU resources or a resource quota for GPU resources, you must manually update the application or resource quota with the new GPU resource name nvidia.com/gpu. A sketch of both updates follows this list.

    • For applications that use GPU resources, follow the steps in Creating a deployment with attached GPU resources to run a sample GPU application. For your own GPU application, you need to update the application to use the new GPU resource name nvidia.com/gpu. For example, to update the deployment properties, you can use either the management console (see Modifying a deployment) or the kubectl CLI.
    • To update the resource quota for GPU resources, follow the steps in Setting resource quota to set a resource quota for your namespace. For upgrading, you need to update the resource quota to use the GPU resource name nvidia.com/gpu. For example, you can set the GPU quota to requests.nvidia.com/gpu: "2".
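
    The following fragments sketch both updates (illustrations only; names such as gpu-quota are hypothetical). A container specification that requests one GPU under the new resource name:

    resources:
      limits:
        nvidia.com/gpu: 1

    A ResourceQuota that caps GPU requests for a namespace:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: gpu-quota
    spec:
      hard:
        requests.nvidia.com/gpu: "2"
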
  12. Access your cluster. From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.

  13. Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.

  14. Back up the boot node. Copy your /<new_installation_directory>/cluster directory to a secure location.
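
    For example, you can archive the directory and copy it to another system (a sketch; the host and path placeholders are illustrative):

    tar -czf icp-3.1.0-cluster-backup.tar.gz -C /<new_installation_directory> cluster
    scp icp-3.1.0-cluster-backup.tar.gz <user>@<backup_host>:/<backup_path>/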

  15. If you use Cloud Automation Manager in your IBM Cloud Private cluster, you must also upgrade it. See Upgrading Cloud Automation Manager.