Upgrading IBM Cloud Pak for Data (Upgrading from Version 4.5 to Version 4.7)

Important: IBM Cloud Pak for Data Version 4.7 will reach end of support (EOS) on 31 July 2025. For more information, see the Discontinuance of service announcement for IBM Cloud Pak for Data Version 4.X.

Upgrade to IBM Software Hub Version 5.1 before IBM Cloud Pak for Data Version 4.7 reaches end of support. For more information, see Upgrading IBM Software Hub in the IBM Software Hub Version 5.1 documentation.

After you upgrade the IBM Cloud Pak foundational services for the instance, you can upgrade the IBM Cloud Pak for Data control plane and services.

Upgrade phase
This task is the last phase of the upgrade. The full sequence of phases is:
  1. Updating your client workstation
  2. Updating your cluster
  3. Collecting required information
  4. Preparing to run an upgrade from a private container registry
  5. Migrating to the private topology
  6. Preparing to upgrade an instance of Cloud Pak for Data
  7. Upgrading an instance of Cloud Pak for Data (you are here)
Who needs to complete this task?

Instance administrator An instance administrator can complete this task.

When do you need to complete this task?

Repeat as needed If you have multiple instances of Cloud Pak for Data on the cluster, complete this task for each instance that you plan to upgrade to Version 4.7.

Before you begin

Best practice: You can run the commands in this task exactly as written if you use the installation environment variables. Ensure that you added the new environment variables to your environment variables script, as described in Updating your environment variables.

In addition, ensure that you source the environment variables before you run the commands in this task.

If you have any of the following services installed in this instance of Cloud Pak for Data, complete the specified steps to prepare the instance for upgrade:
Analytics Engine powered by Apache Spark

The following steps apply only to environments that use Red Hat® OpenShift® Data Foundation storage.

  1. Run the following command to determine whether the blockStorageClass parameter is set to ocs-storagecluster-ceph-rbd in the ZenService custom resource:
    oc get ZenService lite-cr -n ${PROJECT_CPD_INST_OPERANDS} -o yaml | grep blockStorageClass
    • If the command returns ocs-storagecluster-ceph-rbd, proceed to the next step.
    • If the command returns a value other than ocs-storagecluster-ceph-rbd, you can skip the remaining steps.
  2. Run the following command to determine whether the blockStorageClass parameter is set to ocs-storagecluster-ceph-rbd in the ae custom resource:
    oc get ae analyticsengine-sample -n ${PROJECT_CPD_INST_OPERANDS} -o yaml | grep blockStorageClass
    • If the command returns ocs-storagecluster-ceph-rbd, no changes are needed.
    • If the command returns a value other than ocs-storagecluster-ceph-rbd, proceed to the next step to update the Analytics Engine powered by Apache Spark custom resource.
    • If the command returns an empty response, proceed to the next step to update the Analytics Engine powered by Apache Spark custom resource.
  3. Update the blockStorageClass property in the Analytics Engine powered by Apache Spark custom resource:
    oc patch ae analyticsengine-sample \
    --namespace=${PROJECT_CPD_INST_OPERANDS} \
    --type merge \
    --patch '{"spec": {"blockStorageClass": "ocs-storagecluster-ceph-rbd"}}'
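The three steps above can be combined into a small script. The helper below is a hypothetical sketch (the function name and wiring are illustrative, not part of cpd-cli): it decides whether the ae custom resource needs the patch, given the blockStorageClass values read from the two custom resources.

```shell
# Hypothetical helper (not part of cpd-cli): decide whether the ae custom
# resource needs the blockStorageClass patch, given the values found in the
# ZenService and ae custom resources in steps 1 and 2.
ae_needs_patch() {
  zen_class="$1"
  ae_class="$2"
  # Patch only when the ZenService uses ODF block storage and the ae custom
  # resource does not already match it (including when the value is unset).
  [ "$zen_class" = "ocs-storagecluster-ceph-rbd" ] && \
    [ "$ae_class" != "ocs-storagecluster-ceph-rbd" ]
}

# Example wiring, assuming the oc commands from the steps above:
# zen_class=$(oc get ZenService lite-cr -n ${PROJECT_CPD_INST_OPERANDS} \
#   -o jsonpath='{.spec.blockStorageClass}')
# ae_class=$(oc get ae analyticsengine-sample -n ${PROJECT_CPD_INST_OPERANDS} \
#   -o jsonpath='{.spec.blockStorageClass}')
# if ae_needs_patch "$zen_class" "$ae_class"; then
#   oc patch ae analyticsengine-sample \
#     --namespace=${PROJECT_CPD_INST_OPERANDS} \
#     --type merge \
#     --patch '{"spec": {"blockStorageClass": "ocs-storagecluster-ceph-rbd"}}'
# fi
```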

Watson Machine Learning

If Watson Machine Learning is installed, run the following commands to force the Watson Machine Learning operator to run a reconcile loop.

  1. Put the Watson Machine Learning service in maintenance mode:
    oc patch wmlbase wml-cr \
    --namespace=${PROJECT_CPD_INST_OPERANDS} \
    --type merge \
    --patch '{"spec": {"ignoreForMaintenance": true}}'
  2. Confirm that the service is in maintenance mode:
    oc get wmlbase \
    --namespace=${PROJECT_CPD_INST_OPERANDS}
    The command should return output with the following format:
    NAME     VERSION   BUILD   STATUS           RECONCILED   AGE
    wml-cr   4.5.x     4.5.x   In-Maintenance   4.5.x        4d21h
  3. Take the Watson Machine Learning service out of maintenance mode:
    oc patch wmlbase wml-cr \
    --namespace=${PROJECT_CPD_INST_OPERANDS} \
    --type merge \
    --patch '{"spec": {"ignoreForMaintenance": false}}'
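If you script these steps, you can wait for the STATUS column to show In-Maintenance before toggling maintenance mode back off. The parser below is a hypothetical sketch (the function name is illustrative); it reads the oc get wmlbase table output from step 2 on stdin.

```shell
# Hypothetical helper (not part of cpd-cli): succeed when the wml-cr row of
# the `oc get wmlbase` table shows In-Maintenance in the STATUS column.
wml_in_maintenance() {
  awk '$1 == "wml-cr" { print $4 }' | grep -q '^In-Maintenance$'
}

# Example wiring, assuming the commands from the steps above:
# until oc get wmlbase --namespace=${PROJECT_CPD_INST_OPERANDS} | wml_in_maintenance; do
#   sleep 10
# done
# oc patch wmlbase wml-cr \
#   --namespace=${PROJECT_CPD_INST_OPERANDS} \
#   --type merge \
#   --patch '{"spec": {"ignoreForMaintenance": false}}'
```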

Watson Machine Learning Accelerator

Starting in Cloud Pak for Data Version 4.7, Watson Machine Learning Accelerator notebooks are not supported.

If you use Watson Machine Learning Accelerator notebooks, you must export your notebooks before you upgrade Watson Machine Learning Accelerator. For more information, see Working with Watson Machine Learning Accelerator notebooks in IBM Cloud Pak for Data in the Watson Machine Learning Accelerator documentation.


Watson Knowledge Studio

If Watson Knowledge Studio is installed, prepare the service for upgrade:

  1. Update the Watson Knowledge Studio custom resource to include information about the required block and file storage classes:
    oc patch KnowledgeStudio wks \
    --namespace=${PROJECT_CPD_INST_OPERANDS} \
    --type merge \
    --patch "{\"spec\": {\"global\": {\"blockStorageClass\": \"${STG_CLASS_BLOCK}\", \"fileStorageClass\": \"${STG_CLASS_FILE}\"}}}"
  2. Scale down the MinIO pods for Watson Knowledge Studio:
    oc patch MinioCluster wks-minio \
    --namespace=${PROJECT_CPD_INST_OPERANDS} \
    --type merge \
    --patch '{"spec":{"replicasForDev":0,"replicasForProd":0}}'
    oc patch KnowledgeStudio wks \
    --namespace=${PROJECT_CPD_INST_OPERANDS} \
    --type merge \
    --patch '{"spec":{"minio":{"replicas":0}}}'
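To confirm that the scale-down took effect, you can count the remaining MinIO pods. This is a hypothetical check, assuming the Watson Knowledge Studio MinIO pod names start with wks-minio:

```shell
# Hypothetical check (assumes WKS MinIO pod names start with "wks-minio"):
# print the number of wks-minio pods found in `oc get pods` output on stdin.
minio_pods_remaining() {
  # grep -c prints the match count; `|| true` keeps the function's exit
  # status at 0 when the count is 0 (grep exits nonzero on no matches).
  grep -c '^wks-minio' || true
}

# Example wiring:
# oc get pods --namespace=${PROJECT_CPD_INST_OPERANDS} | minio_pods_remaining
# Expect 0 once the scale-down is complete.
```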

About this task

You can choose whether you want to:
  • Upgrade the Cloud Pak for Data control plane before you upgrade the services in the instance.
  • Upgrade the Cloud Pak for Data control plane and the services at the same time.
Remember: All of the software in the instance must be installed at the same version.

When you run the cpd-cli manage apply-olm command, all of the operators in the instance are upgraded to the same version.

Procedure

  1. Run the cpd-cli manage login-to-ocp command to log in to the cluster as a user with sufficient permissions to complete this task. For example:
    cpd-cli manage login-to-ocp \
    --username=${OCP_USERNAME} \
    --password=${OCP_PASSWORD} \
    --server=${OCP_URL}
    Tip: The login-to-ocp command takes the same input as the oc login command. Run oc login --help for details.
  2. Review the license terms for Cloud Pak for Data.
    The Cloud Pak for Data licenses are available online. Run the appropriate command to get the URL for your license:
    Enterprise Edition
    cpd-cli manage get-license \
    --release=${VERSION} \
    --license-type=EE

    Standard Edition
    cpd-cli manage get-license \
    --release=${VERSION} \
    --license-type=SE

  3. Upgrade the operators in the operators project for the instance.
    Tip: Before you run this command against your cluster, you can preview the oc commands that this command will issue on your behalf by running the command with the --preview=true option.

    The oc commands are saved to the preview.sh file in the work directory.

    The following command upgrades all of the operators in the operators project.

    cpd-cli manage apply-olm \
    --release=${VERSION} \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --upgrade=true
    Wait for the cpd-cli to return the following message before proceeding to the next step:
    [SUCCESS]... The apply-olm command ran successfully.
  4. Confirm that the operator pods are Running or Completed:
    oc get pods --namespace=${PROJECT_CPD_INST_OPERATORS}
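If you script this check, the filter below (a hypothetical sketch, not part of cpd-cli) prints any pod whose STATUS is neither Running nor Completed, reading the oc get pods table on stdin:

```shell
# Hypothetical helper: print the name of every pod whose STATUS column is
# neither Running nor Completed; the header row (NR == 1) is skipped.
pods_not_ready() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1 }'
}

# Example wiring, assuming the command from the step above:
# oc get pods --namespace=${PROJECT_CPD_INST_OPERATORS} | pods_not_ready
# An empty result means all operator pods are Running or Completed.
```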
  5. Upgrade the operands in the operands project for the instance.
    Tip: Before you run this command against your cluster, you can preview the oc commands that this command will issue on your behalf by running the command with the --preview=true option.

    The oc commands are saved to the preview.sh file in the work directory.

    The command that you run depends on the storage on your cluster:


    Red Hat OpenShift Data Foundation storage

    Create the custom resources for the specified components.

    Cloud Pak for Data control plane and services

    By default, the apply-cr command upgrades up to 4 components at the same time. You can adjust this setting by specifying the --parallel_num option.

    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=${COMPONENTS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true
    Cloud Pak for Data control plane only
    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=cpd_platform \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true

    IBM Storage Fusion Data Foundation storage

    Create the custom resources for the specified components.

    Cloud Pak for Data control plane and services

    By default, the apply-cr command upgrades up to 4 components at the same time. You can adjust this setting by specifying the --parallel_num option.

    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=${COMPONENTS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true
    Cloud Pak for Data control plane only
    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=cpd_platform \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true

    IBM Storage Fusion storage

    Create the custom resources for the specified components.

    When you use IBM Storage Fusion storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc.

    Cloud Pak for Data control plane and services

    By default, the apply-cr command upgrades up to 4 components at the same time. You can adjust this setting by specifying the --parallel_num option.

    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=${COMPONENTS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true
    Cloud Pak for Data control plane only
    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=cpd_platform \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true

    IBM Storage Scale Container Native storage

    Create the custom resources for the specified components.

    When you use IBM Storage Scale Container Native storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc.

    Cloud Pak for Data control plane and services

    By default, the apply-cr command upgrades up to 4 components at the same time. You can adjust this setting by specifying the --parallel_num option.

    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=${COMPONENTS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true
    Cloud Pak for Data control plane only
    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=cpd_platform \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true

    Portworx storage

    Create the custom resources for the specified components.

    Cloud Pak for Data control plane and services

    By default, the apply-cr command upgrades up to 4 components at the same time. You can adjust this setting by specifying the --parallel_num option.

    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=${COMPONENTS} \
    --storage_vendor=portworx \
    --license_acceptance=true \
    --upgrade=true
    Cloud Pak for Data control plane only
    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=cpd_platform \
    --storage_vendor=portworx \
    --license_acceptance=true \
    --upgrade=true

    NFS storage

    When you use NFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically managed-nfs-storage.

    Create the custom resources for the specified components.

    Cloud Pak for Data control plane and services

    By default, the apply-cr command upgrades up to 4 components at the same time. You can adjust this setting by specifying the --parallel_num option.

    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=${COMPONENTS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true
    Cloud Pak for Data control plane only
    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=cpd_platform \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true

    AWS EFS storage only

    Create the custom resources for the specified components.

    When you use only EFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically efs-nfs-client.

    Cloud Pak for Data control plane and services

    By default, the apply-cr command upgrades up to 4 components at the same time. You can adjust this setting by specifying the --parallel_num option.

    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=${COMPONENTS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true
    Cloud Pak for Data control plane only
    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=cpd_platform \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true

    AWS EFS and EBS storage

    Create the custom resources for the specified components.

    Cloud Pak for Data control plane and services

    By default, the apply-cr command upgrades up to 4 components at the same time. You can adjust this setting by specifying the --parallel_num option.

    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=${COMPONENTS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true
    Cloud Pak for Data control plane only
    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=cpd_platform \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --upgrade=true

    Wait for the cpd-cli to return the following message before proceeding to the next step:
    [SUCCESS]... The apply-cr command ran successfully.
  6. Confirm that the status of the operands is Completed:
    cpd-cli manage get-cr-status \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
  7. Clean up any failed or pending operand requests in the operands project:
    1. Get the list of operand requests with the format <component>-requests-<component>:
      oc get operandrequest --namespace=${PROJECT_CPD_INST_OPERANDS} | grep requests

      If the preceding command returns any operand requests in the Failed or Pending state, proceed to the next step.

    2. Delete each operand request in the Failed or Pending state:

      Replace <operand-request-name> with the name of the operand request that you want to delete.

      oc delete operandrequest <operand-request-name> \
      --namespace=${PROJECT_CPD_INST_OPERANDS}
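The two substeps above can be scripted. The filter below is a hypothetical sketch: it matches the words Failed or Pending anywhere on the line rather than assuming a fixed column position for the phase.

```shell
# Hypothetical helper: from `oc get operandrequest` output on stdin, print
# the name of every request whose line reports Failed or Pending; the
# header row (NR == 1) is skipped.
failed_operand_requests() {
  awk 'NR > 1 && (/Failed/ || /Pending/) { print $1 }'
}

# Example wiring, assuming the commands from the substeps above:
# for req in $(oc get operandrequest --namespace=${PROJECT_CPD_INST_OPERANDS} \
#     | failed_operand_requests); do
#   oc delete operandrequest "$req" --namespace=${PROJECT_CPD_INST_OPERANDS}
# done
```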
  8. Ask the cluster administrator to remove the instance project from the sharewith list in the ibm-cpp-config SecretShare in the shared IBM Cloud Pak foundational services operators project:
    1. Confirm the name of the instance project:
      echo $PROJECT_CPD_INST_OPERANDS
    2. Check whether the instance project is listed in the sharewith list in the ibm-cpp-config SecretShare:
      oc get secretshare ibm-cpp-config \
      --namespace=${PROJECT_CPFS_OPS} \
      -o yaml

      The command returns output with the following format:

      apiVersion: ibmcpcs.ibm.com/v1
      kind: SecretShare
      metadata:
        name: ibm-cpp-config
        namespace: ibm-common-services
      spec:
        configmapshares:
        - configmapname: ibm-cpp-config
          sharewith:
          - namespace: cpd-instance-x
          - namespace: ibm-common-services
          - namespace: cpd-operators
          - namespace: cpd-instance-y

      If the instance project is in the list, proceed to the next step. If the instance project is not in the list, no further action is required.

    3. Open the ibm-cpp-config SecretShare in the editor:
      oc edit secretshare ibm-cpp-config \
      --namespace=${PROJECT_CPFS_OPS}
    4. Remove the entry for the instance project from the sharewith list and save your changes to the SecretShare.

What to do next

Your next steps depend on which components you upgraded:
Cloud Pak for Data control plane and services
If you upgraded the Cloud Pak for Data control plane and the services in the instance, see Setting up services after install or upgrade.
Cloud Pak for Data control plane only
If you upgraded only the Cloud Pak for Data control plane, you must upgrade the services in the instance. For more information, see Services.