Upgrading to IBM Software Hub (Upgrading from Version 4.8 to Version 5.2)

To upgrade an instance of IBM Cloud Pak® for Data Version 4.8 to IBM Software Hub Version 5.2, you must upgrade the required operators and custom resources for the instance. After you upgrade the required operators for the instance, you must upgrade the operators for the services that are installed on the instance.

Upgrade phase
  • Updating your cluster
  • Updating your client workstation
  • Collecting required information
  • Preparing to run an upgrade in a restricted network
  • Preparing to run an upgrade from a private container registry
  • Installing and upgrading prerequisite software
  • Updating the shared cluster components
  • Preparing to upgrade an instance
  • Upgrading an instance (you are here)
Who needs to complete this task?

An instance administrator can complete this task.

When do you need to complete this task?

Repeat as needed: If you have multiple instances of IBM Cloud Pak for Data Version 4.8 on the cluster, complete this task for each instance that you want to upgrade to IBM Software Hub Version 5.2.

Before you begin

Best practice: You can run the commands in this task exactly as written if you use the installation environment variables script. Ensure that you added the new environment variables to your script, as described in Updating your environment variables.

In addition, ensure that you source the environment variables before you run the commands in this task.
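For example, sourcing the variables and spot-checking them might look like the following sketch. The file name cpd_vars.sh is an assumption; use the name of your own environment variables script.

```shell
# Assumption: your installation environment variables are saved in a script
# named cpd_vars.sh in the current directory (your file name may differ).
[ -f ./cpd_vars.sh ] && source ./cpd_vars.sh

# Spot-check that the key variables are set before you continue.
# An unset variable here means the commands in this task will not run as written.
for var in PROJECT_CPD_INST_OPERATORS PROJECT_CPD_INST_OPERANDS VERSION; do
  if [ -z "${!var}" ]; then
    echo "WARNING: ${var} is not set"
  fi
done
```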

Before you upgrade to IBM Software Hub, check whether the following common core services pods are running in this instance of IBM Cloud Pak for Data:

  1. Check whether the global search pods are running:
    oc get pods --namespace=${PROJECT_CPD_INST_OPERANDS} | grep elasticsea-0ac3
  2. Check whether the catalog-api pods are running:
    oc get pods --namespace=${PROJECT_CPD_INST_OPERANDS} | grep catalog-api
    • If the command returns an empty response, you are ready to upgrade IBM Software Hub.
    • If the command returns a list of pods, review the following guidance to determine how long the catalog-api service will be down during upgrade.

    When you upgrade the common core services to IBM Software Hub Version 5.2, the underlying storage for the catalog-api service is migrated to PostgreSQL.

    During the final stages of the migration, the catalog-api service is offline, and services that depend on it are unavailable. The duration of the migration depends on the number of assets and relationships that are stored in the instance, while the duration of the outage depends on the number of databases (projects, catalogs, and spaces) in the instance. In a typical upgrade scenario, the outage is significantly shorter than the overall migration.

    To determine how many databases will be migrated:

    1. Set the INSTANCE_URL environment variable to the URL of IBM Software Hub:
      export INSTANCE_URL=<URL>
      Tip: To get the URL of the web client, run the following command:
      cpd-cli manage get-cpd-instance-details \
      --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
    2. Get the credentials for the wdp-service:
      TOKEN=$(oc get -n ${PROJECT_CPD_INST_OPERANDS} secrets wdp-service-id -o yaml | grep service-id-credentials | cut -d':' -f2- | sed -e 's/ //g' | base64 -d)
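An equivalent, less fragile way to extract the same credentials uses the jsonpath output format instead of grep, cut, and sed. This sketch assumes the secret stores the credentials under the key service-id-credentials, as the command above implies:

```shell
# Extract the base64-encoded credentials directly with jsonpath, then decode them.
# Assumes the wdp-service-id secret has a data key named service-id-credentials.
TOKEN=$(oc get secret wdp-service-id \
  -n ${PROJECT_CPD_INST_OPERANDS} \
  -o jsonpath='{.data.service-id-credentials}' | base64 -d)
```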
    3. Get the number of catalogs in the instance:
      curl -sk -X GET "https://${INSTANCE_URL}/v2/catalogs?limit=10001&skip=0&include=catalogs&bss_account_id=999" -H 'accept: application/json' -H "Authorization: Basic ${TOKEN}" | jq -r '.catalogs | length'
    4. Get the number of projects in the instance:
      curl -sk -X GET "https://${INSTANCE_URL}/v2/catalogs?limit=10001&skip=0&include=projects&bss_account_id=999" -H 'accept: application/json' -H "Authorization: Basic ${TOKEN}" | jq -r '.catalogs | length'
    5. Get the number of spaces in the instance:
      curl -sk -X GET "https://${INSTANCE_URL}/v2/catalogs?limit=10001&skip=0&include=spaces&bss_account_id=999" -H 'accept: application/json' -H "Authorization: Basic ${TOKEN}" | jq -r '.catalogs | length'
    6. Add up the number of catalogs, projects, and spaces returned by the previous commands. Then, use the following table to determine approximately how long the service will be offline during the migration:
      Databases                    Downtime for migration (approximate)
      Up to 1,000 databases        6 minutes
      1,001 - 10,000 databases     20 minutes
      10,001 - 70,000 databases    60 minutes
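Steps 3 through 6 can be combined into a single pass. The count helper function in this sketch is hypothetical (not part of the product CLI); it simply wraps the curl calls shown above:

```shell
# Hypothetical helper: runs the same catalogs API query as steps 3-5 above,
# varying only the include parameter, and prints the number of results.
count() {
  curl -sk -X GET "https://${INSTANCE_URL}/v2/catalogs?limit=10001&skip=0&include=$1&bss_account_id=999" \
    -H 'accept: application/json' \
    -H "Authorization: Basic ${TOKEN}" | jq -r '.catalogs | length'
}

# Sum catalogs, projects, and spaces; an empty response counts as 0.
c=$(count catalogs); p=$(count projects); s=$(count spaces)
TOTAL=$(( ${c:-0} + ${p:-0} + ${s:-0} ))
echo "Total databases to migrate: ${TOTAL}"
```

Compare the total against the downtime table above to estimate the outage.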
  3. Save the following script on the client workstation as a file named precheck_migration.sh:
    #!/bin/bash
    
    # Default ranges for couchdb size
    SMALL=50
    MEDIUM=100
    LARGE=200
    
    echo "Performing pre-migration checks"
    
    patch_for_small()
    {
        echo -e "Run the following command to increase the CPU and memory:\n"
        cat << EOF
    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec": {
      "catalog_api_postgres_migration_threads": 4,
      "catalog_api_migration_job_resources": { 
        "requests": {"cpu": "2", "ephemeral-storage": "10Mi", "memory": "2Gi"},
        "limits": {"cpu": "6", "ephemeral-storage": "1Gi", "memory": "6Gi"}}
    }}'
    EOF
        echo
        echo "The system is ready for migration. Upgrade your cluster as usual."
    }
    
    patch_for_medium()
    {
        echo -e "Run the following command to increase the CPU and memory:\n"
        cat << EOF
    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec": {
      "catalog_api_postgres_migration_threads": 6,
      "catalog_api_migration_job_resources": { 
        "requests": {"cpu": "3", "ephemeral-storage": "10Mi", "memory": "4Gi"},
        "limits": {"cpu": "8", "ephemeral-storage": "4Gi", "memory": "8Gi"}}
    }}'
    EOF
        echo
        echo "The system is ready for migration. Upgrade your cluster as usual."
    }
    
    patch_for_large()
    {
        echo -e "Run the following command to increase the CPU and memory:\n"
        cat << EOF
    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec": {
      "catalog_api_postgres_migration_threads": 8,
      "catalog_api_migration_job_resources": { 
        "requests": {"cpu": "6", "ephemeral-storage": "10Mi", "memory": "6Gi"},
        "limits": {"cpu": "10", "ephemeral-storage": "6Gi", "memory": "10Gi"}}
    }}'
    EOF
        echo
        echo "Before you can start the upgrade, you must prepare the system for migration."
    }
    
    check_resources(){
            scale_config=$1
            # Get the capacity of the couchdb PVC and strip the Gi suffix
            pvc_size=$(oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} database-storage-wdp-couchdb-0 --no-headers | awk '{print $4}')
            size=$(awk '{print substr($0, 1, length($0)-2)}' <<< "$pvc_size")
    
            if [[ $scale_config == "small" ]];then
              if [[ "$size" -le "$SMALL" ]];then
                echo "The system is ready for migration. Upgrade your cluster as usual."
              elif [ "$size" -ge "$SMALL" ] && [ "$size" -le "$MEDIUM" ];then
                patch_for_medium
              elif [ "$size" -ge "$MEDIUM" ] && [ "$size" -le "$LARGE" ];then
                patch_for_large
              else
                patch_for_large
              fi
            elif [[ $scale_config == "medium" ]];then
              if [[ "$size" -le "$SMALL" ]];then
                patch_for_small
              elif [ "$size" -ge "$SMALL" ] && [ "$size" -le "$MEDIUM" ];then
                echo "The system is ready for migration. Upgrade your cluster as usual."
              elif [ "$size" -ge "$MEDIUM" ] && [ "$size" -le "$LARGE" ];then
                patch_for_large
              else
                patch_for_large
              fi
            elif [[ $scale_config == "large" ]];then
              if [[ "$size" -le "$SMALL" ]];then
                patch_for_small
              elif [ "$size" -ge "$SMALL" ] && [ "$size" -le "$MEDIUM" ];then
                patch_for_medium
              elif [ "$size" -ge "$MEDIUM" ] && [ "$size" -le "$LARGE" ];then
                echo "The system is ready for migration. Upgrade your cluster as usual."
              else
                patch_for_large
              fi
            fi
    }
    
    check_upgrade_case(){
            echo -e "Checking if automatic upgrade or semi-automatic upgrade is needed"
            scale_config=$(oc get ccs -n ${PROJECT_CPD_INST_OPERANDS} ccs-cr -o json | jq -r '.spec.scaleConfig')
    
            # Default case: scale config is not set, so treat it as small
            if [[ -z "${scale_config}" || "${scale_config}" == "null" ]];then
              scale_config=small
            fi
    
            check_resources $scale_config
    }
    
    check_upgrade_case
  4. Run the precheck_migration.sh script to determine whether you can run an automatic migration of the common core services or whether you need to configure the common core services to run a semi-automatic migration:
    chmod +x precheck_migration.sh
    ./precheck_migration.sh
    Take the appropriate action based on the message returned by the script:

    Message: The system is ready for migration. Upgrade your cluster as usual.
    Migration type: Automatic
    What to do next: You are ready to upgrade IBM Software Hub.
    Important: After you upgrade the services in your environment, ensure that you complete Completing the catalog-api service migration.

    Message: Run the following command to increase the CPU and memory.
    Migration type: Automatic
    What to do next:
    1. Run the patch command returned by the script.
    2. Upgrade IBM Software Hub.
    Important: After you upgrade the services in your environment, ensure that you complete Completing the catalog-api service migration.

    Messages: The script returns both of the following messages:
    • Run the following command to increase the CPU and memory.
    • Before you can start the upgrade, you must prepare the system for migration.
    Migration type: Semi-automatic
    What to do next:
    1. Run the patch command returned by the script.
    2. Run the following command to enable semi-automatic migration:
      oc patch ccs ccs-cr \
      -n ${PROJECT_CPD_INST_OPERANDS} \
      --type merge \
      --patch '{"spec": {"use_semi_auto_catalog_api_migration": true}}'
    3. Upgrade IBM Software Hub.
    Important: After you upgrade the services in your environment, ensure that you complete Completing the catalog-api service migration.
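If you enabled semi-automatic migration, you can optionally confirm that the flag was applied before you upgrade. This sketch assumes the field name matches the patch shown above:

```shell
# Optional check: print the semi-automatic migration flag on the ccs-cr
# custom resource. The jsonpath field name matches the patch applied above.
oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} \
  -o jsonpath='{.spec.use_semi_auto_catalog_api_migration}' || true
```

If semi-automatic migration is enabled, the command prints true.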

About this task

Use the setup-instance command to upgrade the required operators and custom resources for an instance of IBM Software Hub.

Note: The setup-instance command in this topic includes the --run_storage_tests option. It is strongly recommended that you run the command with the --run_storage_tests option to ensure that the storage in your environment meets the minimum requirements for performance.

If your storage does not meet the minimum requirements, you can remove the --run_storage_tests option to continue the upgrade. However, your environment is likely to encounter problems because of issues with your storage.

Use the apply-olm command to upgrade the operators for all of the services that are installed on the instance.

Procedure

  1. Log the cpd-cli in to the Red Hat® OpenShift® Container Platform cluster:
    ${CPDM_OC_LOGIN}
    Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.
  2. Review the license terms for the software that is installed on this instance of IBM Software Hub.
    The licenses are available online. Run the appropriate commands based on the license that you purchased:
    IBM Cloud Pak for Data Enterprise Edition
    cpd-cli manage get-license \
    --release=${VERSION} \
    --license-type=EE

    IBM Cloud Pak for Data Standard Edition
    cpd-cli manage get-license \
    --release=${VERSION} \
    --license-type=SE

    IBM Data Gate for watsonx
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=datagate \
    --license-type=DGWXD

    IBM Data Product Hub Cartridge
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=dataproduct \
    --license-type=DPH

    Data Replication

    Run the appropriate command based on the license that you purchased:

    IBM Data Replication Cartridge
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IDRC
    IBM InfoSphere® Data Replication Cartridge
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRC
    IBM Data Replication Modernization
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IDRM
    IBM InfoSphere Data Replication Modernization
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRM
    IBM Data Replication for Db2® z/OS® Cartridge
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IDRZOS
    IBM InfoSphere Data Replication for watsonx.data™ Cartridge
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRWXTO
    IBM InfoSphere Data Replication Cartridge Add-on for IBM watsonx.data
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=replication \
    --license-type=IIDRWXAO

    Db2

    Run the appropriate command based on the license that you purchased:

    IBM Db2 Standard Edition Cartridge for IBM Cloud Pak for Data
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=db2oltp \
    --license-type=DB2SE
    IBM Db2 Advanced Edition Cartridge for IBM Cloud Pak for Data
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=db2oltp \
    --license-type=DB2AE

    IBM Knowledge Catalog Premium
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=ikc_premium \
    --license-type=IKCP

    IBM Knowledge Catalog Standard
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=ikc_standard \
    --license-type=IKCS

    Synthetic Data Generator
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=syntheticdata \
    --license-type=WXAI

    IBM watsonx.ai
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=watsonx_ai \
    --license-type=WXAI

    IBM watsonx.data
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=watsonx_data \
    --license-type=WXD

    IBM watsonx.data Premium Edition
    cpd-cli manage get-license \
    --release=${VERSION} \
    --component=watsonx_data_premium \
    --license-type=WXD

  3. Upgrade the required operators and custom resources for the instance.
    Tip: Before you run this command against your cluster, you can preview the oc commands that this command will issue on your behalf by running the command with the --preview=true option.

    The oc commands are saved to the preview.sh file in the work directory.

    The command that you run depends on the following factors:
    • Whether the instance includes tethered projects.
    • The type of storage that you use.

    Instances without tethered projects
    Portworx storage
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --storage_vendor=portworx \
    --run_storage_tests=true
    All other storage
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    Instances with tethered projects
    Portworx storage
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --storage_vendor=portworx \
    --run_storage_tests=true
    All other storage
    cpd-cli manage setup-instance \
    --release=${VERSION} \
    --license_acceptance=true \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --tethered_ns=${PROJECT_CPD_INSTANCE_TETHERED_LIST} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --run_storage_tests=true

    Wait for the cpd-cli to return the following message before proceeding to the next step:

    [SUCCESS] ... The setup-instance command ran successfully.
    If the setup-instance command fails, see The setup-instance command fails when operator subscriptions are unbound.
  4. Upgrade the operators for the services that are installed on the instance.
    Tip: Before you run this command against your cluster, you can preview the oc commands that this command will issue on your behalf by running the command with the --preview=true option.

    The oc commands are saved to the preview.sh file in the work directory.

    The following command upgrades all of the operators in the operators project.

    cpd-cli manage apply-olm \
    --release=${VERSION} \
    --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --upgrade=true
    Wait for the cpd-cli to return the following message before proceeding to the next step:
    [SUCCESS]... The apply-olm command ran successfully.
    If the apply-olm command fails, see Troubleshooting the apply-olm command during installation or upgrade.
  5. Confirm that the operator pods are Running or Completed:
    oc get pods --namespace=${PROJECT_CPD_INST_OPERATORS}
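To make stragglers easier to spot, you can filter out healthy pods. This optional sketch assumes the standard oc pod listing columns; an empty result means that all pods are Running or Completed:

```shell
# Show only operator pods that are NOT yet Running or Completed.
# "|| true" keeps the exit code 0 when every pod is healthy (no matches).
oc get pods --namespace=${PROJECT_CPD_INST_OPERATORS} --no-headers \
  | grep -Ev 'Running|Completed' || true
```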
  6. Optional: If you want to run a batch upgrade of the services that are installed on the instance, run the apply-cr command.

    By default, the apply-cr command upgrades up to 4 components at the same time. You can adjust this setting by specifying the --parallel_num option.

    Tip: Before you run this command against your cluster, you can preview the oc commands that this command will issue on your behalf by running the command with the --preview=true option.

    The oc commands are saved to the preview.sh file in the work directory.

    cpd-cli manage apply-cr \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --components=${COMPONENTS} \
    --license_acceptance=true \
    --upgrade=true
  7. Confirm that the status of the operands is Completed:
    cpd-cli manage get-cr-status \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
  8. If you have any custom RSI patches that patch zen pods or IBM Cloud Pak foundational services pods, reapply the patches:
    1. Run the following command to get a list of the RSI patches in the operands project:
      cpd-cli manage get-rsi-patch-info \
      --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
      --all
    2. If there are patches that apply to zen or IBM Cloud Pak foundational services pods, run the following command to apply your custom patches:
      cpd-cli manage apply-rsi-patches \
      --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
  9. Check the health of the resources in the operators project:
    cpd-cli health operators \
    --operator_ns=${PROJECT_CPD_INST_OPERATORS} \
    --control_plane_ns=${PROJECT_CPD_INST_OPERANDS}
    Confirm that the health check report returns the expected results:
    Test What the test checks Expected result
    Pod Healthcheck For pods in the operators project, the status of each required pod is Running. [SUCCESS]
    Pod Usage Healthcheck For pods in the operators project, the resource use for each pod is within the CPU and memory limits. [SUCCESS]
    Cluster Service Versions Healthcheck For cluster service versions (CSVs) in the operators project, the phase of each CSV is Succeeded. [SUCCESS]
    Catalog Source Healthcheck For catalog sources in the operators project, the last observed state of each catalog source is Ready. [SUCCESS]
    Install Plan Healthcheck For operators in the operators project, the install plan approval for each operator is Automatic. [SUCCESS]
    Subscriptions Healthcheck For subscriptions in the operators project, there is an installed CSV for each subscription. [SUCCESS]
    Persistent Volume Claim Healthcheck For persistent volume claims (PVCs) in the operators project, each PVC is bound.
    Note: There should not be any PVC in the operators project, so the test should be skipped.
    [SKIP...]
    Deployment Healthcheck For deployments in the operators project, each deployment has the desired number of replicas. [SUCCESS]
    Namespace Scopes Healthcheck For the NamespaceScope operator in the operators project, the projects that are specified in the members list exist. [SUCCESS]
    Stateful Set Healthcheck For stateful sets in the operators project, the stateful sets have the desired number of replicas.
    Note: There should not be any stateful sets in the operators project, so the test should be skipped.
    [SKIP...]
    Common Services Healthcheck For the common-service commonservice custom resource in the operators project, the phase of the custom resource is Succeeded. [SUCCESS]
    Custom Resource Healthcheck For any other custom resources in the operators project, the phase of each custom resource is Succeeded.
    Note: There should not be any other custom resources in the operators project, so the test should be skipped.
    [SKIP...]
    Operand Requests Healthcheck For operand requests in the operators project, the phase of each operand request is Running. [SUCCESS]
  10. Check the health of the resources in the operands project:
    cpd-cli health operands \
    --control_plane_ns=${PROJECT_CPD_INST_OPERANDS}
    Confirm that the health check report returns the expected results:
    Test What the test checks Expected result
    Pod Healthcheck For pods in the operands project, the status of each pod is Running. [SUCCESS]
    Pod Usage Healthcheck For pods in the operands project, the resource use for each pod is within the CPU and memory limits. [SUCCESS]
    EDB Cluster Healthcheck For EDB Postgres clusters in the operands project, the status of each cluster is Cluster in healthy state. [SUCCESS]
    Persistent Volume Claim Healthcheck For persistent volume claims (PVCs) in the operands project, each PVC is bound. [SUCCESS]
    Deployment Healthcheck For deployments in the operands project, each deployment has the desired number of replicas. [SUCCESS]
    Stateful Set Healthcheck For stateful sets in the operands project, the stateful sets have the desired number of replicas. [SUCCESS]
    Common Services Healthcheck For the common-service commonservice custom resource in the operands project, the phase of the custom resource is Succeeded. [SUCCESS]
    Operand Requests Healthcheck For operand requests in the operands project, the phase of each operand request is Running. [SUCCESS]
    Monitor Events Healthcheck The platform monitors are not generating any Critical events. [SUCCESS]
    Custom Resource Healthcheck For custom resources in the operands project, the phase of each custom resource is Succeeded. [SUCCESS]
    Platform Healthcheck The pods for the required platform microservices are Running. [SUCCESS]

What to do next

If you use the cpdbr service to back up Cloud Pak for Data, see Updating the cpdbr service (Upgrading from Version 4.8 to Version 5.2).

If you don't use the cpdbr service to back up Cloud Pak for Data, your next steps depend on which components you upgraded:

IBM Software Hub control plane and services
If you upgraded the IBM Software Hub control plane and the services in the instance:
  1. Review Installing the configuration admission controller webhook (Upgrading from Version 4.8 to Version 5.2).
  2. If you upgraded services with a dependency on the common core services, complete the following tasks:
    1. Completing the catalog-api service migration
    2. Applying the Version 5.2.0 - Day 0 patch
  3. Complete Setting up services after install or upgrade
IBM Software Hub control plane only
If you upgraded only the IBM Software Hub control plane:
  1. Review Installing the configuration admission controller webhook (Upgrading from Version 4.8 to Version 5.2).
  2. Upgrade the services in the instance. For more information, see Services.
  3. If you upgraded services with a dependency on the common core services, complete the following tasks:
    1. Completing the catalog-api service migration
    2. Applying the Version 5.2.0 - Day 0 patch