Upgrading Db2 Big SQL from Version 4.0 to Version 4.6

A project administrator can upgrade Db2 Big SQL from Cloud Pak for Data Version 4.0 to Version 4.6.

Important: To complete this task, you must be running Db2 Big SQL Version 7.2.2 or later. (Version 7.2.2 was released with Cloud Pak for Data Version 4.0 Refresh 2.)
Supported upgrade paths
If you are running Db2 Big SQL Version 7.2.2 or later, you can upgrade to Versions 4.6.0 - 4.6.2.
Unsupported upgrade paths
You cannot upgrade directly from Version 4.0 to Version 4.6.3 or later. You must first upgrade to Version 4.6.2 before you upgrade to Version 4.6.3 or later.
What permissions do you need to complete this task?
The permissions that you need depend on which tasks you must complete:
  • To update the Db2 Big SQL operators, you must have the appropriate permissions to create operators and you must be an administrator of the project where the Cloud Pak for Data operators are installed. This project is identified by the ${PROJECT_CPD_OPS} environment variable.
  • To upgrade Db2 Big SQL, you must be an administrator of the project where Db2 Big SQL is installed. This project is identified by the ${PROJECT_CPD_INSTANCE} environment variable.
When do you need to complete this task?
If you didn't upgrade Db2 Big SQL when you upgraded the platform, you can complete this task to upgrade your existing Db2 Big SQL installation.

If you want to upgrade all of the Cloud Pak for Data components at the same time, follow the process in Upgrading the platform and services instead.

Important: All of the Cloud Pak for Data components in a deployment must be installed at the same release.

Information you need to complete this task

Review the following information before you upgrade Db2 Big SQL:

Environment variables
The commands in this task use environment variables so that you can run the commands exactly as written.
  • If you don't have the script that defines the environment variables, see Setting up installation environment variables.
  • To use the environment variables from the script, you must source the environment variables before you run the commands in this task, for example:
    source ./cpd_vars.sh
Installation location
Db2 Big SQL is installed in the same project (namespace) as the Cloud Pak for Data control plane. This project is identified by the ${PROJECT_CPD_INSTANCE} environment variable.
Storage requirements
You don't need to specify storage when you upgrade Db2 Big SQL.
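
For reference, the following is a minimal illustrative sketch of the environment variables that the commands in this task rely on. The values are placeholders only; your actual cpd_vars.sh script (see Setting up installation environment variables) defines additional variables and your values will differ.

# Illustrative excerpt of cpd_vars.sh (placeholder values only)
export OCP_URL=https://<cluster_api_endpoint>:6443    # OpenShift API server URL
export OCP_USERNAME=<openshift_username>              # User with permissions to complete this task
export OCP_PASSWORD=<openshift_password>
export PROJECT_CPD_OPS=<operators_project>            # Project where the Cloud Pak for Data operators are installed
export PROJECT_CPD_INSTANCE=<cpd_instance_project>    # Project where the control plane and Db2 Big SQL are installed
export VERSION=4.6.2                                  # Cloud Pak for Data release that you are upgrading to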

Before you begin

This task assumes that the following prerequisites are met:

  • The cluster meets the minimum requirements for Db2 Big SQL. If this task is not complete, see System requirements.
  • The workstation from which you will run the upgrade is set up as a client workstation and includes the following command-line interfaces:
    • Cloud Pak for Data CLI: cpd-cli
    • OpenShift® CLI: oc
    If this task is not complete, see Setting up a client workstation.
  • The Cloud Pak for Data control plane is upgraded. If this task is not complete, see Upgrading the platform and services.
  • For environments that use a private container registry, such as air-gapped environments, the Db2 Big SQL software images are mirrored to the private container registry. If this task is not complete, see Mirroring images to a private container registry.
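
Before you start the upgrade, you can quickly confirm that both command-line interfaces respond on the client workstation (a simple sanity check, not a substitute for the full client workstation setup):

# Confirm that the CLIs are installed and on the PATH
cpd-cli version
oc version --client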

Procedure

Complete the following tasks to upgrade Db2 Big SQL:

  1. Logging in to the cluster
  2. Updating the operator
  3. Upgrading the service
  4. Validating the upgrade
  5. Upgrading existing service instances
  6. Verifying the instance upgrade
  7. What to do next

Logging in to the cluster

To run cpd-cli manage commands, you must log in to the cluster.

To log in to the cluster:

  1. Run the cpd-cli manage login-to-ocp command to log in to the cluster as a user with sufficient permissions to complete this task. For example:
    cpd-cli manage login-to-ocp \
    --username=${OCP_USERNAME} \
    --password=${OCP_PASSWORD} \
    --server=${OCP_URL}
    Tip: The login-to-ocp command takes the same input as the oc login command. Run oc login --help for details.
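
Because login-to-ocp accepts the same input as oc login, you can also authenticate with an API token instead of a username and password. The ${OCP_TOKEN} variable below is illustrative and is not part of the standard environment variables script unless you add it:

# Log in with an OpenShift API token (illustrative; define OCP_TOKEN yourself)
cpd-cli manage login-to-ocp \
--token=${OCP_TOKEN} \
--server=${OCP_URL}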

Updating the operator

The Db2 Big SQL operator simplifies the process of managing the Db2 Big SQL service on Red Hat® OpenShift Container Platform.

To upgrade Db2 Big SQL, ensure that all of the Operator Lifecycle Manager (OLM) objects in the ${PROJECT_CPD_OPS} project, such as the catalog sources and subscriptions, are upgraded to the appropriate release. All of the OLM objects must be at the same release.

Who needs to complete this task?
You must be a cluster administrator (or a user with the appropriate permissions to install operators) to create the OLM objects.
When do you need to complete this task?
Complete this task only if the OLM artifacts have not been updated for the current release using the cpd-cli manage apply-olm command with the --upgrade=true option.

You do not need to run this command separately for each service that you plan to upgrade. If you complete this task when the OLM artifacts already exist on the cluster, the cpd-cli recreates the OLM objects for all of the existing components in the ${PROJECT_CPD_OPS} project.

To update the operator:

  1. Update the OLM objects:
    cpd-cli manage apply-olm \
    --release=${VERSION} \
    --cpd_operator_ns=${PROJECT_CPD_OPS} \
    --upgrade=true
    • If the command succeeds, it returns [SUCCESS]... The apply-olm command ran successfully.
    • If the command fails, it returns [ERROR] and includes information about the cause of the failure.
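
If you want to confirm that the OLM objects were refreshed before you continue, one optional check (using the standard OpenShift CLI) is to list the subscriptions and cluster service versions in the operators project and verify that the Db2 Big SQL entries report the expected release:

# List operator subscriptions and cluster service versions in the operators project
oc get sub,csv -n ${PROJECT_CPD_OPS}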

What to do next: Upgrade the Db2 Big SQL service.

Upgrading the service

After the Db2 Big SQL operator is updated, you can upgrade Db2 Big SQL.

Who needs to complete this task?
You must be an administrator of the project where Db2 Big SQL is installed.
When do you need to complete this task?
Complete this task for each instance of Db2 Big SQL that is associated with an instance of Cloud Pak for Data Version 4.6.

To upgrade the service:

  1. Update the custom resource for Db2 Big SQL.
    cpd-cli manage apply-cr \
    --components=bigsql \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --license_acceptance=true \
    --upgrade=true

Validating the upgrade

Db2 Big SQL is upgraded when the apply-cr command returns [SUCCESS]... The apply-cr command ran successfully.

However, you can optionally run the cpd-cli manage get-cr-status command if you want to confirm that the custom resource status is Completed:

cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--components=bigsql

Upgrading existing service instances

After you upgrade Db2 Big SQL, you must upgrade any service instances that are associated with Db2 Big SQL.

To upgrade the service instances:

  1. Identify the Db2 Big SQL instances and check that their state is Ready:
    oc get bigsql -l app.kubernetes.io/name=db2-bigsql
    NAME                  DB2UCLUSTER           STATE   AGE
    bigsql-<instance_id>  bigsql-<instance_id>  Ready   179m
  2. Upgrade each instance by updating its version field.
    1. Upgrade to Db2 Big SQL 7.4.4 by running the following command:
      oc patch bigsql bigsql-<instance_id> --patch '{"spec": {"version": "7.4.4"}}' --type=merge
    2. Verify the instance upgrade.
  3. Update the version in the configmap for each upgraded instance.
    INSTANCE_ID=<instance_id>
    VERSION=7.4.4 ;
    CONFIG_MAP=$(oc get cm -l component=db2bigsql -o custom-columns="ConfigMap:{.metadata.name},Instance Id:{.data.instance_id}" | grep ${INSTANCE_ID} | awk '{print $1}')
    
    oc get cm ${CONFIG_MAP} -o yaml | \
       sed "s#\\\"addon_version\\\\\":[ \\\".0-9]*#\\\"addon_version\\\\\": \\\\\"${VERSION}\\\\\"#" | \
       sed "s#Db2 Big SQL v[.0-9]*#Db2 Big SQL v${VERSION}#" | \
       sed "s#icpdata_addon_version:[ .0-9]*#icpdata_addon_version: ${VERSION}#" | \
       oc apply -f- ;
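
To spot-check that the configmap now reflects the new version, you can re-read it and filter for the same fields that the sed commands rewrite:

# Confirm that the version fields in the configmap were updated
oc get cm ${CONFIG_MAP} -o yaml | grep -E "addon_version|Db2 Big SQL v"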

Verifying the instance upgrade

While an instance is being upgraded, the Db2 Big SQL custom resource status changes from Ready to Upgrading to Not Ready, and then back to Ready. To check the status, run the following command:

oc get bigsql -l app.kubernetes.io/name=db2-bigsql
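
If you prefer to watch the state transitions as they happen instead of rerunning the command, you can add the standard watch flag (press Ctrl+C to stop watching):

# Watch the Db2 Big SQL instances until the state returns to Ready
oc get bigsql -l app.kubernetes.io/name=db2-bigsql -w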

To confirm that the upgrade was successful and the cluster is operational, run a smoke test as the db2inst1 user:

head_pod=$(oc get pod -l app=bigsql-<instance_id>,name=dashmpp-head-0 --no-headers=true -o=custom-columns=NAME:.metadata.name)

# If connected to a Hadoop cluster 
oc exec -it $head_pod -- /usr/bin/su - db2inst1 -c '/usr/ibmpacks/current/bigsql/bigsql/install/bigsql-smoke.sh'

# If connected exclusively to an Object Store service, you must provide the name of a bucket that exists on the storage service to execute the smoke test
oc exec -it $head_pod -- /usr/bin/su - db2inst1 -c '/usr/ibmpacks/current/bigsql/bigsql/install/bigsql-smoke.sh -o<bucket_name>'
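
The smoke test's output is not reproduced here, but a simple way to record whether it passed is to check the exit status of the oc exec command, assuming the script exits with a nonzero code on failure:

# Report whether the smoke test passed, based on its exit status
if oc exec -it $head_pod -- /usr/bin/su - db2inst1 -c '/usr/ibmpacks/current/bigsql/bigsql/install/bigsql-smoke.sh'; then
   echo "Smoke test passed"
else
   echo "Smoke test failed"
fi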

What to do next

If you made any custom configuration changes in the version 4.0.x release, review and confirm that these changes are still present after the upgrade is completed. If they are not, reapply them.