Upgrading Db2 Big SQL from Version 4.7 to Version 5.0
An instance administrator can upgrade Db2 Big SQL from Cloud Pak for Data Version 4.7 to Version 5.0.
Who needs to complete this task?
To upgrade Db2 Big SQL, you must be an instance administrator. An instance administrator has permission to manage software in the following projects:
- The operators project for the instance. The operators for this instance of Db2 Big SQL are installed in the operators project. In the upgrade commands, the ${PROJECT_CPD_INST_OPERATORS} environment variable refers to the operators project.
- The operands project for the instance. The custom resources for the control plane and Db2 Big SQL are installed in the operands project. In the upgrade commands, the ${PROJECT_CPD_INST_OPERANDS} environment variable refers to the operands project.
When do you need to complete this task?
Review the following options to determine whether you need to complete this task:
- If you want to upgrade the control plane and one or more services at the same time, follow the process in Upgrading an instance of Cloud Pak for Data instead.
- If you didn't upgrade Db2 Big SQL when you upgraded the control plane, complete this task to upgrade Db2 Big SQL.

Repeat as needed: If you are responsible for multiple instances of Cloud Pak for Data, you can repeat this task to upgrade more instances of Db2 Big SQL on the cluster.
Information you need to complete this task
Review the following information before you upgrade Db2 Big SQL:
- Version requirements: All the components that are associated with an instance of Cloud Pak for Data must be installed at the same release. For example, if the Cloud Pak for Data control plane is at Version 5.0.3, you must upgrade Db2 Big SQL to Version 5.0.3.
- Environment variables: The commands in this task use environment variables so that you can run the commands exactly as written.
  - If you don't have the script that defines the environment variables, see Setting up installation environment variables.
  - To use the environment variables from the script, you must source the environment variables before you run the commands in this task. For example, run:

    ```
    source ./cpd_vars.sh
    ```
- Storage requirements: You don't need to specify storage when you upgrade Db2 Big SQL.
Before you begin
This task assumes that the following prerequisites are met:
| Prerequisite | Where to find more information |
|---|---|
| The cluster meets the minimum requirements for Db2 Big SQL. | If this task is not complete, see System requirements. |
| The workstation from which you will run the upgrade is set up as a client workstation and has the required command-line interfaces. | If this task is not complete, see Updating client workstations. |
| The Cloud Pak for Data control plane is upgraded. | If this task is not complete, see Upgrading an instance of Cloud Pak for Data. |
| For environments that use a private container registry, such as air-gapped environments, the Db2 Big SQL software images are mirrored to the private container registry. | If this task is not complete, see Mirroring images to a private container registry. |
| For environments that use a private container registry, such as air-gapped environments, the cpd-cli is configured to pull the olm-utils-v3 image from the private container registry. | If this task is not complete, see Pulling the olm-utils-v3 image from the private container registry. |
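Before you start, you can confirm that the command-line interfaces used in this task are available on your PATH. This is a hedged sketch, not part of the documented procedure; `check_clis` is a hypothetical helper.

```shell
# Hypothetical pre-flight helper: report which of the given CLIs are on
# PATH; returns nonzero if any are missing.
check_clis() {
  missing=0
  for cli in "$@"; do
    if command -v "$cli" >/dev/null 2>&1; then
      echo "$cli found"
    else
      echo "$cli missing"
      missing=1
    fi
  done
  return $missing
}

# Usage before the upgrade (cpd-cli and oc are the CLIs used in this task):
# check_clis cpd-cli oc
```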
Procedure
Complete the following tasks to upgrade Db2 Big SQL:
Upgrading the service
The cpd-cli manage apply-olm command updates all of the OLM objects in the operators project at the same time.

To upgrade Db2 Big SQL:
- Log the cpd-cli in to the Red Hat OpenShift Container Platform cluster:

  ```
  ${CPDM_OC_LOGIN}
  ```

  Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.
- Update the custom resource for Db2 Big SQL:

  ```
  cpd-cli manage apply-cr \
  --components=bigsql \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
  --license_acceptance=true \
  --upgrade=true
  ```
Validating the upgrade
Db2 Big SQL is upgraded when the apply-cr command returns:

```
[SUCCESS]... The apply-cr command ran successfully
```

If you want to confirm that the custom resource status is Completed, you can run the cpd-cli manage get-cr-status command:

```
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=bigsql
```
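If you prefer to wait for the Completed status in a script rather than re-running get-cr-status by hand, a polling loop might look like the following sketch. `get_bigsql_status` is a stand-in for the cpd-cli manage get-cr-status command above, reduced to the status value; the timeout and interval are arbitrary assumptions.

```shell
# Stub standing in for:
#   cpd-cli manage get-cr-status --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} --components=bigsql
# parsed down to the status value. It returns a fixed value here so the
# sketch is self-contained; replace the body with the real call.
get_bigsql_status() {
  echo "Completed"
}

# Poll until the custom resource reports Completed, or give up after
# max_tries attempts spaced 30 seconds apart.
wait_for_completed() {
  max_tries=${1:-60}
  tries=0
  while [ "$tries" -lt "$max_tries" ]; do
    if [ "$(get_bigsql_status)" = "Completed" ]; then
      echo "Db2 Big SQL custom resource is Completed"
      return 0
    fi
    tries=$((tries + 1))
    sleep 30
  done
  echo "Timed out waiting for Completed status" >&2
  return 1
}
```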
Upgrading existing service instances
After you upgrade Db2 Big SQL, you must upgrade any service instances that are associated with Db2 Big SQL.
To upgrade the service instances:
Before you begin: Create a profile on the workstation from which you will upgrade the service instances. The profile must be associated with a Cloud Pak for Data user who has either of the following permissions:
- Create service instances (can_provision)
- Manage service instances (manage_service_instances)

For more information, see Creating a profile to use the cpd-cli management commands.
- If you are upgrading Db2 Big SQL from Cloud Pak for Data 4.7.0, 4.7.1, or 4.7.2, disable FGAC in the Cloud Pak for Data instance project:

  ```
  for headnode in $(oc get pods -l icpdsupport/podSelector==bigsql-compute,name==dashmpp-head-0 --no-headers -o custom-columns=POD:.metadata.name) ; do
    oc exec -ti -c db2u ${headnode} -- su - db2inst1 -c "db2 -td@ -f /opt/dv/current/build_time/fgac/dv-fgac-disable-ddl.sql ; echo FGAC disabled on ${headnode}"
  done
  ```

  Expected output of the command:

  ```
  DB20000I  The SQL command completed successfully.
  ```
- Log the cpd-cli in to the Red Hat OpenShift Container Platform cluster:

  ```
  ${CPDM_OC_LOGIN}
  ```

  Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.
- Get the list of Db2 Big SQL service instances:

  ```
  cpd-cli service-instance list \
  --service-type=bigsql \
  --profile=${CPD_PROFILE_NAME}
  ```
- Set the INSTANCE_NAME environment variable to the name of the service instance that you want to upgrade:

  ```
  export INSTANCE_NAME=<instance-name>
  ```
- Upgrade the service instance:

  ```
  cpd-cli service-instance upgrade \
  --service-type=bigsql \
  --instance-name=${INSTANCE_NAME} \
  --profile=${CPD_PROFILE_NAME}
  ```
- Repeat the preceding steps to upgrade each service instance associated with this instance of Cloud Pak for Data.
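If you have many service instances, the repeat step above can be scripted. In this sketch, `list_bigsql_instances` and `upgrade_instance` are hypothetical stand-ins for the two cpd-cli service-instance commands shown above; replace the stub bodies with the real calls.

```shell
# Stand-in for: cpd-cli service-instance list --service-type=bigsql --profile=${CPD_PROFILE_NAME}
# reduced to one instance name per line. Fixed names keep the sketch runnable.
list_bigsql_instances() {
  printf '%s\n' "bigsql-inst-1" "bigsql-inst-2"
}

# Stand-in for: cpd-cli service-instance upgrade --service-type=bigsql \
#   --instance-name="$1" --profile=${CPD_PROFILE_NAME}
upgrade_instance() {
  echo "upgraded $1"
}

# Upgrade every Db2 Big SQL service instance in turn.
upgrade_all_instances() {
  for name in $(list_bigsql_instances); do
    upgrade_instance "$name"
  done
}
```

Calling `upgrade_all_instances` then prints one line per instance as each upgrade runs.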
Verifying the instance upgrade
During the upgrade, the status of the service instance changes from Ready to Upgrading to Not Ready to Ready. To check the status, run the following command:

```
oc get bigsql -l app.kubernetes.io/name=db2-bigsql
```

- If you are connected to a Hadoop cluster, run the following commands:

  ```
  head_pod=$(oc get pod -l app=bigsql-<instance_id>,name=dashmpp-head-0 --no-headers=true -o=custom-columns=NAME:.metadata.name)

  # If connected to a Hadoop cluster
  oc exec -it $head_pod -- /usr/bin/su - db2inst1 -c '/usr/ibmpacks/current/bigsql/bigsql/install/bigsql-smoke.sh'
  ```
- If you are connected to an object store service, run the following commands:

  ```
  head_pod=$(oc get pod -l app=bigsql-<instance_id>,name=dashmpp-head-0 --no-headers=true -o=custom-columns=NAME:.metadata.name)

  # If connected exclusively to an Object Store service, you must provide the name of a bucket that exists on the storage service to execute the smoke test
  oc exec -it $head_pod -- /usr/bin/su - db2inst1 -c '/usr/ibmpacks/current/bigsql/bigsql/install/bigsql-smoke.sh -o<bucket_name>'
  ```
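To script the status check, you can pull the STATUS column out of the `oc get bigsql` output. The sample text below is a hypothetical stand-in for the live command's output; the exact column layout of the custom-resource printout is an assumption.

```shell
# Hypothetical sample of `oc get bigsql -l app.kubernetes.io/name=db2-bigsql`
# output; in practice, capture the live command's output instead.
sample_output='NAME              STATUS   AGE
bigsql-1234567890 Ready    12d'

# Extract the STATUS column of the first non-header row.
status=$(printf '%s\n' "$sample_output" | awk 'NR==2 {print $2}')
echo "$status"
```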
What to do next
After you upgrade your cluster to Red Hat OpenShift Container Platform Version 4.14 or later, Db2 Big SQL is ready to use. To get started with Db2 Big SQL, see Running SQL and exploring your service instance (Db2 Big SQL).