Upgrading and finalizing Db2 Big SQL

This step in the upgrade process shows you how to upgrade and finalize your version of Db2® Big SQL.

Before you begin

The upgrade and finalize step for Db2 Big SQL has the following requirements:

  • The Cloudera Manager administrator user name and password.
  • A user with the following attributes:
    • Passwordless sudo access on all nodes of the cluster, including the Cloudera Manager server itself.
    • The ability to connect through passwordless ssh from the Cloudera Manager host to all Db2 Big SQL nodes. (A quick way to verify this requirement is shown after this list.)
  • If Db2 Big SQL was installed by a non-root user, you must configure non-root access to Db2 Big SQL. For more information, see Configuring non-root access to Db2 Big SQL.
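
To verify the passwordless ssh requirement before you start, you can run a quick check from the Cloudera Manager host. In the following sketch, bigsql-worker1 is a placeholder for one of your Db2 Big SQL node host names; with BatchMode enabled, ssh fails instead of prompting for a password, so the command prints the remote host name only if passwordless access is configured correctly:

  ssh -o BatchMode=yes bigsql-worker1 hostname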

Procedure

  1. On all Db2 Big SQL nodes (head and workers), edit the /usr/ibmpacks/current/bigsql/bigsql/libexec/bigsql-hdpenv.sh file and add the following line to the end of this file:
    export HADOOP_CONF_DIR=/etc/hadoop/conf
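    For example, one way to append this line on a node, assuming you have sudo access and the file exists at the path shown above, is:
    echo 'export HADOOP_CONF_DIR=/etc/hadoop/conf' | sudo tee -a /usr/ibmpacks/current/bigsql/bigsql/libexec/bigsql-hdpenv.sh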
  2. Run the Db2 Big SQL configuration utility and set the Cloudera Manager connection information:
    • CM_HOST
    • CM_PORT
    • CM_ADMIN_USER
    • If https is used to access Cloudera Manager, CM_SSL_CA_CERTIFICATE_PATH
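    The values are specific to your environment. As an illustration only (the host name, port, and certificate path here are placeholders, and the exact syntax that the configuration utility expects might differ), the connection information could look like this:
    CM_HOST=cm-host.example.com
    CM_PORT=7183
    CM_ADMIN_USER=admin
    CM_SSL_CA_CERTIFICATE_PATH=/opt/cloudera/security/ca/ca-cert.pem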
  3. Use the Upgrade option of the bigsql_upgrade.py utility to upgrade your version of Db2 Big SQL.
    The upgrade phase upgrades the Db2 Big SQL catalog, metadata, and configuration information to the new version of Db2 Big SQL, and registers the service in Cloudera Manager. To perform the upgrade phase of the Db2 Big SQL upgrade, run the bigsql_upgrade.py utility with the -m option and the value Upgrade. Include any additional options as documented in the bigsql_upgrade.py utility. For example, if you have configured Cloudera Manager for non-root access, you should use the -a option.
    python /usr/ibmpacks/IBM-Big_SQL/7.1.0.0/upgrade/bigsql_upgrade.py -m Upgrade

    When the Upgrade phase is complete, the Db2 Big SQL service is running and visible in the Cloudera Manager dashboard, and is ready to use again. You can run sanity tests from the command line or through a standard JDBC connection before proceeding with the Finalize phase of the upgrade.
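
    For example, a minimal command-line sanity check, assuming the db2 command line processor is available to the bigsql instance owner and that the database name is BIGSQL, might look like this:
    su - bigsql
    db2 connect to bigsql
    db2 "select tabname from syscat.tables fetch first 5 rows only"
    db2 terminate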

    You cannot re-run the upgrade phase immediately after it has completed, whether it succeeded or failed.

  4. If the upgrade in the previous step fails, follow these steps:
    1. Consult the utility output or the upgrade log located at /var/ibm/bigsql/logs/upgrade.log on the Db2 Big SQL head node to identify the problem.
    2. Use the Restore option of the bigsql_upgrade.py utility to restore to pre-upgrade conditions. The restore phase returns the cluster to a state that allows you to re-run the upgrade phase, by restoring the backup taken in the backup phase.
      python /usr/ibmpacks/IBM-Big_SQL/7.1.0.0/upgrade/bigsql_upgrade.py -m Restore
      When the Restore phase is complete, the Db2 Big SQL service is no longer visible in the Cloudera Manager dashboard. However, the service is still operational; it is just not running. If needed, you can start the service from the command line and use it (see the sketch after these steps); in this case, the version that runs is the initial Db2 Big SQL version. After the restore phase is complete, run the upgrade phase again as described above. If the restore phase fails, consult the utility output or the upgrade log located at /var/ibm/bigsql/logs/upgrade.log to identify and resolve the problem. After the problem is resolved, re-run the restore phase.
    3. Repair the upgrade issue that caused the failure.
    4. Re-run the upgrade command as shown in Step 3.
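    If you need to start the restored service from the command line, as mentioned in the restore substep above, a minimal sketch, assuming the bigsql instance owner and the bigsql administration command from the initial installation are still in place, is:
    su - bigsql
    bigsql start
    bigsql status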
  5. Use the Finalize option of the bigsql_upgrade.py utility to finalize your upgrade of Db2 Big SQL.
    CAUTION:
    After the upgrade is finalized, the backups of the catalog, metadata, and configuration information of Db2 Big SQL are cleaned up and are no longer available.
    The finalize phase takes the following actions on all nodes of the cluster:
    1. Cleans up the binaries of the previous Db2 Big SQL version.
    2. Cleans up the backups that were created during the backup phase.
    To perform the finalize phase of the Db2 Big SQL upgrade, run the bigsql_upgrade.py utility with the -m option and the value Finalize. Include any additional options as documented in the bigsql_upgrade.py utility. For example, if you have configured Cloudera Manager for non-root access, you should use the -a option.
    python /usr/ibmpacks/IBM-Big_SQL/7.1.0.0/upgrade/bigsql_upgrade.py -m Finalize

    When the Finalize phase is complete, the backups no longer exist. The Db2 Big SQL service is visible in the Cloudera Manager dashboard. The new version of Db2 Big SQL is operational and running.

    If the finalize phase fails, consult the utility output or the upgrade log located at /var/ibm/bigsql/logs/upgrade.log to identify and resolve the problem. After the problem is resolved, re-run the finalize phase.
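
    For example, one quick way to review the most recent entries in the upgrade log on the head node is:
    tail -n 100 /var/ibm/bigsql/logs/upgrade.log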

  6. If you disabled High Availability (HA) prior to backing up the Db2 Big SQL metadata, re-enable it.
  7. If the Db2 Big SQL plugin for Ranger was disabled prior to backing up the Db2 Big SQL metadata, re-enable it.
  8. If you are upgrading from Db2 Big SQL 5.0.4 and you backed up and removed Data Server Manager (DSM), and you want to use the Db2 Big SQL console, complete the following steps.
    1. Install the Db2 Big SQL console.
    2. Restore the Db2 Big SQL console database by using the DSM database that you backed up before you removed DSM.
      cp -arf /tmp/default_rep_db /usr/ibmpacks/bigsql-uc/7.1.0.0/ibm-datasrvrmgr/Config
    3. In Cloudera Manager, start the Db2 Big SQL console.
  9. If you are upgrading from Db2 Big SQL 6.0.0 and you backed up and removed the Db2 Big SQL console, complete the following steps.
    1. Install the Db2 Big SQL console.
    2. Restore the Db2 Big SQL console database by using the database that you backed up before you removed the Db2 Big SQL console.
      cp -arf /tmp/default_rep_db /usr/ibmpacks/bigsql-uc/7.1.0.0/ibm-datasrvrmgr/Config
    3. In Cloudera Manager, start the Db2 Big SQL console.
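    If the console does not start as expected after either of the restore procedures above, one quick check, using the destination path from the copy command, is to confirm that the restored database directory is in place:
    ls -ld /usr/ibmpacks/bigsql-uc/7.1.0.0/ibm-datasrvrmgr/Config/default_rep_db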
  10. Validate the upgrade with the Db2 Big SQL cluster administration utility.
    cd /usr/ibmpacks/IBM-Big_SQL/7.1.0.0/bigsql-cli/
    ./bigsql-admin -smoke
    Tip: To run data load tests, run the command /usr/ibmpacks/IBM-Big_SQL/7.1.0.0/bigsql-cli/BIGSQL/package/scripts/bigsql-smoke.sh -l. If you get a File does not exist error, it might be because the mr-framework.tar.gz file was not copied to the correct location during the CDP installation. To resolve this problem, see Missing mr-framework.tar.gz file.

What to do next

To enable High Availability (HA) after an upgrade, you must first delete the old Ambari repository files from the /etc/yum.repos.d directory on the primary and standby head nodes.
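
For example, assuming the old repository definitions follow the usual ambari*.repo naming convention (verify the actual file names in your environment before you delete anything), the cleanup on each head node might look like this:

  ls /etc/yum.repos.d/ambari*.repo
  sudo rm /etc/yum.repos.d/ambari*.repo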

If Ranger support for Db2 Big SQL is enabled, policies must be imported back into Ranger. See Migrating Db2 Big SQL Ranger policies.