Upgrading and finalizing Big SQL

As part of an IOP to HDP migration, you must upgrade and finalize your version of Big SQL.

Before you begin

If you are performing this step as part of a migration, you must follow all the preceding steps described in Migrating from IOP to HDP.

The HDFS, Hive, and HBase services (and the services they depend on) must be running for you to perform the Big SQL upgrade. HDFS should not be running in safe mode.

If you are upgrading Big SQL in a non-root installation environment, you must follow the steps in Configuring Ambari agents for non-root access to IBM Big SQL.

The Big SQL service must not be in maintenance mode.

About this task

This topic describes how to upgrade and finalize your version of Big SQL.

Procedure

  1. Ensure that the umask for the root and bigsql users is 0022. Run the following commands on each node in the cluster as the root user, or as a user with sudo privileges:
    sudo su - root -c umask
    sudo su - bigsql -c umask
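    If either command returns a value other than 0022, set the umask before you continue. The following is a minimal sketch that appends the setting to each user's .bashrc; it assumes a RHEL-style setup where the login profile sources .bashrc, so adjust the file names for your environment:
    echo "umask 0022" >> /root/.bashrc
    echo "umask 0022" >> ~bigsql/.bashrc    # run as root; the bigsql home directory may differ on your cluster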
  2. Disable TCP/IP connections to the Big SQL service.
    Perform the following steps on the Big SQL head node as the Big SQL user:
    db2set DB2COMM=
    /usr/ibmpacks/current/bigsql/bigsql/bin/bigsql stop
    /usr/ibmpacks/current/bigsql/bigsql/bin/bigsql start
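    Optionally, confirm that TCP/IP is now disabled by checking the registry variable (this check is not part of the documented procedure):
    db2set -all | grep DB2COMM    # an empty or absent DB2COMM entry means TCP/IP connections are disabled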
  3. Stop db2-level auditing of Big SQL.
    Log in to the head node as the Big SQL user and issue the following command to stop auditing:
    db2audit stop
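    Optionally, confirm the audit state before continuing (this check is not part of the documented procedure):
    db2audit describe    # the output typically includes an "Audit active" field, which should report FALSE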
  4. (Migration only) Enable the Big SQL service extension by running the following command on the Ambari server host:
    /var/lib/ambari-server/resources/extensions/IBM-Big_SQL/5.0.1.0/scripts/EnableBigSQLExtension.py
    For details on this script, see Enabling the Big SQL extension.
  5. Use the Upgrade option of the bigsql_upgrade.py script to upgrade your version of Big SQL.
    The upgrade phase upgrades the Big SQL catalog, metadata, and configuration information to the new version of Big SQL. To perform the Upgrade phase of the Big SQL upgrade, run the bigsql_upgrade.py script with the -m option and the value Upgrade. Include any additional options as documented in bigsql_upgrade.py - Big SQL upgrade utility. For example, if you have configured Ambari for non-root access, you should use the -a option.
    python /usr/ibmpacks/scripts/5.0.1.0/upgrade/bigsql_upgrade.py -m Upgrade

    When the Upgrade phase is complete, the Big SQL service is not visible in the Ambari dashboard. However, the new version of BigInsights Big SQL is operational and running. It is possible to connect applications to the BigInsights Big SQL server to run sanity tests before proceeding with the Finalize phase of the upgrade.
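    For example, a minimal sanity check run locally as the bigsql user on the head node (because TCP/IP is still disabled at this point, the check uses a local connection; the database name BIGSQL is the usual default but may differ in your environment, and this check is not part of the documented procedure):
    db2 connect to BIGSQL
    db2 "SELECT COUNT(*) FROM SYSCAT.TABLES"
    db2 terminate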

    It is not possible to re-run the Upgrade phase immediately after it has completed (successfully or not).

  6. If the previous upgrade step fails, follow these steps:
    1. Consult the script output or the upgrade log located at /var/ibm/bigsql/logs/upgrade.log on the Ambari server host to identify the problem.
    2. Use the Restore option of the bigsql_upgrade.py script to restore to pre-upgrade conditions. The restore phase returns the cluster to a state that allows you to re-run the upgrade phase, by restoring the backup taken in the backup phase.
      python /usr/ibmpacks/scripts/5.0.1.0/upgrade/bigsql_upgrade.py -m Restore
      When the Restore is complete, the Big SQL service is no longer visible in the Ambari dashboard. It is operational, but not running. If needed, you can start the service from the command line and use it; in this case, the version is the initial Big SQL version. After the restore phase is complete, run the upgrade phase again as described above. If the restore phase fails, consult the script output or the upgrade log located at /var/ibm/bigsql/logs/upgrade.log to identify and resolve the problem. After it is resolved, re-run the restore phase.
    3. Repair the issue that caused the failure.
    4. Re-run the upgrade command as shown in step 5.
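    To quickly surface errors in the upgrade log referenced above, a simple filter is often enough (this is an optional aid, not part of the documented procedure):
    tail -n 200 /var/ibm/bigsql/logs/upgrade.log
    grep -iE 'error|fail' /var/ibm/bigsql/logs/upgrade.log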
  7. (Upgrade only) If upgrading from Big SQL 5.0.0.0, disable the previous stack extension by running the disable extension script from the 5.0.0.0 location:
    /var/lib/ambari-server/resources/extensions/IBM-Big_SQL/DisableBigSQLExtension.py
  8. (Upgrade only) Enable the Big SQL service extension by running the command:
    /var/lib/ambari-server/resources/extensions/IBM-Big_SQL/5.0.1.0/scripts/EnableBigSQLExtension.py
    For details on this script, see Enabling the Big SQL extension.
  9. If you removed the IBM value-added services, install the JSQSH package with the command:
    sudo yum install -y IBM-jsqsh-4.10.3.2
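    Optionally, verify that the package is installed (assuming an RPM-based system, as implied by the yum command):
    rpm -qa | grep -i jsqsh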
  10. Re-enable TCP/IP connections by logging on to the head node as the Big SQL user and issuing the following commands:
    db2set DB2COMM=TCPIP
    /usr/ibmpacks/current/bigsql/bigsql/bin/bigsql stop
    /usr/ibmpacks/current/bigsql/bigsql/bin/bigsql start
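    Optionally, confirm that TCP/IP connections are accepted again; the port 32051 shown here is a typical Big SQL default and may differ in your environment:
    db2set -all | grep DB2COMM    # should now show DB2COMM=TCPIP
    ss -ltn | grep 32051          # replace 32051 with your configured Big SQL port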
  11. Use the Finalize option of the bigsql_upgrade.py script to finalize your upgrade of Big SQL.
    CAUTION:
    After the upgrade is finalized, the backups of the catalog, metadata and configuration information of Big SQL are cleaned up and are no longer available.
    The finalize phase takes the following actions on all nodes of the cluster:
    1. Registers the Big SQL service in the Ambari dashboard.
    2. Cleans up the binaries of the previous Big SQL version.
    3. Cleans up the backups that were created during the backup phase.
    To perform the finalize phase of the Big SQL upgrade, run the bigsql_upgrade.py script with the -m option and the value Finalize. Include any additional options as documented in the bigsql_upgrade.py - Big SQL upgrade utility. For example, if you have configured Ambari for non-root access, you should use the -a option.
    python /usr/ibmpacks/scripts/5.0.1.0/upgrade/bigsql_upgrade.py -m Finalize

    When the Finalize phase is complete, the backups no longer exist. The Big SQL service is visible in the Ambari dashboard. The new version of Big SQL is operational and running.

    If the finalize phase fails, consult the script output or the upgrade log located at /var/ibm/bigsql/logs/upgrade.log to identify and resolve the problem. After it is resolved, re-run the finalize phase.
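    Optionally, confirm that the Big SQL service was registered by querying the Ambari REST API; the service name BIGSQL, the admin user, and the port 8080 are assumptions, so substitute the values for your cluster:
    curl -u admin:<password> http://<ambari-server>:8080/api/v1/clusters/<cluster-name>/services/BIGSQL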

  12. Start db2-level auditing of Big SQL.
    You can restart auditing as the Big SQL user on the head node by issuing the following command:
    db2audit start
  13. Navigate to Hive > Configs > Advanced > Advanced hive-env template and manually uncomment the following section (if required):
    # Allow Hive to read Big SQL HBase tables
    if [ -d "/usr/ibmpacks/current/bigsql/bigsql/lib/java" ]; then
      export HIVE_AUX_JARS_PATH=/usr/ibmpacks/current/bigsql/bigsql/lib/java/biga-io.jar,\
    /usr/ibmpacks/current/bigsql/bigsql/lib/java/biga-hbase.jar,\
    /usr/ibmpacks/current/bigsql/bigsql/lib/java/commoncatalog.jar,\
    /usr/ibmpacks/current/bigsql/hive/lib/hive-hbase-handler.jar,\
    ${HIVE_AUX_JARS_PATH}
    fi
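    After you save the configuration and restart the Hive service so that Ambari deploys the updated template, you can optionally confirm the change on a Hive node; the path /etc/hive/conf/hive-env.sh is the standard HDP location and may differ in your environment:
    grep -A 7 "Allow Hive to read Big SQL HBase tables" /etc/hive/conf/hive-env.sh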
  14. (Optional) Remove old configs that cause database check warnings.
    1. Back up the Ambari server database.
    2. Log in to the Ambari server database shell prompt. Ambari can use various types of database management systems (DBMS), each of which has its own command to open a shell prompt. For the default PostgreSQL database, as the root user, run psql -U ambari -d ambari in a Linux bash shell. Enter bigdata when prompted for a password.
    3. From the database shell prompt, run the following command, replacing ${type_name} and ${version_tag} with the types and versions displayed in the warning (a query that lists all unmapped configs at once is sketched after these steps):
       DELETE FROM clusterconfig WHERE type_name='${type_name}' AND version_tag='${version_tag}';
      For example, if you see the following warning:
       2017-08-16 07:24:53,209  WARN - You have config(s): bigsql-users-env-version1501346872884, bigsql-head-env-version1502892246 that is(are) not mapped (in serviceconfigmapping table) to any service!
      This warning flags two configs as problematic:
      bigsql-users-env-version1501346872884
      bigsql-head-env-version1502892246
      In the Ambari database, you need to run two deletes, one for each configuration flagged as problematic. The ${type_name} parameter corresponds to everything before the last "-" in the configuration name. For example, in bigsql-users-env-version1501346872884, the ${type_name} is bigsql-users-env and the ${version_tag} is version1501346872884. Therefore your delete statement is:
      DELETE FROM clusterconfig WHERE type_name='bigsql-users-env' AND version_tag='version1501346872884';
      For the config bigsql-head-env-version1502892246, the delete statement is:
      DELETE FROM clusterconfig WHERE type_name='bigsql-head-env' AND version_tag='version1502892246';
      
    4. Exit the database shell prompt. If you are using the default PostgreSQL, type \q to exit.
    5. Restart the Ambari server.
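    The following query lists every config that is not mapped to a service, which can help you build all of the DELETE statements in one pass. It is a sketch only: the serviceconfigmapping table name comes from the warning text, but the config_id column is an assumption based on typical Ambari schemas, so verify it against your Ambari version before relying on the output:
      SELECT type_name, version_tag
      FROM clusterconfig
      WHERE config_id NOT IN (SELECT config_id FROM serviceconfigmapping);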

What to do next

The next step in the migration process is to upgrade DSM. See Upgrading IBM Data Server Manager (DSM) for details.