Backing up Db2 Big SQL metadata
It is critical to back up your Db2® Big SQL metadata before you do an upgrade.
Before you begin
The following list describes prerequisites and related information that are required before you back up Db2 Big SQL metadata.
- The Ambari administrator username and password.
- A user with the following attributes:
- Passwordless sudo access on all nodes of the cluster, including the Ambari server itself.
- The ability to connect passwordlessly through ssh from the Ambari server to all Db2 Big SQL nodes.
This user can be root. If the user is not root, the username must be passed to the upgrade script with the -a option. The upgrade script must be run with root user privileges, which you can do by using the sudo command. A quick way to verify passwordless sudo and ssh access is shown in the first sketch after this list.
- The backup metadata process requires about 5 GB of free disk space in the /tmp directory. This space is used temporarily by the installation process and released when the installation is complete. In addition, the backup metadata process requires about 2 GB of free disk space on /usr for the Db2 installation. If these disk space requirements are not met, the upgrade prerequisite checker warns you and stops the installation process. You can check the available space in advance with the df sketch after this list.
On each Db2 Big SQL host, the backup metadata process requires sufficient space to back up the contents of the sqllib directory, the Db2 Big SQL database path, and the Db2 Big SQL database directories. This backup is written in compressed format to the /var/ibm/bigsql/upgrade-backup directory.
In particular, on the head node, more space is needed for the backup if Db2 Big SQL contains native Db2 tables, because these tables are stored in the Db2 Big SQL database directories that are local to the head node. To check the Db2 Big SQL database size, run the following command while connected as the bigsql user on the existing head node:
call get_dbsize_info(?,?,?,-1)
This command returns the size of the database in bytes. Because the backup is compressed inline as it is written to disk, the actual space that is needed by the backup is less than the size of the database itself, depending on the natural redundancy of the data in the database. The size that is returned by the get_dbsize_info function can be considered an upper limit. A complete invocation is shown in the sketch after this list.
- Ambari configuration groups are supported by the upgrade process for HDP services. Configuration groups for the Db2 Big SQL service are not supported. Remove any configuration groups for the Db2 Big SQL service before you initiate the backup metadata process.
- The HDFS, Hive, and HBase (if installed) services, and the services that they depend on, must be
up and running in the Ambari server. HDFS cannot be running in safe mode.
If these services are not running, the upgrade prerequisite checker warns you and stops the upgrade process.
- YARN integration must be disabled for Db2 Big SQL.
- If the following section exists in the $BIGSQL_HOME/conf/bigsql-conf.xml file, remove it.
<property>
  <name>scheduler.force.cppwriter</name>
  <value>true</value>
</property>
- Meet Db2 package requirements, as they also apply to Db2 Big SQL. For more information, see the Package requirements section in the Db2 topic Additional installation considerations (Linux).
- Make sure that the cluster and service are healthy. In Ambari, run the service actions Check Cluster Health and Run Service Check for the Db2 Big SQL service.
Recommendation: Restart the Db2 Big SQL service before you run service actions.
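The passwordless sudo and ssh requirements can be verified before you start. The following shell sketch assumes a hypothetical upgrade user and host names (bigsqladm, head-node, worker1); substitute your own values and run it as that user from the Ambari server.

# Confirm passwordless sudo on this node (repeat on every node, including the Ambari server).
sudo -n true && echo "passwordless sudo OK"

# Confirm passwordless ssh from the Ambari server to each Db2 Big SQL node.
for host in head-node worker1; do
    ssh -o BatchMode=yes "$host" hostname || echo "ssh to $host still prompts for a password"
done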
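A quick way to confirm the free-space requirements is to check the relevant file systems on each node; the paths below are the ones named in this topic.

# Free space for the temporary installation files, the Db2 installation, and the compressed backup.
df -h /tmp /usr /var/ibm/bigsql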
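The database size check can be run from a Db2 command line processor session on the head node. The sketch below assumes that the Db2 Big SQL database uses its default name, BIGSQL; if your database name differs, adjust the connect statement.

# Switch to the bigsql user on the existing head node.
su - bigsql
# Connect to the Db2 Big SQL database and call the size procedure; the second
# output value is the database size in bytes.
db2 connect to bigsql
db2 "call get_dbsize_info(?,?,?,-1)"
db2 connect reset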
About this task
You back up Db2 Big SQL metadata by using the Db2 Big SQL Python script bigsql_upgrade.py.
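The exact options that bigsql_upgrade.py accepts depend on your Db2 Big SQL release, so treat the following as an illustrative sketch rather than the definitive command; only the -a option is taken from the prerequisites above, and the user name is a placeholder. Check the script's own usage output for the operation-specific options before you run it.

# Run with root privileges through sudo, passing the non-root ssh user (if any)
# with the -a option described in the prerequisites.
sudo python bigsql_upgrade.py -a bigsqladm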
Procedure
Results
Db2 Big SQL is removed from Ambari. However, it is still operational; it is simply not running. If needed, you can start the service from the command line and use it. In that case, the version that runs is the initial Db2 Big SQL version.
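If you do need to start the pre-upgrade service from the command line as described above, it is typically done as the bigsql user. The commands below are a sketch and assume that the standard bigsql administration command is available on the bigsql user's PATH; verify against your installation before relying on them.

# Switch to the bigsql instance owner and start the (initial-version) service.
su - bigsql
bigsql start
bigsql status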