DB2 Advanced Copy Services: The scripted interface

DB2® Advanced Copy Services (DB2 ACS) supports taking snapshots for backup purposes in DB2 for Linux®, UNIX®, and Windows® databases. Customers can use the DB2 ACS API either through libraries implemented by their storage hardware vendors or by implementing the API themselves. However, implementing the C API of DB2 ACS requires a high degree of effort.

Joern Klauke (joern.klauke@de.ibm.com), Staff Development Engineer, IBM Deutschland Research and Development GmbH

Joern Klauke works as a software developer for SAP on DB2 for Linux, UNIX, and Windows at the IBM Research and Development Lab in Boeblingen (Germany). He has five years of experience with IBM and DB2 LUW assisting customers with best practices, problem analysis, and troubleshooting. He holds a degree in computer science from the Martin-Luther-University of Halle (Germany).



Martin Jungfer (martin.jungfer@de.ibm.com), Software Engineer, IBM Deutschland Research and Development GmbH

Martin Jungfer has many years of experience and deep knowledge of administering DB2 for Linux, UNIX, and Windows in SAP environments.



01 August 2013


Introduction

DB2 10.5 introduces a new feature called the scripted interface for DB2 Advanced Copy Services (DB2 ACS). It allows customers to implement shell scripts instead of C libraries. These scripts can use the tools provided by the storage vendors to run the snapshot operations. You can use the scripted interface independently of your storage hardware, and DB2 can support new storage hardware as soon as it becomes available on the market.

The feature supports all three architectures of DB2: enterprise server databases, multipartitioned databases using the database partitioning feature (DPF), and databases using pureScale. It is supported on all UNIX and Linux platforms on which DB2 is certified.

This series provides an introduction to the feature; the coming articles present some real-life examples. This article describes in detail the parts that make up the feature. The general setup is shown using the sample customer script provided by DB2.

Figure 1 shows the overall structure of the new scripted interface for DB2 ACS:

Figure 1. Structure of the new scripted interface for DB2 ACS

You can see the three important parts of this feature: the DB2 server, including the scripted interface for DB2 ACS; the protocol file repository; and the customer script. The DB2 server triggers the snapshot with the backup command and invokes the customer script. The protocol file repository holds the protocol files, where DB2 writes important information that the customer script can use to run a successful snapshot, among other things the database name and the paths that have to be copied during the backup. This article takes a closer look at all of these components and describes examples for the database JK1 in the instance db2jk1.


Protocol files and protocol file repository

The protocol file repository is a directory on the system where the snapshots take place. It can be any directory that meets the following requirements (a minimal setup sketch follows the list):

  • The directory must exist before the command is started.
  • The instance owner of the instance that owns the database has to be able to read and write in this directory.
  • There has to be enough space to hold the protocol files written during the operations.
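
As a minimal setup sketch (run as root; the instance owner db2jk1 matches the examples in this article, and the permissions shown are just one possible choice), the repository directory could be prepared like this:

mkdir /repository
chown db2jk1 /repository     # the instance owner must be able to read and write here
chmod 700 /repository        # keep the protocol files private to the instance owner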

You can use the following command to run a snapshot backup in DB2 through the scripted interface:

BACKUP DATABASE JK1
USE SNAPSHOT SCRIPT '$HOME/sqllib/samples/BARVendor/libacssc.sh'
OPTIONS '/repository'

You can run this command in both CLP and ADMIN_CMD:

CALL SYSPROC.ADMIN_CMD('BACKUP DATABASE JK1
USE SNAPSHOT SCRIPT ''$HOME/sqllib/samples/BARVendor/libacssc.sh''
OPTIONS ''/repository''')

In this command, $HOME/sqllib/samples/BARVendor/libacssc.sh is the location and name of the customer script (the sample customer script provided by DB2), and /repository is the directory for the protocol file repository. Note that you have to use absolute paths for both the script name and the repository. If you do not provide the repository, the directory where the customer script is located is used as the protocol file repository. If you want to provide further options to your customer script, use the OPTIONS clause as well; the first parameter is always parsed as the protocol file repository.
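
For example, to pass an additional, script-specific option after the repository (2ndoption is the placeholder that also appears in the protocol file examples of this article), you could issue the following command; everything after the first token is interpreted by the customer script only:

BACKUP DATABASE JK1
USE SNAPSHOT SCRIPT '/home/db2jk1/sqllib/samples/BARVendor/libacssc.sh'
OPTIONS '/repository 2ndoption'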

The protocol files serve the following two purposes:

  • Provide information about the database that DB2 needs for a restore. Therefore, it is also a good idea to back up these files.
  • Give information about the progress of the backup for investigation purposes (for example, the timestamps of the start and end of each function and the commands used to invoke the script, which start with "# cmd:").

The name of each protocol file is generated by DB2 with the structure shown in Figure 2:

Figure 2. Protocol file name generated by DB2

Both DB2 and the customer script write information to the protocol files, but at the moment no information written by the customer script is evaluated by DB2. If you want to write your own options to the protocol files, it is best to use USER as a common prefix to avoid future collisions with new DB2 options. DB2 uses key-value pairs for the options, separated by the = sign.
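
For example, inside the customer script you could record your own information with the helper function storeSetting (described later in this article); the key USER_SNAP_DEVICE and its value are purely hypothetical:

# Writes the line USER_SNAP_DEVICE=/dev/vg_db2/lv_data to the protocol file
storeSetting "USER_SNAP_DEVICE" "/dev/vg_db2/lv_data"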

During each operation, a new protocol file is written. The protocol files of backup operations are kept in both cases, success and failure. Protocol files of failed backups have to be removed manually from the repository to avoid issues during restore operations. Protocol files of other operations like delete, query, or restore are only kept in case of failure; you should remove them after your investigation.

The protocol files are divided into sections that reflect the function calls to the DB2 ACS API. Each section starts and ends with a timestamp to give an idea of how long each call took. Additionally, each section contains the information that would otherwise be provided through the DB2 ACS API during that function call. The following code shows the db2ACSBeginOperation call from a protocol file of a backup operation:

# =========================================================
# db2ACSBeginOperation(): BEGIN [Mon Apr 22 05:00:23 2013]
OPERATION=SNAPSHOT
# db2ACSBeginOperation(): END [Mon Apr 22 05:00:23 2013]
# =========================================================

You can see the name of the function, the two timestamps, and the operation specified (in this case, snapshot).

For the subsequent sections, the directory /repository is used as the protocol file repository.

Protocol files of all operations start with a common section for the function db2ACSInitialize that looks like Listing 1:

Listing 1. db2ACSInitialize function
# =========================================================
# db2ACSInitialize(): BEGIN [Mon Apr 22 05:00:23 2013]
EXTERNAL_SCRIPT=/home/db2jk1/sqllib/samples/BARVendor/libacssc.sh
EXTERNAL_OPTIONS=/repository 2ndoption
DB_NAME=JK1
INSTANCE=db2jk1
DBPARTNUM=0
SIGNATURE=SQL10050
# db2ACSInitialize(): END [Mon Apr 22 05:00:23 2013]
# =========================================================

As you can see, this section specifies the database name, the instance name, and the DB2 level of the instance (SIGNATURE) for which the command was issued. This code snippet is taken from a snapshot operation.


Detailed look at the sample customer script

The following section describes in detail which actions are taken by the sample customer script during each of the operations.

Each invocation looks like the following:

/home/db2jk1/sqllib/samples/BARVendor/libacssc.sh
-a prepare
-c /repository/db2acs.JK1.0.db2jk1.1366621223.cfg
/repository 2ndoption

The flag -a is followed by the action to be taken in that invocation; the flag -c is followed by the protocol file currently in use. The options that were given in the OPTIONS clause of the backup command are appended to the script command. Each command is also written to the protocol file by DB2.

As already mentioned, DB2 provides a sample customer script, named libacssc.sh, in the instance path $HOME/sqllib/samples/BARVendor to demonstrate the structure and possible usage. The examples in the next sections use this script. The script parses its options as soon as it is called, with the following lines:

while getopts a:c:o:t: OPTION
do
   case ${OPTION} in
      a) action=${OPTARG}
         ;;
      c) config=${OPTARG}
         ;;
      o) objectId=${OPTARG}
         ;;
      t) timestamp=${OPTARG}
         ;;
      \?) echo "# Unknown parameter '$1'"
   esac
done

The action option (-a) calls the corresponding function in the shell script as shown in the following code snippet, where the strings starting with do are the names of the functions:

case "$action" in
   prepare)
      doPrepare
      ;;
   snapshot)
      doSnapshot
      ;;
   restore)
      doRestore
      ;;
   delete)
      doDelete
      ;;
   verify)
      doVerify
      ;;
   store_metadata)
      doStoreMetaData
      ;;
   rollback)
      doRollback
      ;;
esac

Some actions that the customer script supports occur during more than one operation. For example, the prepare action occurs during all operations: snapshot, restore, query, and delete. For this purpose, the handlers for these actions should have the following structure so that they can run different steps for different operations:

getSetting "OPERATION"
operation=$_setting
case "$operation" in
 snapshot)
 ...
 ;;
 delete)
 ...
 ;;
 restore)
 ...
 ;;
 query)
 ...
 ;;
esac

Each action ends with a return code that is written to the protocol file by DB2 with the common prefix RC_; for example, if the prepare action is terminated in the customer script by exit 1, DB2 writes the line RC_PREPARE=4 to the protocol file.
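
As a purely illustrative sketch (the sample script does nothing in its prepare action; the df check, the repository path, and the 1 GB threshold are assumptions), a doPrepare could abort the operation like this, and DB2 would then record RC_PREPARE=4 in the protocol file:

doPrepare()
{
   # Abort if less than 1 GB is available in the repository file system
   freeKB=`df -Pk /repository | awk 'NR==2 { print $4 }'`
   if [ "$freeKB" -lt 1048576 ]
   then
      echo "# Not enough space in /repository"
      exit 1
   fi
}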

The sample customer script contains two helper functions. The function getSetting reads a certain option from a protocol file. The most important line in this function is the following:

cmd="awk -F= '/^${1}/ { print \$2 }' $useConfig | head -1"

It reads the complete protocol file with the help of the UNIX tool awk and parses for the given key name; the value is separated by the = sign. head -1 limits the output to the first match.
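
Put together, a minimal sketch of getSetting could look like the following; the sample script also accepts a default value and an alternative protocol file as further arguments (as in the calls getSetting "BACKUP_FILE" "" $oldConfig shown later), which is reproduced here in simplified form:

getSetting()
{
   # $1 = key, $2 = optional default value, $3 = optional protocol file
   useConfig=${3:-$config}
   cmd="awk -F= '/^${1}/ { print \$2 }' $useConfig | head -1"
   _setting=`eval $cmd`
   # Fall back to the default value if the key was not found
   if [ -z "$_setting" ]
   then
      _setting=$2
   fi
}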

The second function, storeSetting, writes options to the protocol file, separating key and value with the = sign, which makes it easy to keep this syntax consistent in the protocol files. It does this with the following line:

echo "$1=$2"

In general, the output of each echo command without a redirection target in the customer script is written to the protocol file. If you want to use another file for your debugging information, append a target to the echo commands:

echo "$1=$2" >> target_file

Snapshots

During the snapshot operation, the customer script is called four times: for prepare, snapshot, verify, and (depending on the result of the verify call) either store_metadata or rollback. Figure 3 shows an overview of the actions for the snapshot operation:

Figure 3. Action overview for the snapshot operation

When the customer script is called for the first time, the information in Listing 2 is already present in the protocol file, in addition to the information written by the db2ACSInitialize call shown in Listing 1:

Listing 2. Customer script called for the first time
# =========================================================
# db2ACSBeginOperation(): BEGIN [Mon Apr 22 05:00:23 2013]
OPERATION=SNAPSHOT
# db2ACSBeginOperation(): END [Mon Apr 22 05:00:23 2013]
# =========================================================
# db2ACSPartition(): BEGIN [Mon Apr 22 05:00:23 2013]
OBJ_HOST=hal9000
OBJ_OWNER=
OBJ_TYPE=SNAPSHOT
OBJ_DB2ID_LEVEL=0
OBJ_DB2ID_RELEASE=5
OBJ_DB2ID_VERSION=10
APP_OPTIONS=1100
TIMESTAMP=20130422050024
DB2BACKUP_MODE=ONLINE
DB2BACKUP_LOGS=INCLUDE
LOGPATH_PRIMARY=/home/db2jk1/db2jk1/NODE0000/SQL00001/LOGSTREAM0000/
DATAPATH_DB=/home/db2jk1/db2jk1/NODE0000/SQL00001/MEMBER0000/
DATAPATH_LOCAL_DB=/home/db2jk1/db2jk1/NODE0000/sqldbdir/
DATAPATH_DB=/home/db2jk1/db2jk1/NODE0000/SQL00001/
DATAPATH_AUTOSTORAGE=/home/db2jk1/db2jk1/NODE0000/JK1/
# db2ACSPartition(): END [Mon Apr 22 05:00:23 2013]
# =========================================================

The most important options start with either DATAPATH or LOGPATH; they show the data paths and the log directories that have to be included in the snapshot. As you can see, the type of data path is also shown, for example, automatic storage paths (DATAPATH_AUTOSTORAGE) and database paths (DATAPATH_DB). The same is true for log directories, which are specified as LOGPATH_PRIMARY (the primary log path) and LOGPATH_MIRROR (the mirrored log path, if it exists). Of course, log paths are only given if logs have to be included in the snapshot, as in this case. For the complete list of key names that might occur in the protocol files, see the table in the DB2 ACS protocol file section of the DB2 Information Center for DB2 10.5.

The option DB2BACKUP_MODE can take the values ONLINE and OFFLINE and reflects whether an online or offline snapshot is taken; the default is OFFLINE. DB2BACKUP_LOGS takes the value INCLUDE or EXCLUDE, depending on whether INCLUDE LOGS or EXCLUDE LOGS was specified in the DB2 backup command. The default value is INCLUDE (for both online and offline backups).

Prepare

After this information has been written to the protocol file, the customer script is invoked for the first time with the action prepare:

Listing 3. Customer script invoked for the first time with the action prepare
# =========================================================
# db2ACSPrepare(): BEGIN [Mon Apr 22 05:00:23 2013]
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh
   -a prepare
   -c /repository/db2acs.JK1.0.db2jk1.1366621223.cfg
   /repository 2ndoption
RC_PREPARE=0
# db2ACSPrepare(): END [Mon Apr 22 05:00:23 2013]
# =========================================================

After this, the next call is the snapshot call. Before that call, if the snapshot is taken online, write operations on the database are suspended; that is, WRITE SUSPEND is set automatically on the database. Therefore, the prepare call is the right place to prepare file systems, check space requirements in the storage system, and do similar work before the snapshot is actually taken. The sample customer script does nothing during this call.

Snapshot

Now that the preparation is done, DB2 can invoke the script to actually take the snapshot. Again, before the invocation, DB2 puts the database into WRITE SUSPEND mode if the backup mode is ONLINE. If EXCLUDE LOGS was specified, writing to the log files continues (for further information on the SET WRITE command, refer to the DB2 Information Center). If the customer script does not write anything to the protocol file, the section for db2ACSSnapshot looks like Listing 4:

Listing 4. db2ACSSnapshot function
# =========================================================
# db2ACSSnapshot(): BEGIN [Mon Apr 22 05:00:23 2013]
OBJ_ID=0
ACTION=DB2ACS_ACTION_WRITE
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh 
 -a snapshot 
 -c /repository/db2acs.JK1.0.db2jk1.1366621223.cfg
   /repository 2ndoption
RC_SNAPSHOT=0
# db2ACSSnapshot(): END [Mon Apr 22 05:00:25 2013]
# =========================================================

The sample customer script packs two archives: one for the data paths and one for the log paths, provided they exist and INCLUDE LOGS was specified.

To snapshot the data files, the script takes all paths whose option names start with DATAPATH and packs them into one tar archive with the following steps (a consolidated sketch follows the list):

  1. Construct the file name for the data files:
    file="${repository}${db_name}.0.${instance}.${dbpartnum}.${timestamp}.001.tar"

    As you can see, the database name, the instance name, the partition number, and the timestamp of the snapshot are used for the name. The names follow the naming convention of common DB2 backup images.
  2. Store the file name in the protocol file:
    storeSetting "BACKUP_FILE" $file
  3. Construct the command:
    cmd="awk -F= '/^DATAPATH/ { print \$2; }' $config | xargs tar -cf $file 2>/dev/null && echo 0 || echo 1"
  4. Write the command to the protocol file:
    echo "# cmd: $cmd"
  5. Run the command and read the return code:
    RC=`eval $cmd`
  6. Write the return code to the protocol file:
    echo "# backup tar created, rc=$RC"
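
Put together, the data-path portion of the snapshot action roughly looks like the following sketch; $config comes from the -c argument, and repository, db_name, instance, dbpartnum, and timestamp are assumed to have been derived earlier from the script arguments and the protocol file:

# Pack all DATAPATH entries of the protocol file into one tar archive
file="${repository}${db_name}.0.${instance}.${dbpartnum}.${timestamp}.001.tar"
storeSetting "BACKUP_FILE" $file
cmd="awk -F= '/^DATAPATH/ { print \$2; }' $config | xargs tar -cf $file 2>/dev/null && echo 0 || echo 1"
echo "# cmd: $cmd"
RC=`eval $cmd`
echo "# backup tar created, rc=$RC"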

Now the script has to take care of the log files. First, it determines whether log files have to be included by reading the value of the corresponding option:

getSetting "DB2BACKUP_LOGS"
includeLogs=$_setting

Depending on this result, actions are taken:

if [ $includeLogs = "INCLUDE" ]
then
   echo "# Logs to be included"

The name of the tar archive for the log files is constructed the same way as the name of the tar file for the data paths and is again stored in the protocol file. The tar file for the log files is also packed the same way as the tar file for the data paths; this time the values of the options starting with LOGPATH are taken:

  1. Construct the file name for the log files:
    logs="${repository}${db_name}.0.${instance}.${dbpartnum}.${timestamp}.log.tar"

    The name is similar to the name of the tar file for the data files, but with the string "log" instead of "001".
  2. Store the file name in the protocol file:
    storeSetting "BACKUP_LOGS" $logs
  3. Construct the command:
    cmd="awk -F= '/^LOGPATH/ { print \$2; }' $config | xargs tar -cf $logs 2>/dev/null && echo 0 || echo 1"
  4. Write the command to the protocol file:
    echo "# cmd: $cmd"
  5. Run the command and read the return code:
    RC=`eval $cmd`
  6. Write the return code to the protocol file:
    echo "# tar for logs created, rc=$RC"

Listing 5 is a snippet from a protocol file and shows the complete contents for the snapshot call including all comments written by the sample customer script:

Listing 5. Snippet from a protocol file
# =========================================================
# db2ACSSnapshot(): BEGIN [Mon Apr 22 05:00:23 2013]
OBJ_ID=0
ACTION=DB2ACS_ACTION_WRITE
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh
 -a snapshot
 -c /repository/db2acs.JK1.0.db2jk1.1366621223.cfg 
# /repository 2ndoption
BACKUP_FILE=/repository/JK1.0.db2jk1.0.20130422050024.001.tar
# cmd: awk -F= '/^DATAPATH/ { print $2; }' 
 /repository/db2acs.JK1.0.db2jk1.1366621223.cfg | 
 xargs tar -cf 
 /repository/JK1.0.db2jk1.0.20130422050024.001.tar 
   2>/dev/null && echo 0 || echo 1
# backup tar created, rc=0
# Logs to be included
BACKUP_LOGS=/repository/JK1.0.db2jk1.0.20130422050024.log.tar
# cmd: awk -F= '/^LOGPATH/ { print $2; }' 
 /repository/db2acs.JK1.0.db2jk1.1366621223.cfg
 | xargs tar -cf 
 /repository/JK1.0.db2jk1.0.20130422050024.log.tar 
   2>/dev/null && echo 0 || echo 1
# tar for logs created, rc=0
RC_SNAPSHOT=0
# db2ACSSnapshot(): END [Mon Apr 22 05:00:25 2013]
# =========================================================

Immediately after the customer script has finished the snapshot action and control is given back to DB2, write operations on the database are allowed again; that is, WRITE RESUME is set automatically if the backup mode is ONLINE.

Verify

The next call is the verify call. Its purpose is to check whether the snapshot was taken successfully. To demonstrate this, the sample customer script just checks whether the needed tar files exist (test with -f) and whether their size is greater than zero (test with -s). It checks the tar file of the data files with the following lines:

getSetting "BACKUP_FILE"
file=$_setting
if [ -f "$file" -a -s "$file" ]

If INCLUDE LOGS was specified, the script also checks whether the tar file of the log files exists, with the following lines:

getSetting "BACKUP_LOGS"
logs=$_setting
getSetting "DB2BACKUP_LOGS"
includeLogs=$_setting
if [ $includeLogs = "INCLUDE" ]
then
   if [ -f "$logs" -a -s "$logs" ]

If one of these criteria fails, RC_ERROR is returned.
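
Assembled from these fragments, a doVerify roughly looks like the following sketch; how the sample script actually reports the failure (RC_ERROR) is simplified here to a plain nonzero exit:

doVerify()
{
   # Check that the data tar file exists and is not empty
   getSetting "BACKUP_FILE"
   file=$_setting
   if [ -f "$file" -a -s "$file" ]
   then
      echo "# Backup '$file' exist"
   else
      exit 1   # simplification; the sample script signals RC_ERROR
   fi

   # If logs were included, check the log tar file as well
   getSetting "DB2BACKUP_LOGS"
   includeLogs=$_setting
   if [ $includeLogs = "INCLUDE" ]
   then
      getSetting "BACKUP_LOGS"
      logs=$_setting
      if [ -f "$logs" -a -s "$logs" ]
      then
         echo "# Logs '$logs' exist"
      else
         exit 1   # simplification; the sample script signals RC_ERROR
      fi
   fi
}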

If everything works correctly, output like that shown in Listing 6 is produced in the protocol file:

Listing 6. db2ACSVerify function
# =========================================================
# db2ACSVerify(): BEGIN [Mon Apr 22 05:00:25 2013]
FIRST_ACTIVE_LOG_ID=1
FIRST_ACTIVE_LOG_CHAIN=0
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh 
 -a verify
 -c /repository/db2acs.JK1.0.db2jk1.1366621223.cfg 
   /repository 2ndoption
# Backup '/repository/JK1.0.db2jk1.0.20130422050024.001.tar' exist
# Logs '/repository/JK1.0.db2jk1.0.20130422050024.log.tar' exist
RC_VERIFY=0
# db2ACSVerify(): END [Mon Apr 22 05:00:25 2013]
# =========================================================

Listing 6 contains two new options that describe the first active log file during the snapshot (FIRST_ACTIVE_LOG_ID) and the log chain it belongs to (FIRST_ACTIVE_LOG_CHAIN). Both options are needed in case of a restore, in particular during the rollforward. The success messages of the existence tests of the verify call in the sample customer script are also contained.

If verify completes successfully, store_metadata is called; if the verify call fails, rollback is called.

Storemetadata

Assuming the verify call returned success, the next call is store_metadata. This is the right place to save additional files that are needed and were produced during the snapshot; for example, it is a good idea to save the protocol file to the backup infrastructure: some further information will be written to it afterwards, but a restore will succeed with the file in its current state.
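
As a purely hypothetical sketch (the target directory /backup/acs_protocols is an assumption), a doStoreMetaData could copy the current protocol file to the backup infrastructure:

doStoreMetaData()
{
   # $config comes from the -c argument and names the current protocol file;
   # /backup/acs_protocols is a hypothetical backup location
   cp "$config" /backup/acs_protocols/
}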

Listing 7. db2ACSStoreMetaData function
# =========================================================
# db2ACSStoreMetaData(): BEGIN [Mon Apr 22 05:00:25 2013]
START_TIME=1366621224
METADATA_SIZE=12364
METADATA_CHECKSUM=20058929
METADATA=...
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh 
 -a store_metadata
 -c /repository/db2acs.JK1.0.db2jk1.1366621223.cfg 
   /repository 2ndoption
RC_STORE_METADATA=0
# db2ACSStoreMetaData(): END [Mon Apr 22 05:00:25 2013]
# =========================================================

In Listing 7, the METADATA option has the longest value. It consists of a memory block of DB2 that is encoded in Base64. Any change in this block makes the snapshot image unusable. So that DB2 can check its validity during the restore, the METADATA_CHECKSUM is added. METADATA_SIZE describes the size of this memory block.

The sample customer script does nothing during this call.

Rollback

If the verify call returned failure during the snapshot operation, the customer script is called with the action rollback, as shown in Listing 8. This is the right place to clean up everything that might have been created. The sample customer script extracts the needed names of the tar files from the protocol file and deletes them.

Listing 8. doRollback function
doRollback() {
   getSetting "BACKUP_FILE"
   oldBackup=$_setting
   echo "# Delete old backup file : $oldBackup"
   rm $oldBackup

   getSetting "DB2BACKUP_LOGS"
   includeLogs=$_setting
   if [ $includeLogs = "INCLUDE" ]
   then
      getSetting "BACKUP_LOGS"
      oldLogs=$_setting
      echo "# Delete old backup file : $oldLogs"
      # Delete old logs file
      rm $oldLogs
   fi
}

Terminating the snapshot

After all calls for the snapshot operation are done, DB2 terminates the session and the operation. It adds the return code of the complete operation (RC_OPERATION) to the protocol file. If this is 0, the operation completed successfully and the image can be used for restore; all other return codes disqualify the image for restores.

# =========================================================
# db2ACSEndOperation(): BEGIN [Mon Apr 22 05:00:25 2013]
RC_OPERATION=0
# db2ACSEndOperation(): END [Mon Apr 22 05:00:25 2013]
# =========================================================
# db2ACSTerminate(): BEGIN [Mon Apr 22 05:00:25 2013]
# db2ACSTerminate(): END [Mon Apr 22 05:00:25 2013]
# =========================================================

Restore

During restores, the customer script is called only three times: for prepare, for restore, and, if the restore failed, for rollback. The following snippets were taken from a restore that failed, because the protocol files of successful restores are not kept.

Note that during restore, query, and delete operations, two protocol files are opened: one that describes the current operation and a second one that is temporarily opened read-only and describes the snapshot image currently under consideration. In the subsequent sections, the latter is called the candidate protocol file.

Prepare

During the prepare phase, the customer script can provide additionally needed protocol files in the protocol file repository.
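
As a hypothetical sketch (the source directory /backup/acs_protocols and the copy itself are assumptions; the sample script does not implement this), a doPrepare could bring previously saved protocol files back into the repository for restore, query, and delete operations:

doPrepare()
{
   getSetting "OPERATION"
   operation=$_setting
   case "$operation" in
      RESTORE|QUERY|DELETE)
         # Hypothetical: copy saved protocol files back into the repository;
         # $repository is assumed to hold the protocol file repository path
         cp /backup/acs_protocols/db2acs.*.cfg "$repository" 2>/dev/null
         ;;
   esac
}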

Before calling the customer script, DB2 already provides some information about the database that has to be restored and the requested timestamp, as seen in Listing 9.

Listing 9. db2ACSBeginOperation function
# =========================================================
# db2ACSBeginOperation(): BEGIN [Tue May 21 09:26:54 2013]
OPERATION=RESTORE
# db2ACSBeginOperation(): END [Tue May 21 09:26:54 2013]
# =========================================================
# db2ACSBeginQuery(): BEGIN [Tue May 21 09:26:54 2013]
QUERY_TYPE=SNAPSHOT
QUERY_DBPARTNUM=0
QUERY_DB=JK1
QUERY_INSTANCE=*
QUERY_HOST=*
QUERY_OWNER=*
QUERY_TIMESTAMP=20130521092650
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh
 -a prepare
 -c /repository/db2acs.JK1.0.db2jk1.1369142814.cfg 
   /repository
RC_PREPARE=0
# db2ACSBeginQuery(): END [Tue May 21 09:26:54 2013]
# =========================================================

Querying

It is possible to specify only part of the exact timestamp in the restore command, for example, only the date. DB2 then filters for the images that match this partial timestamp (for example, if only the date was given, all images that were taken on that day). The candidates are listed in recurring db2ACSGetNextObject calls like the following:

# =========================================================
# db2ACSGetNextObject(): BEGIN [Tue May 21 09:26:54 2013]
RESULT_0_FILE=/repository/db2acs.JK1.0.db2jk1.1369142810.cfg
# Timestamp 20130521092650
# db2ACSGetNextObject(): END [Tue May 21 09:26:54 2013]
# =========================================================

The candidate protocol files are enumerated across these calls, so the next candidate would get the option RESULT_1_FILE.

After the last image is found, you see an empty call to this function and afterwards the end of the query with the db2ACSEndQuery call:

# =========================================================
# db2ACSGetNextObject(): BEGIN [Tue May 21 09:26:54 2013]
# db2ACSGetNextObject(): END [Tue May 21 09:26:54 2013]
# =========================================================
# db2ACSEndQuery(): BEGIN [Tue May 21 09:26:54 2013]
# db2ACSEndQuery(): END [Tue May 21 09:26:54 2013]
# =========================================================

DB2 chooses the latest image of the candidates and restores this. For example, if you asked for 20130521 and there are two images, the first one taken at 201305210900 and the second at 201305211000, the last one is restored.

After the candidate is chosen, DB2 retrieves the metadata from the candidate protocol file and decodes it, as shown here:

# =========================================================
# db2ACSRetrieveMetaData(): BEGIN [Tue May 21 09:26:54 2013]
GET_META_OBJ_ID=0
METADATA_DECODED_SIZE=9272
METADATA_CHECKSUM=-1834173632
# db2ACSRetrieveMetaData(): END [Tue May 21 09:26:54 2013]
# =========================================================

These functions are run by DB2 alone, without any interaction with the customer script.

Restore

During the restore call, the sample customer script simply unpacks the data tar file and, if necessary, also the log tar file. DB2 provides the option OBJ_ID before the restore call is made. With the help of this number, the protocol file of the backup image to be restored is retrieved:

  1. Read the OBJ_ID:
    getSetting "OBJ_ID"
    objectId=$_setting
  2. Construct the option to be retrieved:
    key="RESULT_${objectId}_FILE"
  3. Read this option from the current protocol file for the restore; the value of this option is the name of the protocol file of the backup image to be restored:
    getSetting $key
    oldConfig=$_setting

The following steps can now be performed:

  1. Read the name of the tar file that was used for the data files from the candidate protocol file:
    getSetting "BACKUP_FILE" "" $oldConfig
    oldBackup=$_setting
  2. Construct the command to unpack the tar file:
    cmd="tar -xf $oldBackup && echo 0 || echo 1"
  3. Write the command to the protocol file:
    echo "# cmd: $cmd"
  4. Run the command and track the return code:
    RC=`eval $cmd`
    echo "# tar extracted, rc=$RC"

During restores, DB2 provides the option ACTION in the protocol file. It can take two different values: DB2ACS_ACTION_READ_BY_OBJECT and DB2ACS_ACTION_READ_BY_GROUP. The first value means that the complete database has to be restored, including the log files; the second value means that only the data files have to be restored. The sample customer script handles this with the following steps (a consolidated sketch of the restore action follows these steps):

  1. Read this option:
    getSetting "ACTION"
    readAction=$_setting
  2. Check the value (and additionally check whether the unpacking of the data files completed successfully):
    if [ $readAction = "DB2ACS_ACTION_READ_BY_OBJECT" -a $RC -eq 0 ]
    then
  3. Run the corresponding commands to restore the log files, like for restoring the data files:
    getSetting "BACKUP_LOGS" "" $oldConfig
    oldLogs=$_setting
    cmd="tar -xf $oldLogs && echo 0 || echo 1"
    echo "# cmd: $cmd"
    RC=`eval $cmd`
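
Assembled from the steps above, the restore action roughly looks like the following sketch (simplified from the sample script):

doRestore()
{
   # Locate the candidate protocol file of the image to be restored
   getSetting "OBJ_ID"
   objectId=$_setting
   key="RESULT_${objectId}_FILE"
   getSetting $key
   oldConfig=$_setting

   # Unpack the data tar file recorded in the candidate protocol file
   getSetting "BACKUP_FILE" "" $oldConfig
   oldBackup=$_setting
   cmd="tar -xf $oldBackup && echo 0 || echo 1"
   echo "# cmd: $cmd"
   RC=`eval $cmd`
   echo "# tar extracted, rc=$RC"

   # Unpack the log tar file only when the complete database is requested
   getSetting "ACTION"
   readAction=$_setting
   if [ $readAction = "DB2ACS_ACTION_READ_BY_OBJECT" -a $RC -eq 0 ]
   then
      getSetting "BACKUP_LOGS" "" $oldConfig
      oldLogs=$_setting
      cmd="tar -xf $oldLogs && echo 0 || echo 1"
      echo "# cmd: $cmd"
      RC=`eval $cmd`
   fi
}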

The following snippet reflects these actions in one of the protocol files for restores:

# =========================================================
# db2ACSSnapshot(): BEGIN [Tue May 21 09:26:54 2013]
OBJ_ID=0
ACTION=DB2ACS_ACTION_READ_BY_GROUP
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh
 -a restore
 -c /repository/db2acs.JK1.0.db2jk1.1369142814.cfg 
   /repository
# cmd: tar -xf /repository/JK1.0.db2jk1.0.20130521092650.001.tar && echo 0 || echo 1
# tar extracted, rc=0
RC_RESTORE=4
# db2ACSSnapshot(): END [Tue May 21 09:26:54 2013]
# =========================================================

Rollback

If the restore call failed, as it did in this example, the customer script is again invoked by DB2 with the action rollback. The following code shows the call for the rollback action. For restores, the sample customer script does nothing during this call; it checks the operation with an if clause.

# =========================================================
# db2ACSEndOperation(): BEGIN [Tue May 21 09:26:54 2013]
RC_OPERATION=1
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh 
 -a rollback
   -c /repository/db2acs.JK1.0.db2jk1.1369142814.cfg /repository
RC_ROLLBACK=0
# db2ACSEndOperation(): END [Tue May 21 09:26:54 2013]
# =========================================================

The operation is finally terminated by DB2:

# =========================================================
# db2ACSTerminate(): BEGIN [Tue May 21 09:26:54 2013]
# db2ACSTerminate(): END [Tue May 21 09:26:54 2013]
# =========================================================

Query

You can use db2acsutil to query a certain protocol file repository or to delete images from it. The following syntax is now supported:

db2acsutil script "/home/db2jk1/sqllib/samples/BARVendor/libacssc.sh" options 
  "/repository" query status

The rules for the protocol file repository also apply to the db2acsutil command:

  • If options are provided, the first option is always used as the protocol file repository.
  • If no options are provided, the directory of the customer script is used.

All possible flags for db2acsutil are also supported; therefore, you can filter by database, instance, host name, database partition number and, of course, timestamp, as in the following command:

db2acsutil script 
  "/home/db2jk1/sqllib/samples/BARVendor/libacssc.sh" options "/repository"
query status
taken at 20130521 
database JK1 
instance db2jk1 
dbpartitionnum 0
show detail

The following shows a possible result for this:

                       Instance : db2jk1
                       Database : JK1
                      Partition : 0
                Image timestamp : 20130522051311
                           Host : hal9000
                          Owner :
                    DB2 Version : 10.5.0
                  Creation time : Wed May 22 05:13:11 2013
  First active log (chain:file) : 2:13
                 Metadata bytes : 12364
                 Progress state : Successful
                Usability state : Unknown
                Bytes completed : 0
                    Bytes total : 0

The output specifies the database and instance the snapshot was taken from and when it was taken. The progress state shows successful when the option RC_OPERATION has the value 0 in the protocol file, failed if RC_OPERATION has any other value, and in progress if it is not yet present in the protocol file. The usability state always shows unknown for images taken by the scripted interface because DB2 does not know whether the image is really usable.

The prepare action is the only action the customer script is invoked with during the query operation. The information given before that call is essentially the same as during the other prepare calls, as can be seen in the following code:

# =====================================================================
# db2ACSBeginQuery(): BEGIN [Wed May 22 08:17:40 2013]
QUERY_TYPE=ALL
QUERY_DBPARTNUM=-1
QUERY_DB=*
QUERY_INSTANCE=*
QUERY_HOST=*
QUERY_OWNER=*
QUERY_TIMESTAMP=*
OPERATION=QUERY
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh -a prepare -c
  /home/db2jk1/repository/db2acs.0.1369225060.cfg /home/db2jk1/repository 2ndoption
RC_PREPARE=0
# db2ACSBeginQuery(): END [Wed May 22 08:17:40 2013]
# =====================================================================

The prepare call can provide protocol files in the protocol file repository, for example by restoring them from the backup infrastructure, or perform any other preparatory actions. The prepare action is not implemented in the sample customer script. The db2ACSBeginQuery section already shows the criteria by which the candidates are filtered.

Afterwards, DB2 loops over all available protocol files and filters them by the given criteria, including the operation SNAPSHOT. The candidates DB2 considers are shown in the protocol file, for example, in the following code:

# =====================================================================
# db2ACSGetNextObject(): BEGIN [Wed May 22 05:13:28 2013]
RESULT_2_FILE=/home/db2jk1/repository/db2acs.JK1.0.db2jk1.1369214000.cfg
# Timestamp 20130522051322
# db2ACSGetNextObject(): END [Wed May 22 05:13:28 2013]
# =====================================================================

The output is given for all candidates that meet the criteria.

The operation is finally closed, shown here:

# =====================================================================
# db2ACSGetNextObject(): BEGIN [Wed May 22 08:17:40 2013]
# db2ACSGetNextObject(): END [Wed May 22 08:17:40 2013]
# =====================================================================
# db2ACSEndQuery(): BEGIN [Wed May 22 08:17:40 2013]
# db2ACSEndQuery(): END [Wed May 22 08:17:40 2013]
# =====================================================================
# db2ACSTerminate(): BEGIN [Wed May 22 08:17:40 2013]
# db2ACSTerminate(): END [Wed May 22 08:17:40 2013]
# =====================================================================

Delete

It is also possible to use db2acsutil to delete images taken with the scripted interface, using the following command:

db2acsutil script "$HOME/sqllib/samples/BARVendor/libacssc.sh" 
  options "/repository" delete

Again, all additional filter options apply.

With this command, the first action used to call the script is again prepare, which should take similar actions to those during restore or query:

# =====================================================================
# db2ACSBeginOperation(): BEGIN [Wed May 22 05:13:28 2013]
OPERATION=DELETE
# db2ACSBeginOperation(): END [Wed May 22 05:13:28 2013]
# =====================================================================
# db2ACSBeginQuery(): BEGIN [Wed May 22 05:13:28 2013]
QUERY_TYPE=ALL
QUERY_DBPARTNUM=-1
QUERY_DB=SAMPLE
QUERY_INSTANCE=*
QUERY_HOST=*
QUERY_OWNER=*
QUERY_TIMESTAMP=*
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh 
 -a prepare
 -c /repository/db2acs.0.1369214008.cfg 
   /repository
RC_PREPARE=0
# db2ACSBeginQuery(): END [Wed May 22 05:13:28 2013]
# =====================================================================

After the prepare call, DB2 again starts a query over all available protocol files and lists the candidates in the current protocol file:

# =====================================================================
# db2ACSGetNextObject(): BEGIN [Wed May 22 05:13:28 2013]
RESULT_0_FILE=/repository/db2acs.JK1.0.db2jk1.1369213989.cfg
# Timestamp 20130522051311
# db2ACSGetNextObject(): END [Wed May 22 05:13:28 2013]
# =====================================================================

Afterwards, DB2 again enters a loop and calls db2ACSDelete for every image to be deleted, which in turn invokes the customer script with the delete action:

# =====================================================================
# db2ACSDelete(): BEGIN [Wed May 22 05:13:29 2013]
DELETE_OBJ_ID=0
# cmd: /home/db2jk1/sqllib/samples/BARVendor/libacssc.sh
   -a delete
   -o 0
   -t 20130522051311
   -c /repository/db2acs.0.1369214008.cfg
   /repository
# Delete old backup file and logs: /repository/SAMPLE.0.db2jk1.0.20130522051311.001.tar
# Delete old backup file : /repository/SAMPLE.0.db2jk1.0.20130522051311.log.tar
RC_DELETE=4
# db2ACSDelete(): END [Wed May 22 05:13:29 2013]
# =====================================================================

To make the handling easier, this call takes two additional options: the flag -o gives the object ID of the image to be deleted in this action, and the timestamp of this image is provided with -t. The sample customer script essentially deletes the tar files with the following steps (a consolidated sketch follows the list):

  1. Construct the option to read the appropriate protocol file:
    key="RESULT_${objectId}_FILE"
  2. Read the option and parse the value:
    getSetting $key
    oldConfig=$_setting
  3. Read the name of the data tar file from this protocol file:
    getSetting "BACKUP_FILE" "" $oldConfig
    oldBackup=$_setting
  4. Delete the data tar file:
    rm $oldBackup
  5. Read the option DB2BACKUP_LOGS to check if logs were included:
    getSetting "DB2BACKUP_LOGS" "" $oldConfig
    includeLogs=$_setting
  6. If the log files were included:
    if [ $includeLogs = "INCLUDE" ]
    then
  7. Read the name of the tar file used for the logs:
    getSetting "BACKUP_LOGS" "" $oldConfig
    oldLogs=$_setting
  8. Remove the log files:
    rm $oldLogs
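
Assembled from these steps, the delete action roughly looks like the following sketch; objectId comes from the -o argument parsed at the beginning of the script:

doDelete()
{
   # Locate the candidate protocol file of the image to be deleted
   key="RESULT_${objectId}_FILE"
   getSetting $key
   oldConfig=$_setting

   # Remove the data tar file
   getSetting "BACKUP_FILE" "" $oldConfig
   oldBackup=$_setting
   rm $oldBackup

   # Remove the log tar file if logs were included in the image
   getSetting "DB2BACKUP_LOGS" "" $oldConfig
   includeLogs=$_setting
   if [ $includeLogs = "INCLUDE" ]
   then
      getSetting "BACKUP_LOGS" "" $oldConfig
      oldLogs=$_setting
      rm $oldLogs
   fi
}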

After each successful invocation of the delete action to the customer script, the corresponding protocol file is deleted by DB2.

At the end, the delete operation is terminated, which is also shown in the protocol files, as in the following snippet:

# =====================================================================
# db2ACSEndQuery(): BEGIN [Wed May 22 05:13:29 2013]
# db2ACSEndQuery(): END [Wed May 22 05:13:29 2013]
# =====================================================================
# db2ACSEndOperation(): BEGIN [Wed May 22 05:13:29 2013]
RC_OPERATION=0
# db2ACSEndOperation(): END [Wed May 22 05:13:29 2013]
# =====================================================================
# db2ACSTerminate(): BEGIN [Wed May 22 05:13:29 2013]
# db2ACSTerminate(): END [Wed May 22 05:13:29 2013]
# =====================================================================

Conclusion

This article introduced the new scripted interface for DB2 Advanced Copy Services. It explained the protocol files, the customer scripts, and every operation in detail. The next parts in this series will take a look at more real-world examples.
