Configuring DB2 pureScale for backup and restore using Storwize V7000 V6.4.0.1 FlashCopy

The DB2® pureScale® feature leverages the IBM® Storwize® V7000 storage system FlashCopy® services to meet critical customer requirements efficiently. The FlashCopy function enables you to make point-in-time, full-volume copies of data, with the copies immediately available for read or write access. Read on to learn the steps required to deploy a FlashCopy solution with the Storwize V7000 storage subsystem.


Aslam Nomani (aslam@ca.ibm.com), STSM, DB2 Quality Assurance, IBM Toronto Lab

Aslam Nomani has been working with the Database Technology (DBT) team in the IBM Toronto Laboratory for five years. For the past four years, he has worked in the DB2 Universal Database (UDB) System Verification Test department. Aslam has worked extensively in testing DB2 Universal Database in high availability environments. He is currently a team lead within the DBT test organization.



Saroj Tripathy (stripat2@in.ibm.com), Advisory Software Engineer, IBM

Saroj Kumar Tripathy has been working in system verification testing and functional verification testing for DB2 at IBM since 2010. He is an IBM DB2 Certified Advanced DBA. His areas of expertise are DB2 high availability, HADR, and security solutions.



Sumair Kayani (skayani@ca.ibm.com), System Engineer, System Optimization and Competency Center, IBM

Sumair Kayani is a systems engineer at the IBM Toronto lab. As a member of the Systems Optimization Competency Center, his responsibilities include optimization of the software and hardware components that comprise IBM integrated systems. His role as a systems administrator has focused on System p and storage area networks.



Madhusudan KJ (madhusudankj@in.ibm.com), DB2 Software Development, IBM

Madhusudan KJ has been working in System Verification Testing for DB2 since 2006. He is an IBM DB2 Certified Advanced DBA. He has special interest in DB2 high availability, backup, and recovery solutions.



Ian Boden (ianboden@uk.ibm.com), SAN Volume Controller and Storwize V7000 Software Development, IBM

Ian Boden joined IBM in 2007 after completing his master's in computer systems and software engineering. For the past five years, he has been working on IBM SAN Volume Controller and Storwize V7000. His areas of expertise are cache algorithms and performance.



03 January 2013


Introduction

In today's highly competitive marketplace, it is important to deploy a data processing architecture that not only meets your immediate tactical needs but also provides the flexibility to grow and adapt to your future strategic requirements. In December 2009, IBM introduced the DB2 pureScale Feature for Enterprise Server Edition (DB2 pureScale Feature). The DB2 pureScale Feature provides an active-active shared-disk database implementation based on the DB2 for z/OS® data-sharing architecture, bringing proven technology from DB2 on the mainframe to open systems. The DB2 pureScale Feature meets the needs of many customers by providing the following key benefits:

  • Virtually unlimited capacity: DB2 pureScale provides practically unlimited capacity by allowing addition and removal of members on demand. DB2 pureScale can scale to 128 members and has a highly efficient centralized management facility that allows for superior scale-out compared to peer-to-peer models. DB2 pureScale also leverages a technology called Remote Direct Memory Access (RDMA), which provides a highly efficient inter-node communication mechanism that contributes to this superior scaling.
  • Application transparency: An application that runs in a DB2 pureScale environment does not need to have any knowledge of the different members in the cluster or of partitioning data. The DB2 pureScale Feature automatically routes applications to the members deemed most appropriate. The DB2 pureScale Feature also provides native support for syntax that other database vendors use, which enables those applications to run in a DB2 pureScale environment with minimal or no changes.
  • Continuous availability: The DB2 pureScale Feature provides a fully active-active configuration such that if one member goes down, processing continues at the remaining active members. During a failure, only data being modified on the failing member is temporarily unavailable until database recovery completes for that set of data, which is very quick. This is an advantage over some competing solutions in which an entire system freeze occurs as part of the database recovery process.
  • Reduced total cost of ownership: The DB2 pureScale interfaces easily handle the deployment and maintenance of components integrated within the DB2 pureScale Feature. This helps reduce the steep learning curves associated with deploying and maintaining some competing technologies.

The DB2 pureScale Feature provides a local high-availability solution while also addressing many other customer business scenarios, such as:

  • The need to have consistent backups of dynamically changing data.
  • The need to have consistent copies of production data to facilitate data movement or migration between hosts.
  • The need to have consistent copies of production data to facilitate application development and testing requirements.
  • The need to have copies of production data sets for auditing, quality assurance, and data mining purposes.

The DB2 pureScale feature leverages the IBM Storwize V7000 storage system FlashCopy services to meet the above customer requirements in an extremely efficient manner.

The FlashCopy function enables you to make point-in-time, full-volume copies of data, with the copies immediately available for read or write access.

FlashCopy creates a copy of a source volume on the target volume. This copy is called a point-in-time copy. When you initiate a FlashCopy operation, a FlashCopy relationship is created between a source volume and target volume. A FlashCopy relationship is a mapping of the FlashCopy source volume and a FlashCopy target volume. This mapping allows a point-in-time copy of that source volume to be copied to the associated target volume. The FlashCopy relationship exists between this volume pair from the time you initiate a FlashCopy operation until the storage unit copies all data from the source volume to the target volume or you delete the FlashCopy relationship, if it is a persistent FlashCopy.
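To make this lifecycle concrete, here is a minimal sketch using the V7000 CLI; the volume names vol-src and vol-tgt and the mapping name fcmap_demo are hypothetical and are not part of the configuration used later in this article:

#Create a stand-alone FlashCopy mapping between two volumes
V7000> svctask mkfcmap -source vol-src -target vol-tgt -name fcmap_demo

#Prepare and start the mapping, which takes the point-in-time copy
V7000> svctask startfcmap -prep fcmap_demo

#Delete the mapping when the relationship is no longer needed
V7000> svctask rmfcmap fcmap_demo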

When the data is physically copied, a background process copies tracks from the source volume to the target volume. The time required to complete the background copy depends on the amount of data being copied, the number of concurrent background copy processes, and other activity on the storage server. The copy rate can be set at FlashCopy creation time or modified later.
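For example, assuming the mapping db2fcmap0 created in the deployment steps below, its background copy rate could later be changed with the chfcmap command; the new rate of 80 is purely illustrative:

#Change the background copy rate of an existing FlashCopy mapping
V7000> svctask chfcmap -copyrate 80 db2fcmap0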

This article describes the steps required to deploy a FlashCopy solution with the Storwize V7000 storage subsystem. A similar solution can also be deployed using comparable point-in-time copy technology from other storage vendors.


Understanding the solution configuration

In the example FlashCopy solution presented in this article, a DB2 pureScale instance is active and online at the primary server. Transactions run against the database at the primary server while FlashCopy copies the data to the target server. The copies are performed at the block level; the copy operates below the host operating system and its cache and is therefore transparent to the host.

If a complete failure occurs on the primary server, the DB2 pureScale instance at the target server can be brought online to handle application requests while the primary server is unavailable. Figure 1 shows the basic topology of the FlashCopy configuration used in this example.

Figure 1. Topology overview
  • An identical DB2 instance called db2inst1 is created at the primary server and the target server.
  • The DB2 instance at each site consists of two members and two CFs (cluster caching facilities).
  • A Storwize V7000 storage subsystem is deployed at the primary site. In Figure 1, it is logically shown as two V7000 systems, but it is a single system shared between the primary and target servers.
  • hdisk6 stores instance files in the home directory of the DB2 instance owner. This disk is not mirrored across servers.
  • hdisk7 is leveraged by DB2 pureScale cluster services. This disk is not mirrored across servers.
  • hdisk8 stores the data associated with a DB2 database. The file system created on this disk is called /db2fs/db2datafs. This disk is mirrored across servers.
  • hdisk9 stores the transaction logs associated with a DB2 database. The file system created on this disk is called /db2fs/db2logfs. This disk is mirrored across servers.
  • A storage-level consistency group is defined across hdisk8 and hdisk9 to guarantee that a single point-in-time copy of the data is seen at the target site upon any failure.

Deploying the DB2 pureScale Feature with FlashCopy

The following steps explain how to deploy the FlashCopy functionality with the DB2 pureScale Feature so that hdisk8 and hdisk9 are mirrored from the primary server to the target server.

Step 1: Create the consistency group

From the V7000 command-line interface, first create the consistency groups for the FlashCopy. This ensures a consistent point-in-time copy across all disks being mirrored. Here, you create two consistency groups: one (GRP1) from the primary server to the target server, and the other (GRP1-REVERSE) from the target back to the primary.

Listing 1. Syntax for create consistency group
#Create Consistency Group for flashcopy
V7000>  svctask mkfcconsistgrp -name GRP1
                
#Create Consistency Group for restore
V7000> svctask mkfcconsistgrp -name GRP1-REVERSE

Step 2: Create the FlashCopy mappings

Create the FlashCopy mappings that map a source disk to a target disk for subsequent copying, then add each mapping to the appropriate consistency group.

# From source vdisk (primary-s1) to target vdisk (primary-t1)
V7000> svctask mkfcmap -cleanrate 50 -consistgrp GRP1 -copyrate 50 -incremental 
-source primary-s1 -target primary-t1 -name db2fcmap0

Where:

  • primary-s1 is the name of the source VDisk.
  • primary-t1 is the name of the VDisk you want to make the target VDisk.
  • db2fcmap0 is the name of the FlashCopy mapping.
  • GRP1 is the consistency group name.
  • 50 is the clean rate.
  • 50 is the copy rate, and -incremental makes the FlashCopy mapping incremental: the initial copy transfers all the data from the source volume to the target volume, and subsequent copies transfer only data modified since the previous copy. This reduces the amount of time it takes to re-create an independent FlashCopy image.
# From target vdisk (primary-t1) to source vdisk (primary-s1)
V7000> svctask mkfcmap -cleanrate 50 -consistgrp GRP1-REVERSE -copyrate 50 -incremental 
-source primary-t1 -target primary-s1 -name db2fcmap1

Where:

  • primary-t1 is the name of the source VDisk.
  • primary-s1 is the name of the VDisk you want to make the target VDisk.
  • db2fcmap1 is the name of the FlashCopy mapping.
  • GRP1-REVERSE is the consistency group name.
  • 50 is the clean rate.
  • 50 is the copy rate and the FlashCopy mapping is incremental.

Step 3: Check the status of the FlashCopy mapping

Listing 2 checks the attributes of the FlashCopy mappings created in step 2. In the output, the progress column indicates what percentage of the data has been copied to the target disk.

Listing 2. Syntax to check FlashCopy mappings
V7000> svcinfo lsfcmap
id  name       source_vdisk_id  source_vdisk_name  target_vdisk_id  target_vdisk_name  group_id
0   db2fcmap0  0                primary-s1         4                primary-t1         1
1   db2fcmap1  4                primary-t1         0                primary-s1         2
2   db2fcmap2  5                primary-log        6                primary-log-t1     1
3   db2fcmap3  6                primary-log-t1     5                primary-log        2

group_name    status          progress  copy_rate  clean_progress  incremental
GRP1          idle_or_copied  0         50         100             on
GRP1-REVERSE  idle_or_copied  0         50         100             on
GRP1          idle_or_copied  0         50         100             on
GRP1-REVERSE  idle_or_copied  0         50         100             on

partner_FC_id  partner_FC_name  restoring  start_time  rc_controlled
1              db2fcmap1        no                     no
0              db2fcmap0        no                     no
3              db2fcmap3        no                     no
2              db2fcmap2        no                     no

Step 4: Creating the db2inst1 instance on the primary server

Create the db2inst1 instance on the primary server with two DB2 members and two CFs. The steps below show how to do this using the command line, but you can complete the same task from the DB2 graphical installer.

# db2icrt -d -instance_shared_dev /dev/hdisk6 -tbdev /dev/hdisk7 -m hosta:hosta-ib0 
-cf hostc:hostc-ib0 -u db2inst1 db2inst1
# db2iupdt -d -add -m hostb:hostb-ib0 db2inst1
# db2iupdt -d -add -cf hostd:hostd-ib0 db2inst1

Step 5: Creating the db2inst1 instance on the target server

Create the db2inst1 instance on the target server with two DB2 members and two CFs as shown below.

# db2icrt -d -instance_shared_dev /dev/hdisk6 -tbdev /dev/hdisk7 -m hostw:hostw-ib0 
-cf hosty:hosty-ib0 -u db2inst1 db2inst1
# db2iupdt -d -add -m hostx:hostx-ib0 db2inst1
# db2iupdt -d -add -cf hostz:hostz-ib0 db2inst1

Note that any DB2 configuration parameter changes made at the primary server must be manually applied to the target server. Also, any required files that do not reside on the mirrored disks, including binary files from the creation of stored procedures, must be manually copied to the target server.
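For example, a change to a database configuration parameter such as LOGFILSIZ made at the primary server would have to be repeated manually at the target server once the database is accessible there; the value 10240 is arbitrary, and database_alias stands for your database name:

#On the primary server, as db2inst1
$ db2 update db cfg for database_alias using LOGFILSIZ 10240

#On the target server, repeat the identical change and verify it
$ db2 update db cfg for database_alias using LOGFILSIZ 10240
$ db2 get db cfg for database_alias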

Step 6: Creating the file system at the primary server

Create the file system for the DB2 data and transaction logs at the primary server, and then change the owner to the DB2 instance owner, as shown below.

Listing 3. Syntax for creating DB2 file system
# db2cluster -cfs -start -all
# db2cluster -cfs -create -filesystem db2datafs -disk /dev/hdisk8
# db2cluster -cfs -create -filesystem db2logfs -disk /dev/hdisk9
# chown db2inst1:db2adm /db2fs/db2datafs
# chown db2inst1:db2adm /db2fs/db2logfs

Step 7: Configure RSH and SSH

Configure passwordless RSH and SSH for root and for the instance user (db2inst1) between all hosts of the primary server and the target server.
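One common way to set up the SSH portion, assuming OpenSSH is installed on all hosts, is sketched below; run it as root and as db2inst1, repeating for each host pair (hostw is used as an example):

#Generate a key pair if one does not already exist
$ ssh-keygen -t rsa

#Append the public key to authorized_keys on the remote host
$ cat ~/.ssh/id_rsa.pub | ssh db2inst1@hostw 'cat >> ~/.ssh/authorized_keys'

#Verify that the login completes without a password prompt
$ ssh db2inst1@hostw hostname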

Step 8: Synchronize the file system definition

Synchronize the file system definitions from the primary server to the target server as shown below. Execute the commands from the target server (from hostw). This synchronization only needs to occur when the definition of the file system changes.

$ db2stop force
# /usr/lpp/mmfs/bin/mmshutdown -a

Step 8a: Creating remotenodefile

Create a file called remotenodefile with the entries as listed below.

hostw
hostx 
hosty 
hostz

Step 8b: Exporting the file system definitions

Run the commands as shown below to export the file system definitions from the primary server to the target server.

# /usr/lpp/mmfs/bin/mmfsctl db2datafs syncFSconfig -n remotenodefile
# /usr/lpp/mmfs/bin/mmfsctl db2logfs syncFSconfig -n remotenodefile

Step 9: Validating the propagation of the file system definitions

Validate the propagation of the file system definitions to the target server by running the commands as shown below from hostw. The mmlsnsd command shows that the file system definitions for the file systems db2datafs and db2logfs now exist on the target server. Once the validation is complete, stop the file system.

# /usr/lpp/mmfs/bin/mmstartup -a
# /usr/lpp/mmfs/bin/mmlsnsd
# /usr/lpp/mmfs/bin/mmshutdown -a

On the primary server, an application workload can now run against the database.
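For illustration only, a trivial workload against a hypothetical database MYDB created on /db2fs/db2datafs might look like the following:

$ db2 connect to MYDB
$ db2 "create table t1 (id integer, payload varchar(100))"
$ db2 "insert into t1 values (1, 'sample row')"
$ db2 "select count(*) from t1"
$ db2 terminate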


Database backup

Customer databases grow along with their companies, and backing up a very large database with conventional tools can take long enough to interfere with running the database system. Customers therefore need fast database backups that do not affect the database or its applications, and they need high availability even in a disaster situation.

If you would rather not back up a large database using the DB2 backup utility, you can make copies from a mirrored image by using the DB2 write suspend feature together with the FlashCopy feature of the IBM Storwize V7000 storage subsystem.

Four typical scenarios in which a customer performs a database backup are discussed below. Using the IBM Storwize V7000 FlashCopy feature, it is possible to make dozens of copies in different configurations with virtually no impact to the production database.

  • Scenario 1: Snapshot database scenario
  • Scenario 2: Standby database scenario
  • Scenario 3: Split-Mirror online backup scenario
  • Scenario 4: Offload database scenario

While these backup scenarios performed as expected under significant load, it is recommended that the backup be performed at times of minimal load (such as during the night).
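If the copies are scheduled, a crontab entry on the host that drives the backup can invoke a wrapper script during a quiet window; the script name below is hypothetical:

#Run the FlashCopy backup sequence every night at 2 a.m.
0 2 * * * /usr/local/bin/db2_flashcopy_backup.sh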


Scenario 1: Snapshot database scenario

In this scenario, a point-in-time view of the production data is created. The snapshot is not intended to be an independent copy but is used to maintain a view of the production data at the time the snapshot is created. The steps for performing the backup are as follows:

  1. From the target server host hostw, stop the standby database, stop the cluster manager on all the hosts, and unmount both the file systems (db2datafs, db2logfs).
    Listing 4. Syntax for stopping DB2 and GPFS
    $ db2stop
    # db2cluster -cm -stop -host hostw hostx hosty hostz -force
    # /usr/lpp/mmfs/bin/mmumount db2datafs -a
    # /usr/lpp/mmfs/bin/mmumount db2logfs -a
  2. From the primary server host hosta, flush the file system buffers.
    $ db2 flush bufferpools all
  3. Quiesce the primary server database write I/O from one of the members.
    $ db2 set write suspend for database
  4. Suspend write I/O on the GPFS volumes; read access is still allowed.
    # /usr/lpp/mmfs/bin/mmfsctl db2datafs suspend-write
    # /usr/lpp/mmfs/bin/mmfsctl db2logfs suspend-write
  5. Perform the FlashCopy operation on the primary server host:
    Listing 5. Syntax for FlashCopy operation
    i. Prepare the consistency group for flashcopy
    V7000> svctask prestartfcconsistgrp GRP1
                            
    ii. Confirm consistency group status has changed to 'prepared'
    V7000> svcinfo lsfcconsistgrp
    id  name          status
    1   GRP1          prepared
    2   GRP1-REVERSE  idle_or_copied
    
    iii. Start Flashcopy from Source to Target
    V7000> svctask startfcconsistgrp GRP1
                            
    iv. Query the status of the copy process
    V7000> svcinfo lsfcmap
    id  name       source_vdisk_id  source_vdisk_name  target_vdisk_id  target_vdisk_name  group_id
    0   db2fcmap0  0                primary-s1         4                primary-t1         1
    1   db2fcmap1  4                primary-t1         0                primary-s1         2
    2   db2fcmap2  5                primary-log        6                primary-log-t1     1
    3   db2fcmap3  6                primary-log-t1     5                primary-log        2

    group_name    status          progress  copy_rate  clean_progress  incremental
    GRP1          copying         1         100        100             on
    GRP1-REVERSE  idle_or_copied  0         100        100             on
    GRP1          copying         2         100        100             on
    GRP1-REVERSE  idle_or_copied  0         100        100             on

    partner_FC_id  partner_FC_name  restoring  start_time    rc_controlled
    1              db2fcmap1        no         120614151729  no
    0              db2fcmap0        no                       no
    3              db2fcmap3        no         120614151729  no
    2              db2fcmap2        no                       no

    The progress column indicates the percentage of the copy operation completed. From the above output, you can see that the data copy is 1 percent complete, while the log copy is 2 percent complete. Progress will vary depending on the volume size and copy duration.

  6. A copy of the data is now available. Resume write I/O on the GPFS volumes.
    # /usr/lpp/mmfs/bin/mmfsctl db2datafs resume
    # /usr/lpp/mmfs/bin/mmfsctl db2logfs resume
  7. Resume write I/O on the primary server database using the same connection from step 3.
    $ db2 set write resume for database
  8. From the target server, start GPFS on all the hosts and mount both the file systems (db2datafs, db2logfs).
    # /usr/lpp/mmfs/bin/mmstartup -a
    # /usr/lpp/mmfs/bin/mmmount db2datafs -a
    # /usr/lpp/mmfs/bin/mmmount db2logfs -a
  9. The first time the database is accessed at the target server, catalog the database as user db2inst1 so that the instance on the target server can see it.
    $ db2 catalog database database_alias on /db2fs/db2datafs
  10. Start DB2 on the target server with the db2inst1 user ID and initialize the copy of the suspended database through the db2inidb command.
    $ db2start
    $ db2inidb database_alias as SNAPSHOT

    The database at the target server is now fully accessible and contains all committed updates that occurred on the primary server up to the most recent log records available on the standby system.
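    As a quick sanity check from the target server, you can connect and run a simple query (database_alias as cataloged in step 9):

    $ db2 connect to database_alias
    $ db2 "select count(*) from syscat.tables"
    $ db2 terminate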


Scenario 2: Standby database scenario

In this scenario, a point-in-time replica of the production data will be created. After the copy completes, the target view can be refreshed from the production data, with minimal copying of data from the production volume to the backup volume.

The steps for performing the backup are as follows:

  1. From the target server host hostw, stop the standby database, stop the cluster manager on all the hosts and unmount both the file systems (db2datafs, db2logfs).
    Listing 6. Syntax for stopping DB2 and GPFS
    $ db2stop
    # db2cluster -cm -stop -host hostw hostx hosty hostz -force
    # /usr/lpp/mmfs/bin/mmumount db2datafs -a
    # /usr/lpp/mmfs/bin/mmumount db2logfs -a
  2. From the primary server host hosta, flush the file system buffers.
    $ db2 flush bufferpools all
  3. Quiesce the primary server database write I/O from one of the members.
    $ db2 set write suspend for database
  4. Suspend write I/O on the GPFS volumes; read access is still allowed.
    # /usr/lpp/mmfs/bin/mmfsctl db2datafs suspend-write
    # /usr/lpp/mmfs/bin/mmfsctl db2logfs suspend-write
  5. Perform the FlashCopy operation on the primary server host:
    Listing 7. Syntax for FlashCopy operation
    i. Prepare the consistency group for flashcopy
    V7000> svctask prestartfcconsistgrp GRP1
                            
    ii. Confirm consistency group status has changed to 'prepared'
    V7000> svcinfo lsfcconsistgrp
    id  name          status
    1   GRP1          prepared
    2   GRP1-REVERSE  idle_or_copied
                            
    iii. Start Flashcopy from Source to Target
    V7000> svctask startfcconsistgrp GRP1
                            
    iv. Query the status of the copy process
    V7000> svcinfo lsfcmap
    id  name       source_vdisk_id  source_vdisk_name  target_vdisk_id  target_vdisk_name  group_id
    0   db2fcmap0  0                primary-s1         4                primary-t1         1
    1   db2fcmap1  4                primary-t1         0                primary-s1         2
    2   db2fcmap2  5                primary-log        6                primary-log-t1     1
    3   db2fcmap3  6                primary-log-t1     5                primary-log        2

    group_name    status          progress  copy_rate  clean_progress  incremental
    GRP1          copying         2         50         100             on
    GRP1-REVERSE  idle_or_copied  0         50         100             on
    GRP1          copying         3         50         100             on
    GRP1-REVERSE  idle_or_copied  0         50         100             on

    partner_FC_id  partner_FC_name  restoring  start_time    rc_controlled
    1              db2fcmap1        no         120614153011  no
    0              db2fcmap0        no                       no
    3              db2fcmap3        no         120614153011  no
    2              db2fcmap2        no                       no

    The progress column indicates the percentage of the copy operation completed. From the above output, you can see that the data copy is 2 percent complete, while the log copy is 3 percent complete.

  6. A copy of the data is now available. Resume write I/O on the GPFS volumes.
    # /usr/lpp/mmfs/bin/mmfsctl db2datafs resume
    # /usr/lpp/mmfs/bin/mmfsctl db2logfs resume
  7. Resume write I/O on the primary server database using the same connection from step 3.
    $ db2 set write resume for database
  8. From the target server, start GPFS on all the hosts and mount both the file systems (db2datafs, db2logfs).
    # /usr/lpp/mmfs/bin/mmstartup -a
    # /usr/lpp/mmfs/bin/mmmount db2datafs -a
    # /usr/lpp/mmfs/bin/mmmount db2logfs -a
  9. The first time the database is accessed at the target server, catalog the database as user db2inst1 so that the instance on the target server can see it.
    $ db2 catalog database database_alias on /db2fs/db2datafs
  10. Start DB2 on the target server with the db2inst1 user ID and initialize the copy of the suspended database through the db2inidb command.
    $ db2start
    $ db2inidb database_alias as STANDBY
  11. Perform a roll forward of the database to the end of logs or to a specific point in time. To take the standby database out of the roll-forward state, use the stop option of the rollforward command.
    $ db2 rollforward database database_alias to end of logs and stop

    The database at the target server is now fully accessible and contains all committed updates that occurred on the primary server up to the most recent log records available on the standby system.
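    For a long roll forward, progress can be monitored from another session before the stop option is issued:

    $ db2 rollforward database database_alias query status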


Scenario 3: Split-mirror online backup scenario

If the primary database fails and becomes unusable, the FlashCopy volumes at the target server can be copied back to the primary volumes, except for the database log files. This scenario uses reverse FlashCopy from the target server volumes to the primary server volumes. Initializing the copied database as a mirror places the database in a roll-forward pending state. DB2 then performs roll-forward recovery using the old log files as they existed on the primary server host before the database failure.

The steps for performing the backup are as follows:

  1. From the primary server host hosta, stop the primary database, stop the cluster manager on all the hosts and unmount both the file systems (db2datafs, db2logfs).
    Listing 8. Syntax for stopping DB2 and GPFS
    $ db2stop
    # db2cluster -cm -stop -host hosta hostb hostc hostd -force
    # /usr/lpp/mmfs/bin/mmumount db2datafs -a
    # /usr/lpp/mmfs/bin/mmumount db2logfs -a
  2. From the target server host hostw, flush the file system buffers.
    $ db2 flush bufferpools all
  3. Quiesce the target server database write I/O from one of the members.
    $ db2 set write suspend for database
  4. Suspend write I/O on the GPFS volumes; read access is still allowed.
    # /usr/lpp/mmfs/bin/mmfsctl db2datafs suspend-write
    # /usr/lpp/mmfs/bin/mmfsctl db2logfs suspend-write
  5. Perform the reverse FlashCopy operation:
    Listing 9. Syntax for FlashCopy operation
    i. Prepare the consistency group for flashcopy
    V7000> svctask prestartfcconsistgrp GRP1-REVERSE

    ii. Confirm consistency group status has changed to 'prepared'
    V7000> svcinfo lsfcconsistgrp
    id  name          status
    1   GRP1          idle_or_copied
    2   GRP1-REVERSE  prepared

    iii. Start reverse Flashcopy from Target to Source
    V7000> svctask startfcconsistgrp GRP1-REVERSE

    iv. Query the status of the copy process
    V7000> svcinfo lsfcmap
    id  name       source_vdisk_id  source_vdisk_name  target_vdisk_id  target_vdisk_name  group_id
    0   db2fcmap0  0                primary-s1         4                primary-t1         1
    1   db2fcmap1  4                primary-t1         0                primary-s1         2
    2   db2fcmap2  5                primary-log        6                primary-log-t1     1
    3   db2fcmap3  6                primary-log-t1     5                primary-log        2

    group_name    status          progress  copy_rate  clean_progress  incremental
    GRP1          idle_or_copied  100       50         100             on
    GRP1-REVERSE  copying         98        50         100             on
    GRP1          idle_or_copied  100       50         100             on
    GRP1-REVERSE  copying         99        50         100             on

    partner_FC_id  partner_FC_name  restoring  start_time    rc_controlled
    1              db2fcmap1        no         120614153011  no
    0              db2fcmap0        no         120614162110  no
    3              db2fcmap3        no         120614153011  no
    2              db2fcmap2        no         120614162110  no

    The reverse copy operation only copies back the changes, so progress depends on how much data has been modified since the last FlashCopy. In this example, little data had changed since the last FlashCopy, so the reverse FlashCopy completes quickly.
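    To wait for the reverse copy to complete, the mapping status can be polled from any host with SSH access to the V7000 CLI; in this sketch, the admin@v7000 login and the five-second interval are placeholders:

    #Poll until no mapping in the reverse consistency group is still copying
    while ssh admin@v7000 svcinfo lsfcmap | grep GRP1-REVERSE | grep -q copying
    do
        sleep 5
    done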

  6. A copy of the data is now available. Resume write I/O on the GPFS volumes:
    # /usr/lpp/mmfs/bin/mmfsctl db2datafs resume
    # /usr/lpp/mmfs/bin/mmfsctl db2logfs resume
  7. Resume write I/O on the target server database using the same connection from step 3.
    $ db2 set write resume for database
  8. From the primary server, start GPFS on all the hosts and mount both the file systems (db2datafs, db2logfs):
    # /usr/lpp/mmfs/bin/mmstartup -a
    # /usr/lpp/mmfs/bin/mmmount db2datafs -a
    # /usr/lpp/mmfs/bin/mmmount db2logfs -a
  9. Catalog the database as user db2inst1 so that the instance on the primary server can see it.
    $ db2 catalog database database_alias on /db2fs/db2datafs
  10. Start DB2 on the primary server with the db2inst1 user ID and initialize the copy of the suspended database through the db2inidb command.
    $ db2start
    $ db2inidb database_alias as MIRROR
  11. Perform a roll forward of the database to the end of logs or to a specific point in time. To take the database out of the roll-forward state, use the stop option of the rollforward command.
    $ db2 rollforward database database_alias to end of logs and stop

    The database at the primary server is now fully accessible, and all applications on the primary server can now start.

Scenario 4: Offload database scenario

A DB2 backup can be performed on the target server database, and that backup can be restored on the primary server or on another server, as sketched below. You can then roll forward the database to a particular point in time or until the end of the logs is reached.
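A sketch of the offloaded backup and a subsequent restore, assuming the backup image is written to a hypothetical /backup directory that is reachable from the restoring server:

#On the target server, as db2inst1: back up the initialized database
$ db2 backup database database_alias to /backup

#On the primary (or another) server: restore, then roll forward
$ db2 restore database database_alias from /backup
$ db2 rollforward database database_alias to end of logs and stop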

The steps for performing the backup are as follows:

  1. From the primary server host hosta, stop the primary database, stop the cluster manager on all the hosts, and unmount both the file systems (db2datafs, db2logfs).
    Listing 10. Syntax for stopping DB2 and GPFS
    $ db2stop
    # db2cluster -cm -stop -host hosta hostb hostc hostd -force
    # /usr/lpp/mmfs/bin/mmumount db2datafs -a
    # /usr/lpp/mmfs/bin/mmumount db2logfs -a
  2. From the target server host hostw, flush the file system buffers.
    $ db2 flush bufferpools all
  3. Quiesce the target server database write I/O from one of the members.
    $ db2 set write suspend for database
  4. Suspend write I/O on the GPFS volumes; read access is still allowed.
    # /usr/lpp/mmfs/bin/mmfsctl db2datafs suspend-write
    # /usr/lpp/mmfs/bin/mmfsctl db2logfs suspend-write
  5. Perform the reverse FlashCopy operation:
    Listing 11. Syntax for FlashCopy operation
    i. Prepare the consistency group for flashcopy
    V7000> svctask prestartfcconsistgrp GRP1-REVERSE

    ii. Confirm consistency group status has changed to 'prepared'
    V7000> svcinfo lsfcconsistgrp
    id  name          status
    1   GRP1          idle_or_copied
    2   GRP1-REVERSE  prepared

    iii. Start reverse Flashcopy from Target to Source
    V7000> svctask startfcconsistgrp GRP1-REVERSE

    iv. Query the status of the copy process
    V7000> svcinfo lsfcmap
    id  name       source_vdisk_id  source_vdisk_name  target_vdisk_id  target_vdisk_name  group_id
    0   db2fcmap0  0                primary-s1         4                primary-t1         1
    1   db2fcmap1  4                primary-t1         0                primary-s1         2
    2   db2fcmap2  5                primary-log        6                primary-log-t1     1
    3   db2fcmap3  6                primary-log-t1     5                primary-log        2

    group_name    status          progress  copy_rate  clean_progress  incremental
    GRP1          idle_or_copied  100       50         100             on
    GRP1-REVERSE  copying         99        50         100             on
    GRP1          idle_or_copied  100       50         100             on
    GRP1-REVERSE  copying         99        50         100             on

    partner_FC_id  partner_FC_name  restoring  start_time    rc_controlled
    1              db2fcmap1        no         120614153011  no
    0              db2fcmap0        no         120614164032  no
    3              db2fcmap3        no         120614153011  no
    2              db2fcmap2        no         120614164032  no

    The progress column indicates the percentage of the copy operation completed. From the above output, you can see that both the data and log copies are 99 percent complete.

    The reverse-copy operation only copies back the changes, so progress depends on how much data has been modified since the last FlashCopy. In this example, little data had been changed since the last FlashCopy, so reverse FlashCopy is quick.

  6. A copy of the data is now available. Resume write I/O on the GPFS volumes.
    # /usr/lpp/mmfs/bin/mmfsctl db2datafs resume
    # /usr/lpp/mmfs/bin/mmfsctl db2logfs resume
  7. Resume write I/O on the target server database using the same connection from step 3.
    $ db2 set write resume for database
  8. From the primary server, start GPFS on all the hosts and mount both the file systems (db2datafs, db2logfs).
    # /usr/lpp/mmfs/bin/mmstartup -a
    # /usr/lpp/mmfs/bin/mmmount db2datafs -a
    # /usr/lpp/mmfs/bin/mmmount db2logfs -a
  9. Catalog the database as user db2inst1 so that the instance on the primary server can see it.
    $ db2 catalog database database_alias on /db2fs/db2datafs
  10. Start DB2 on the primary server with the db2inst1 user ID and initialize the copy of the suspended database through the db2inidb command:
    $ db2start
    $ db2inidb database_alias as STANDBY
  11. Perform a roll forward of the database to the end of logs or to a specific point in time. To take the standby database out of the roll-forward state, use the stop option of the rollforward command.
    $ db2 rollforward database database_alias to end of logs and stop

    The database at the primary server is now fully accessible, and all applications on the primary server can now start.

Conclusion

The DB2 pureScale Feature for Enterprise Server Edition provides a database solution that meets the needs of the most demanding users. It is designed for high availability, allowing business continuity through planned and unplanned outages. You can deploy the DB2 suspend I/O feature along with the IBM Storwize V7000 FlashCopy solution so that the business can continue to run even in the unfortunate circumstance of a primary server failure.
