Database migration in DB2 is a simple task in a single-partition database environment. However, there are additional decisions you need to make in a large data warehouse environment that holds terabytes of data on the Database Partitioning Feature (DPF). This article looks at the data migration options available in DB2 and also takes a deep look into the procedure for the data module migration from BCU D5100 to ISAS 5600 R2.
Understanding the systems
The DBAs and the database architects need to understand the source and target server configurations before planning the migration activity. Figure 1 shows the DB2 elements of an Intel-based source D5100 v2 system:
Figure 1. D5100 Balanced Warehouse node configuration
Figure 2 shows the DB2 elements of an Intel-based target ISAS 5600 R2 system:
Figure 2. 5600 R2 IBM Smart Analytics System node configuration
From the database administration and migration perspective, the major difference between the two systems is the number of database partitions. Table 1 lists the database partition group distribution among the two systems and post-migration change requirements:
Table 1. Database partition group distribution
| Database partition group | BCU D5100 distribution | ISAS 5600 distribution | Post migration |
| --- | --- | --- | --- |
| IBMDEFAULTGROUP | 41 (NODE0000-NODE0040) | 65 (NODE0000-NODE0064) | Partition group expansion is required |
| IBMCATGROUP | 1 (NODE0000) | 1 (NODE0000) | No change to the system catalog partition group |
| IBMTEMPGROUP | 41 (NODE0000-NODE0040) | 65 (NODE0000-NODE0064) | Partition group expansion is not required; however, you need to add the container paths |
| DWECONTROLGROUP | 1 (NODE0000) | 1 (NODE0000) | There are no changes |
| SDPG | 1 (NODE0000) | 1 (NODE0000) | There are no changes |
| PDPG | 41 (NODE0000-NODE0040) | 65 (NODE0000-NODE0064) | Partition group expansion and container path creation are required |
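If you want to confirm this distribution on your own systems, the partition group definitions can be read from the catalog. The following query is a minimal sketch; run it against both the source and the target database:

db2 "SELECT DBPGNAME, DBPARTITIONNUM, IN_USE FROM SYSCAT.DBPARTITIONGROUPDEF ORDER BY DBPGNAME, DBPARTITIONNUM"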
Effective database migration strategy
Organizations consider a migration plan effective and robust when its tasks have minimal business impact, such as minimal planned outages, zero data integrity problems, and optimized resource cost. Figure 3 portrays the process and the associated broader steps for a successful database migration:
Figure 3. Effective database migration strategy
Analyze is the first stage in the database migration, and this phase broadly includes understanding the following components of the source and target systems:
- The database node differences and the migration options (shown in Table 2)
- The Tivoli System Automation for Multiplatforms (TSA) configuration
- The business requirements
- The staging area requirements
- The data integrity check requirement
- The database migration process
Table 2. Homogeneous database migration options
| # | Migration option | Advantage | Consideration |
| --- | --- | --- | --- |
| 1 | Database restore | A well-known migration strategy among homogeneous systems | The challenge is the difference in the number of logical partitions between the source and target systems: the source BCU system has 41 database partitions and the target ISAS system has 65, so it is not just a straight database restore. |
| 2 | IBM InfoSphere Optim High Performance Unload (HPU) | High-speed automatic repartitioning and load with minimal DBA intervention | HPU must be licensed on both the source and target systems. |
| 3 | In-house data migration scripts | An easy way of migrating the data from source to destination | The DBA needs a complete understanding of the system and the business needs. |
In the Prepare phase, you start creating the migration plan document and the migration scripts, and you prepare the system integration diagram for the post-migration tests of the application interface jobs.
In the Validate phase, you test the end-to-end process in a UAT environment and record the execution time for each step along with success/failure messages. In case of any failure, record the process to fix it. The key element in this phase is to cross-verify each step’s results against the acceptance criteria.
In the Implement phase, you freeze the source server application activities by bringing the interfaces and processing jobs down. Perform the migration; upon successful completion of the migration, check all the interfaces against the acceptance criteria.
As a market leader in the FMCG space, Reckitt Benckiser wanted to keep the system outage to a minimum: less than 4 hours for reporting capabilities and 20 hours for data loading capabilities. The company decided to use option 1, the database restore, to meet these business needs. The migration steps are discussed in greater detail in the next section.
A step-by-step migration procedure
Considering the steps from all the phases of the migration strategy, Table 3 drills down into the database migration option of a database restore from BCU to ISAS, successfully eliminating the node differences between the systems:
Table 3. Migration procedure
- Step 1 (BCU): Stop the ETL and reporting jobs. These are non-DB2 commands.
- Step 2 (BCU): Quiesce the DB2 instance.
- Step 3 (BCU): Deactivate the database.
- Step 4 (BCU): Create soft links from the admin node export directory to all the data nodes.
- Step 5 (BCU): Take an offline database backup to the local disk. Back up the catalog partition first, then the non-catalog partitions in parallel, as shown in the sketch after this list.
- Step 6 (BCU): Validate the database backup images. You can use a script to trigger the check backup utility on all the partitions in parallel; validate the checkout files and make sure all the backups are fine.
- Step 7 (BCU): Copy the backup images over to ISAS. Use the scp or sftp command within a script to copy the database backup images from the source to the target server.
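A minimal sketch of the commands behind steps 5 and 6 is shown below. The database name BCUDB is from this migration, but the /db2fs/backup path and the image name pattern are assumptions; adjust them to your layout. The db2_all prefixes run the command on the catalog partition only (<<+0<) or on all the other partitions in parallel (||<<-0<).

# Step 5: back up the catalog partition first, then the rest in parallel
db2_all '<<+0< db2 BACKUP DATABASE BCUDB TO /db2fs/backup'
db2_all '||<<-0< db2 BACKUP DATABASE BCUDB TO /db2fs/backup'
# Step 6: check each partition's image with db2ckbkp and keep the output
# as a checkout file (db2_all sets DB2NODE for each partition it runs on;
# adjust the image name pattern to your instance and timestamp)
db2_all '|| db2ckbkp /db2fs/backup/BCUDB.0.*.001 > /tmp/ckbkp.$DB2NODE.out 2>&1'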
- Step 9 (ISAS): Disable the high availability component (TSA).
- Step 10 (ISAS): Back up all the pre-change configurations from the target system.
- Step 11 (ISAS): Drop the existing database shell and stop the instance.
- Step 12 (ISAS): Reduce the number of database partitions to 41 and start the instance.
- Step 13 (ISAS): Apply the pre-restore configuration changes. Update the database manager configuration parameter DIAGLEVEL to 4 and set HEALTH_MON to OFF.
- Step 14 (ISAS): Restore database partition 0 first, then the remaining 40 database partitions (NODE0001 to NODE0040) in parallel, as shown in the sketch after this list.
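A minimal sketch for steps 13 and 14, under the same BCUDB and /db2fs/backup assumptions as the earlier backup sketch:

# Step 13: pre-restore database manager configuration changes
db2 "UPDATE DBM CFG USING DIAGLEVEL 4"
db2 "UPDATE DBM CFG USING HEALTH_MON OFF"
# Step 14: restore the catalog partition first, then the rest in parallel;
# add TAKEN AT <timestamp> if more than one image exists in the backup path
db2_all '<<+0< db2 RESTORE DATABASE BCUDB FROM /db2fs/backup REPLACE EXISTING'
db2_all '||<<-0< db2 RESTORE DATABASE BCUDB FROM /db2fs/backup REPLACE EXISTING'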
- Step 15 (ISAS): Activate the database and do the preliminary checks: check all the table space states and the table data accessibility.
- Step 16 (ISAS): Identify the materialized query tables (MQTs), extract their DDL, and drop them (see the sketch after this list).
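For step 16, the MQTs can be identified from the catalog (they are stored with TYPE = 'S' in SYSCAT.TABLES) and their DDL captured with db2look. The schema and table names below are placeholders:

# List the MQTs in the database
db2 "SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES WHERE TYPE = 'S'"
# Extract the DDL of an MQT before dropping it (placeholder names)
db2look -d BCUDB -z MYSCHEMA -e -t MY_MQT -o mqt_ddl.sql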
- Step 18 (ISAS): Prune the recovery history file and deactivate the database.
- Step 19 (ISAS): Add the new database partitions 41 to 64, as shown in the sketch and sample output below.
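A minimal sketch of the add-partition command for step 19. The host name and logical port are assumptions that follow the db2nodes.cfg layout shown in step 20, where the new servers isasdata006, isasdata007, and isasdata008 each host eight logical partitions (matching the note below about partitions 41, 49, and 57):

# Add database partition 41 on the first new data node; repeat with the
# matching partition number, host name, and logical port for 42 to 64
db2start DBPARTITIONNUM 41 ADD DBPARTITIONNUM HOSTNAME isasdata006 PORT 0 WITHOUT TABLESPACES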
08/06/2013 21:04:26 8 0 SQL1489I The add database partition server operation was successful. The new database partition server "41" is active.
Note: It is suggested that you do a full db2stop and db2start after adding the first partition on each new server; in this case, after adding partitions 41, 49, and 57.
- Step 20 (ISAS): Confirm that the new database partitions have been added to the instance by checking the db2nodes.cfg node configuration file. You should now see nodes 41 to 64; each line lists the partition number, the host name, the logical port, and the netname:
0 isasadm001 0 isasadm001
1 isasdata001 1 isasdata001
2 isasdata001 2 isasdata001
...
63 isasdata008 6 isasadata008
64 isasdata008 7 isasdata008
- Step 21 (ISAS): Add temporary table space containers on the newly added nodes (see the sketch after the warning message below).
SQL1759W Redistribute database partition group is required to change database partitioning for objects in database partition group "PDPG" to include some added database partitions or exclude some dropped database partitions. SQLSTATE=01618
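The SQL1759W message above is the expected warning when a database partition group is extended before its data is redistributed. Below is a minimal sketch of the partition group expansion and the container additions for steps 21 and 23; the table space names TEMPSPACE1 and TS_PD_DATA, the container paths, and the container size are placeholders for your layout:

# Extend the partition group onto the new partitions (returns SQL1759W)
db2 "ALTER DATABASE PARTITION GROUP PDPG ADD DBPARTITIONNUMS (41 TO 64) WITHOUT TABLESPACES"
# Step 21: add the system temporary table space containers on the new nodes
db2 "ALTER TABLESPACE TEMPSPACE1 ADD ('/db2fs/tempspace1/cont0') ON DBPARTITIONNUMS (41 TO 64)"
# Step 23: add containers for each table space in the extended partition group
db2 "ALTER TABLESPACE TS_PD_DATA ADD (FILE '/db2fs/ts_pd_data/cont0' 50 G) ON DBPARTITIONNUMS (41 TO 64)"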
- Step 23 (ISAS): Add container paths to the table spaces in the extended database partition groups.
- Step 24 (ISAS): Take an offline backup of database BCUDB to the local disk. This step saves all the work you have done until this step; these backup images can be used to restore the database in case of any problems during the data redistribution step, and they can save many hours.
- Step 25 (ISAS): Complete the data redistribution prerequisites.
- Step 26 (ISAS): Redistribute the data across the expanded database partitions for the partition groups (see the sketch after this list).
- Step 27 (ISAS): Take an offline backup of database BCUDB to the local disk. This backup is optional; however, it is recommended to keep a copy of the latest redistributed database.
- Step 28 (ISAS): Perform the post-redistribution tasks.
- Step 29 (ISAS): Take the first backup of the database to TSM. If you cannot facilitate as many TSM mount points as required, back up the database in chunks (see the sketch after this list).
- Step 30 (ISAS): Enable the high availability component (TSA) and connect the applications.
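Minimal sketches for steps 26 and 29 follow. The NOT ROLLFORWARD RECOVERABLE clause, the partition ranges, and the session count are assumptions; validate them against your recovery strategy and TSM capacity:

# Step 26: spread the PDPG data uniformly across all 65 partitions
db2 "REDISTRIBUTE DATABASE PARTITION GROUP PDPG NOT ROLLFORWARD RECOVERABLE UNIFORM"
# Step 29: take the first TSM backup in chunks of database partitions
# when TSM mount points are limited
db2 "BACKUP DATABASE BCUDB ON DBPARTITIONNUMS (0 TO 32) USE TSM OPEN 2 SESSIONS"
db2 "BACKUP DATABASE BCUDB ON DBPARTITIONNUMS (33 TO 64) USE TSM OPEN 2 SESSIONS"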
If you are migrating a database from a source system with a larger number of database partitions to a target system with a smaller number, add the additional partitions to the target system at step 12 and drop them after a successful database restore, using the following set of commands at step 20:
- Redistribute the data uniformly across the other partitions from partition (x), where x is the database partition you intend to delete:
REDISTRIBUTE DATABASE PARTITION GROUP PDPG UNIFORM DROP DBPARTITIONNUM (40);
- Force all the application connections and verify:
FORCE APPLICATION ALL;
- Perform the partition drop verify before dropping it:
TERMINATE; DROP DBPARTITIONNUM VERIFY;
SQL6034W Database partition "40" is not being used by any databases.
If the intended partition is still in use, you will get an SQL6035W warning message. In such cases, db2diag.log is your guide to finding out what is stopping the drop dbpartitionnum command from completing successfully.
- The actual database partition delete statement:
TERMINATE; STOP DATABASE MANAGER DROP DBPARTITIONNUM 40;
SQL6076W Warning! This command will remove all database files on the node for this instance. Before continuing, ensure that there is no user data on this node by running the DROP NODE VERIFY command.
Do you want to continue ? (y/n) y
You can also follow the process of decreasing the number of source database partitions to match the target server, then taking the backup and restoring it over the target system. Remember, however, that this leads to a much bigger outage than the process demonstrated earlier.
This article describes how to use DB2's database restore and DPF scale-out techniques to migrate a multi-terabyte database from BCU to ISAS while meeting the business requirements. You should now be able to adapt these steps to your own database migration requirements, irrespective of the number of nodes and the configuration.
Special thanks to some wonderful people for helping me complete the article:
- Colin A Chapman (DB2 Expert, IBM UK)
- Simon Woodcock (Data Warehouse Expert, IBM Hursley Lab UK)
- Nelson Coish (President, Coish Consulting Inc, Canada)
- John Rutter (Systems Analyst, Reckitt Benckiser UK)
- Karthikraj Kanagaraj (Sify Technologies, UK)
- Matt Gee (SBA, Reckitt Benckiser, UK)
- "Expanding an IBM Smart Analytics System database and redistributing data": DB2 for Linux, UNIX, and Windows Best Practices.