Tackling technical risk in database migrations


Technical risk is one of the most common migration worries organizations face, and the potential technical risks vary depending on the type of migration.

So far, this series has covered custom code application migrations as well as migrating ISV applications (see part 1 and part 2). Now I want to discuss technical risk for database migrations, with input from my IBM Systems Lab Services Migration Factory colleagues Rick Murphy and Mark Short.

Migrating databases

To get started, there are two guiding principles that reduce risk:

  1. Never touch the source database: We never modify the source database during a migration. During the go-live weekend, we shut down the source applications and start the cut-over to the new target. If a problem arises, we simply stop the migration and restart the production database on the source system. This gives the client a solid feeling that a back-out plan will work if needed.
  2. Isolate the migration from the client’s day-to-day operations: We use copies of the production instances, called staging servers, for most of our test migrations. This isolates the migration from the client’s day-to-day operations—so no outages are required on the client’s production environment and there is no chance of the migration process bringing down the client’s business.

Most database migrations are relatively low-risk projects, but clients always worry about losing or corrupting valuable data during the migration process. We'll talk a bit later about how we guard against that, but first we want to make some general comments about database migrations.

We’ll be talking about cross-platform migrations in this blog post. That’s where the hardware vendor and operating system change. It is also possible to change the database management system during cross-platform migrations. For example:

| Source Database | Source Platform | Target Database | Target Platform | Comments |
| --- | --- | --- | --- | --- |
| Oracle | SPARC/Solaris | Oracle | Power/AIX | Source and target platforms changed; database remains the same |
| Oracle | x86/Linux | DB2 | Power/AIX | Source and target platforms changed, as does the database |


Database upgrades can also be part of a cross-platform migration. They're not the same as the in-place upgrade you'd do if the source and target platforms didn't change, even though some clients call that process a migration as well (that is, upgrading from Oracle 10.x to Oracle 12c).

We use automated scripts to assist with the collection of key database metrics, along with a variety of IBM and vendor-supplied tools to assist with the migration process. The XenoBridge database migration tool is an IBM asset and our preferred tool for most migrations, except those where the database is too large to be migrated within the allotted outage window. In those situations, we have several replication tools we can use instead.

XenoBridge has many built-in features to ensure data integrity during the migration and provides a validation report when the migration is complete, showing that all of the data from the source database has been successfully migrated to the new target database.
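XenoBridge's validation internals aren't public, but the core idea behind such a report, confirming that every table on the target holds the same rows as on the source, can be sketched in a few lines of Python. This is a simplified illustration only: the in-memory SQLite databases and the `orders` table are stand-ins, not XenoBridge's actual implementation, which performs far more thorough integrity checks.

```python
import sqlite3

def row_count_report(source_conn, target_conn):
    """Compare per-table row counts between source and target databases.

    Returns a dict mapping table name -> (source_rows, target_rows, match).
    A real migration validator would also compare checksums, schemas and
    constraints; row counts are just the simplest integrity signal.
    """
    report = {}
    tables = [r[0] for r in source_conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        src = source_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        tgt = target_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        report[table] = (src, tgt, src == tgt)
    return report

# Tiny demonstration with in-memory databases standing in for real servers.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for conn in (source, target):
    conn.execute("CREATE TABLE orders (id INTEGER)")
source.executemany("INSERT INTO orders VALUES (?)", [(1,), (2,), (3,)])
target.executemany("INSERT INTO orders VALUES (?)", [(1,), (2,), (3,)])
print(row_count_report(source, target))  # {'orders': (3, 3, True)}
```

A mismatch in any row of the report would be the signal to stop, investigate, and restart the migration, which, per the first guiding principle, is always safe because the source database is never modified.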

XenoBridge can also migrate from an older database version to a newer one as part of the migration process, for example migrating directly from Oracle 11 on the source to Oracle 12 on the target. This ability to migrate and upgrade in one step is one of XenoBridge's most attractive features. Upgrades can affect the application code, so some up-front work is required to see whether any code changes are needed.

Key considerations

There are a number of factors we weigh when planning a database migration; here are a few of the key ones:

  • Source database vendor and version
  • Target database vendor and version
  • Will upgrades be required?
  • Size of the production instances (MB, GB, TB)
  • Available downtime window
  • Location of the source and target systems and connection speed between them
  • Number of databases to be migrated
  • Availability of a test plan
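Several of these metrics interact: whether a straight copy fits in the downtime window depends on both the database size and the connection speed between the source and target sites. A back-of-the-envelope check might look like the sketch below. The function name, the 70% efficiency factor, and the sample numbers are illustrative assumptions; real planning must also account for export/import processing, validation time, and application cutover tasks.

```python
def fits_in_window(size_gb: float, link_gbps: float, window_hours: float,
                   efficiency: float = 0.7) -> bool:
    """Rough check: can size_gb of data move over a link_gbps connection
    within window_hours? `efficiency` discounts protocol and processing
    overhead (an assumed, tunable factor).
    """
    effective_gbps = link_gbps * efficiency          # usable throughput
    transfer_seconds = size_gb * 8 / effective_gbps  # gigabytes -> gigabits
    return transfer_seconds <= window_hours * 3600

# A 5 TB database over a 1 Gbps link at 70% efficiency needs ~15.9 hours,
# so it does not fit a 12-hour window but does fit a 24-hour one.
print(fits_in_window(5000, 1.0, 12))  # False
print(fits_in_window(5000, 1.0, 24))  # True
```

When a check like this says the copy won't fit, that's exactly the situation where the replication tools mentioned above come into play instead of a single bulk transfer.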

Testing with mock migrations is still key

Database migrations use the same “mock/test” migration process described in my previous blog post on ISV migrations. All databases go through multiple mock migrations, with ample test time in between to thoroughly test the database prior to going into production on the new target systems. After the mock migrations, it's unlikely that something will cause the production migration to have problems, but if we do suspect a problem, we simply stop the migration process without harming the original source database.

The key points related to database migrations

  • All database migrations, regardless of size, utilize the same basic process.
  • We use automated scripts and tools that reduce risk and cost.
  • Mock migrations will ensure a database has been thoroughly tested before cutover.
  • The migration can be stopped and started over if a problem is suspected without fear of corrupting the original source.

As always, working with consultants who have years of experience doing these migrations is the best way to reduce risk. Reach out to IBM Systems Lab Services Migration Factory if we can help with your next project.

What’s next?

In upcoming blog posts, I’ll talk about some of the other common worries we see—worries about how much a migration will cost and how long it will take, as well as how it will affect the company’s operations and employee skills. Please stay tuned!

Thanks to Rick Murphy and Mark Short for their contributions to this article.
