Migrating a Db2 database from an AIX Power system to a Linux x86 system

Because of the difference in endianness between Power processors (big-endian on AIX) and x86 processors (little-endian), migrating from an AIX Power system to a Linux x86 system cannot be done by using a Db2 backup and restore operation. Instead, you must unload your source database data, convert it to the target format, and then reload it into the target database. This process is most easily done by using the db2look and db2move utilities.

Before you begin

This procedure assumes that you have completed the following prerequisite tasks:
  • You have created a target instance of Db2®, either on an on-premises x86 server or on a cloud-hosted x86 VM.
  • You have configured access to your Db2 instance to match that of your source system. For example, if you want to maintain user authentication by using Kerberos and you are moving your database to a cloud instance, ensure that the target system supports Kerberos.
When you move a Db2 database to a cloud instance and you work with large object (LOB) data, consider reconfiguring any table spaces and buffer pools that you use:
  • Place LOB objects in a separate table space that is configured with file system caching enabled (see the sketch after this list). This configuration improves the performance of insert operations with LOB data on your cloud instance.
  • Select storage devices based on the required input/output operations per second (IOPS) rating.
  • Place indexes into a separate buffer pool to ensure the pages stay in memory longer.
  • Place highly active tables into separate table spaces, with separate buffer pools.
  • Place interdependent tables in the same table space so they can be recovered as a group. Db2 native backup and recovery granularity is at the table space level.
  • Eliminate the use of SMS and DMS containers and move to automatic storage table spaces.
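
The following commands are a minimal sketch of the LOB table space configuration that is described above; the table space name LOBSPACE1 and the sample table are hypothetical, so adapt them to your schema:
     db2 "CREATE TABLESPACE LOBSPACE1 MANAGED BY AUTOMATIC STORAGE FILE SYSTEM CACHING"
     db2 "CREATE TABLE MYSCHEMA.DOCS (ID INTEGER NOT NULL, DOC BLOB(10M)) IN USERSPACE1 LONG IN LOBSPACE1"
The LONG IN clause directs the LOB data to the cached table space, while the regular row data stays in its own table space.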

About this task

This task consists of the following high-level steps:
  • Extracting the database definitions (DDL) from your source AIX Power Db2 system with db2look.
  • Creating an empty database on the target Linux x86 system.
  • Creating table spaces and buffer pools on the target system.
  • Loading your extracted data onto your target system with db2move.

Procedure

  1. Run db2look to create DDL statements from your source Db2 database.
     db2look -d <DBname> -a -e -m -l -x -f -o <DBname>.sql
    where
    • -d <DBname> is the alias name of the database that is to be moved.
    • -a generates DDL statements for objects that were created by any user, including inoperative objects.
    • -e extracts DDL statements.
    • -m generates the UPDATE statements that are required to replicate the statistics on tables, statistical views, columns, and indexes.
    • -l generates DDL statements for user-defined table spaces, user-defined database partition groups, and user-defined buffer pools.
    • -x generates authorization DDL statements such as GRANT statements.
    • -f extracts the configuration parameters and registry variables that affect the query optimizer.
    • -o writes the output to a <DBname>.sql file.
    • -ct (optional; not shown in the command above) generates DDL statements in order of object creation time.
    The db2look command generates DDL statements by object type.
    Note: db2look ignores all objects under the SYSTOOLS schema except user-defined functions and stored procedures. The output file also includes statements for schemas, tables, foreign keys, and triggers.
  2. Go through the output file, <DBname>.sql, and separate out the foreign keys and the triggers. Save them to separate files, <DBname>_foreignkey.sql and <DBname>_trigger.sql respectively.
    Running the <DBname>.sql file without separating out the foreign keys can generate errors at import time that result from missing primary keys. Imported rows are then rejected because of dependencies between tables.
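    One rough way to automate the split is the following awk sketch. It assumes that each statement in the db2look output ends with a semicolon and that foreign keys and triggers are recognizable by the FOREIGN KEY and CREATE TRIGGER keywords; verify the resulting files by hand, because trigger bodies can contain embedded semicolons:
     awk 'BEGIN { RS=";" }
          /FOREIGN KEY/    { print $0 ";" > "<DBname>_foreignkey.sql"; next }
          /CREATE TRIGGER/ { print $0 ";" > "<DBname>_trigger.sql"; next }
                           { print $0 ";" > "<DBname>_main.sql" }' <DBname>.sql
    If you use this approach, run the file without foreign keys and triggers (<DBname>_main.sql here) in place of <DBname>.sql in step 7.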
  3. Run db2move to export your data.
    Export all data from the source server (AIX) to PC/IXF format. If you specify a filtering option, only the tables that meet the filtering criteria are exported.
    db2move <DBname> Export
    Note: If you do not specify a filtering option, all tables are exported.
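    For example, to limit the export to the tables in a single schema, use the -sn filtering option (the schema name MYSCHEMA is a placeholder):
    db2move <DBname> Export -sn MYSCHEMA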
  4. Copy the files that you extracted in step 1, the files that you created in step 2, and the data that you exported in step 3 to any directory on the RHEL destination system.
  5. Connect to the source database and capture the output of the following command so that you can grant a similar level of permissions to the connection ID on the target system in step 8.
     db2 "select char(grantee,8) as grantee, char(granteetype,1) as type, \
       char(dbadmauth,1) as dbadm, char(createtabauth,1) as createtab, \
       char(bindaddauth,1) as bindadd, char(connectauth,1) as connect, \
       char(nofenceauth,1) as nofence, char(implschemaauth,1) as implschema, \
       char(loadauth,1) as load, char(externalroutineauth,1) as extroutine, \
       char(quiesceconnectauth,1) as quiesceconn, char(libraryadmauth,1) as libadm, \
       char(securityadmauth,1) as securityadm from syscat.dbauth order by grantee"
  6. Create an empty database on the target Linux Db2 instance.
    Note: To improve performance, consider setting the automatic storage option across multiple file systems.
     db2 "CREATE DATABASE <DBname> AUTOMATIC STORAGE YES ON '/db2fs1','/db2fs2','/db2fs3' DBPATH ON '/db2fs4'"
     db2 "CONNECT TO <DBname>"
  7. Run the <DBname>.sql script to create the tables and indexes.
     Important: To avoid errors, do not run <DBname>_foreignkey.sql or <DBname>_trigger.sql before the import.
     db2 -tvf <DBname>.sql
  8. Grant permissions to the connection ID and instance that are similar to those that you captured in step 5.
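     For example, if the output that you captured in step 5 shows that a user holds DBADM authority, you can grant the equivalent authority on the target (the user ID appuser is a placeholder):
     db2 "GRANT DBADM ON DATABASE TO USER appuser"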
  9. Import all the tables that are listed in the db2move.lst internal staging file. Use the -io option for import-specific actions. The import operation loads the actual data into the empty tables that you created by running the db2 -tvf <DBname>.sql command.
    Note: Loading data into tables containing XML columns is only supported for the LOAD operation and not for the COPY action. Instead, issue the IMPORT or EXPORT commands, or use the db2move Export and db2move Import behaviors. If these tables also contain GENERATED ALWAYS identity columns, data cannot be imported into the tables.
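     Run the import from the directory that contains the exported files. A minimal form, assuming that the tables already exist from step 7, is:
     db2move <DBname> Import -io INSERT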
  10. Run the <DBname>_foreignkey.sql and <DBname>_trigger.sql scripts separately to reestablish the foreign key relationships and triggers.
     db2 -tvf <DBname>_foreignkey.sql
     db2 -tvf <DBname>_trigger.sql
  11. Run a db2rbind operation to re-create the packages and to commit any transactions that have not been committed. Note that auto-commit must be enabled.
     db2rbind <DBname> -l db2rbind.log all
  12. Run a reorgchk operation on all tables and indexes in your database.
    db2 connect to <DBname>
    db2 reorgchk update statistics on table all
    db2 terminate
  13. Issue a runstats operation on all tables to ensure that statistics are up to date.
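     RUNSTATS operates on one table at a time, so one common approach is to generate a script from the catalog. This sketch assumes that your user tables are not in SYS* schemas and writes the generated statements to a file named runstats.sql:
     db2 connect to <DBname>
     db2 -x "select 'RUNSTATS ON TABLE ' || trim(tabschema) || '.' || trim(tabname) || ' AND INDEXES ALL;' from syscat.tables where type = 'T' and tabschema not like 'SYS%'" > runstats.sql
     db2 -tvf runstats.sql
     db2 terminate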

What to do next

Before you enable connections from applications to the database, take a backup of the database.
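For example, an offline backup to a dedicated backup file system (the path /db2backup is a placeholder) looks like this:
     db2 "BACKUP DATABASE <DBname> TO /db2backup"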