ADMIN_MOVE_TABLE procedure - Move tables online

The ADMIN_MOVE_TABLE stored procedure moves the data in an active table into a new table object with the same name, while the data remains online and available for access.

This stored procedure creates a protocol table composed of rows that contain status information and configuration options related to the table to be moved. The return set from this procedure is the set of rows from the protocol table related to the table to be moved.

This stored procedure uses the following terminology:
Source table
The original table name that is passed in as a parameter into the stored procedure. This is the table to be moved.
Target table
A table created by the stored procedure using the table definition passed in through the stored procedure. All of the data from the source table is copied into this table and then it is renamed to the same name as the source table.
Staging table
A table created by the stored procedure. The staging table stores any update, delete or insert changes that occur on the source table during the execution of the table move. This table is dropped when the move is complete.

Authorization

You must have SQLADM or DBADM authority to invoke the ADMIN_MOVE_TABLE stored procedure. You must also have the appropriate object creation authorities, including authorities to issue the SELECT statement on the source table, and to issue the INSERT statement on the target table.

Default PUBLIC privilege

In a non-restrictive database, EXECUTE privilege is granted to PUBLIC when the procedure is automatically created.

Syntax

There are two equally valid methods to start the ADMIN_MOVE_TABLE stored procedure:
  • Use the first method to modify only certain parts of the table definition for the target table. For instance, if you had a table definition that is quite large (several KB), and all you want to do is modify the table spaces for the table, you can do so without having to determine the entire CREATE TABLE statement needed to recreate the source table. All you need to do is to fill out the data_tbsp, index_tbsp, and lob_tbsp parameters, leaving the other optional parameters blank.
  • The second method provides you with more control and flexibility. You can create the target table beforehand, rather than having the stored procedure create the target table. By creating the target table beforehand, you can create a target table that would not be possible using the first method.

Method 1:

ADMIN_MOVE_TABLE(tabschema, tabname, data_tbsp, index_tbsp, lob_tbsp, organize_by_clause, partkey_cols, data_part, coldef, options, operation)
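
For example, a Method 1 call such as the following (the schema, table, and table space names here are placeholders) moves a table into new table spaces while leaving the rest of the definition unchanged; the empty strings leave the remaining optional parameters at their source-table defaults:

CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'T1',
    'NEW_DATA_TBSP', 'NEW_IDX_TBSP', 'NEW_LOB_TBSP',
    '', '', '', '', '', 'MOVE')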

Method 2:

ADMIN_MOVE_TABLE(tabschema, tabname, target_tabname, options, operation)

The schema for both methods is SYSPROC.

Procedure parameters

tabschema

This input parameter specifies the name of the schema that contains the table to be moved. This parameter is case sensitive and has a data type of VARCHAR(128). This parameter can be NULL or the empty string if the REPORT operation is used to monitor all initiated table moves. For the ESTIMATE operation, tabschema can be a LIKE pattern, in which case ESTIMATE is executed for all matching tables.

tabname

This input parameter specifies the name of the table to be moved. This parameter is case sensitive and has a data type of VARCHAR(128). This parameter can be NULL or the empty string if the REPORT operation is used to monitor all initiated table moves. For the ESTIMATE operation, tabname can be a LIKE pattern, in which case ESTIMATE is executed for all matching tables.

data_tbsp
This input parameter specifies the new data table space for the target table. If a value is provided, the index_tbsp and lob_tbsp parameters are required. If a value is not provided, the data table space of the source table is used. This parameter is case sensitive and has a data type of VARCHAR(128). This parameter can be NULL or the empty string.
index_tbsp
This input parameter specifies the new index table space for the target table. If a value is provided, the data_tbsp and lob_tbsp parameters are required. If a value is not provided, the index table space of the source table is used. This parameter is case sensitive and has a data type of VARCHAR(128). This parameter can be NULL or the empty string.
lob_tbsp
This input parameter specifies the new LOB table space for the target table. If a value is provided, the data_tbsp and index_tbsp parameters are required. If a value is not provided, the LOB table space of the source table is used. This parameter is case sensitive and has a data type of VARCHAR(128). This parameter can be NULL or the empty string.
organize_by_clause
This input parameter can be used to specify an ORGANIZE BY clause for the table. If the value provided does not begin with 'ORGANIZE BY' then it provides the multi-dimensional clustering (MDC) specification for the target table. The values are entered as a comma separated list of the columns used to cluster data in the target table along multiple dimensions. If a value of NULL or - is given, the ORGANIZE BY clause is not used. If an empty string or a single blank is given, the procedure checks whether there is an MDC or ITC specification on the source table, and uses that specification if located. If the argument begins with 'ORGANIZE BY' it can be used to specify any option related to the ORGANIZE BY clause of a CREATE TABLE statement. This parameter has a data type of VARCHAR(32672) and has the same format as the ORGANIZE BY DIMENSIONS clause of the CREATE TABLE statement. This parameter can be NULL, the empty string, or a single blank.

Example 1: 'C1, C4, (C3,C1), C2'

Example 2: ORGANIZE BY INSERT TIME

partkey_cols
This input parameter provides the partitioning key columns specification for the target table. The values are entered as a comma separated list of the key columns that specify how the data is distributed across multiple database partitions. If a value of NULL or minus sign (-) is given, the PARTITIONING KEY clause is not used. If an empty string or a single blank is given, the procedure checks the existing distribution of the source table, and uses that specification, if located. This approach can be used to make the target table a random distribution table if the source table was created as a random distribution table. This parameter has a data type of VARCHAR(32672) and has the same format as the DISTRIBUTE BY HASH clause of the CREATE TABLE statement.

Example: 'C1, C3'

data_part
This input parameter provides the data partitioning specification for the target table. This statement defines how to divide table data across multiple storage objects (called data partitions), according to the values in one or more of the table columns. If a value of NULL or minus sign (-) is given, the PARTITION BY RANGE clause is not used. If an empty string or a single blank is given, the procedure checks whether there is a data partition scheme on the source table, and uses that information (including partition name) if located. This parameter has a data type of VARCHAR(32672) and has the same format as the PARTITION BY RANGE clause of the CREATE TABLE statement.

Example: '(C1) (STARTING FROM (1) EXCLUSIVE ENDING AT (1000) EVERY (100))'

coldef
This input parameter specifies a new column definition for the target table, allowing you to change the column types as long as they are compatible; however, the column names must remain the same.

This also provides the ability to add new columns and drop existing columns. When adding a column, it must be defined as either nullable or have a default value set. Also, a column can only be dropped if there is a unique or primary index on the table and the column to be dropped is not a part of that unique or primary index. This parameter has a data type of VARCHAR(32672). This parameter can be NULL or the empty string.

Example: 'C1 INT, C2 INT DEFAULT 0'

target_tabname
This input parameter provides the name of an existing table to use as the target table during the move. The following changes can be made to the target table being passed in:
  • The data, index and LOB table spaces can be changed
  • The multi dimensional column (MDC) specification can be added or changed
  • The partitioning key columns specification can be added or changed
  • The data partitioning specification can be added or changed
  • Data compression can be added or removed
  • A new column definition can be specified; however the same restrictions as when specifying the coldef parameter apply here.
The following restrictions apply to the named table:
  • The table must exist in the same schema as the source table
  • The table must be empty
  • No typed tables, materialized query tables (MQT), staging tables, remote tables or clustered tables are permitted

If this parameter is set to NULL or the empty string, the stored procedure uses the same definition as the source table. This parameter is case sensitive and has a data type of VARCHAR(128).

This parameter can be used only when the procedure is executed on a single table. The target table is not dropped by the procedure, because it is created and provided by the user.
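
As an illustration of the second method, the following sketch (all object names are placeholders) creates an empty target table in the same schema with a compatible column definition and then passes it to the procedure:

CREATE TABLE MYSCHEMA.T1_TARGET (
    C1 INT NOT NULL,
    C2 VARCHAR(100)
) COMPRESS YES IN NEW_DATA_TBSP

CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'T1', 'T1_TARGET', '', 'MOVE')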

options
This input parameter is a string comprised of comma separated values, which define any options used by the stored procedure. This list of options is not case sensitive and has a data type of VARCHAR(32672). The list value can be NULL or the empty string. The following values are valid:
ALLOW_READ_ACCESS
This option allows you to move column-organized tables or to convert column-organized tables to row-organized tables. The source table can still be accessed by using SELECT statements, but any UPDATE, DELETE, or INSERT statement issued against the source table causes an SQL error.
KEEP
This option keeps a copy of the original source table under a different name. If the source table name is T1, then after the move that table will be automatically renamed to something such as T1AAAAVxo. You can retrieve the exact name of the source table in the returned protocol table, under the ORIGINAL key. You may set this option at any point up to and including the SWAP phase.

KEEP can be used only when the procedure is executed on a single table; otherwise it is ignored. Keeping the original table allows you to perform further analysis on the retained copy.
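
For example, a call such as the following (placeholder schema and table names) moves the table in one step and keeps a renamed copy of the original:

CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'T1', '', '', '', '', '', '', '', 'KEEP', 'MOVE')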

COPY_USE_LOAD <load options>

Syntax for COPY_USE_LOAD

COPY_USE_LOAD [MESSAGES ON SERVER]
    [NONRECOVERABLE |
     COPY YES { USE TSM [OPEN num-sess SESSIONS] |
                TO device/directory[, device/directory ...] |
                LOAD lib-name [OPEN num-sess SESSIONS] } ]

If you specify any load options for COPY_USE_LOAD, ADMIN_MOVE_TABLE uses an ADMIN_CMD load to copy the data from the source table to the target table. If you do not specify any options for COPY_USE_LOAD, the NONRECOVERABLE option of the db2Load API is used to copy the data from the source table to the target table.

MESSAGES ON SERVER
Specifies that the message file created on the server by the LOAD command is to be retained in case of load failures. The WARNINGS entry in the protocol table contains the message retrieval SQL statement that is required to retrieve all the warnings and error messages that occur during the load, and the message removal SQL statement that is required to clean up the messages. Note that, with or without this clause, the fenced user ID must have the authority to create files under the directory indicated by the DB2_UTIL_MSGPATH registry variable.
COPY YES
Specifies that a copy of the loaded data will be saved. This option is invalid if forward recovery is disabled.
USE TSM
Specifies that the copy will be stored using Tivoli® Storage Manager (TSM).
OPEN num-sess SESSIONS
The number of I/O sessions to be used with TSM or the vendor product. The default value is 1.
TO device or directory
Specifies the device or directory on which the copy image will be created.
LOAD lib-name
The name of the shared library (DLL on Windows operating systems) containing the vendor backup and restore I/O functions to be used. It can contain the full path. If the full path is not given, it will default to the path where the user exit programs reside.
NONRECOVERABLE
Specifies that the load transaction is to be marked as nonrecoverable and that it will not be possible to recover it by a subsequent roll forward action. If COPY YES is not used, NONRECOVERABLE is the default.
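
For example, a call such as the following sketch (placeholder names) uses COPY_USE_LOAD without further load options, so the nonrecoverable db2Load path is used; a backup of the target table spaces after the move is advisable in this case, as described in the usage notes:

CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'T1', '', '', '', '', '', '', '', 'COPY_USE_LOAD', 'MOVE')
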
COPY_USE_RID
The COPY_USE_RID option can be used to modify the chunk size or to disable the RID scalar function-based copy method. You can use the following options with COPY_USE_RID:
COPY_USE_RID=0
Disables the RID-based copy method. The previous copy method is used instead.
COPY_USE_RID=<row_amount>
Enables the COPY_USE_RID method with the given row count. ADMIN_MOVE_TABLE rounds the given value down to the nearest power of 2.

If both COPY_USE_LOAD and COPY_USE_RID are specified, COPY_USE_LOAD overrides COPY_USE_RID and LOAD is used to copy the table. Larger row count values can speed up processing, but they increase the transaction size, which increases the likelihood of running into a transaction log full condition. Reducing the value can cause a significant performance drop.

If COPY_USE_RID is specified for moving a row-organized table, the COPY_USE_RID option is ignored.
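
For example, a call such as the following (placeholder names) sets the chunk size to one million rows; ADMIN_MOVE_TABLE rounds this value down to a power of 2:

CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'T1', '', '', '', '', '', '', '', 'COPY_USE_RID=1000000', 'MOVE')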

COPY_WITH_INDEXES
This option creates the indexes before copying the source table; the default is to create the indexes after copying the source table. The advantage of this option is that it avoids the cost of index creation after the copy, which requires a whole table scan per index and which runs as a single transaction that requires active log space. If the LOGINDEXBUILD database configuration parameter is set to ON, significant log space is required to build the indexes in a short time frame. One disadvantage of this option is that copy performance is reduced, because the indexes must be maintained on the target table during the copy. Also, the resulting indexes might contain pseudo-deleted keys, and the indexes are not as well balanced as indexes created after the copy. You may set the COPY_WITH_INDEXES option at any point up to and including the COPY phase.
FORCE
If the force option is set, the SWAP phase does not check to see if the source table has changed its table definition.
FORCE_ALL
Heavy workload on the source table might prevent ADMIN_MOVE_TABLE from completing the SWAP phase. In other words, if the source table is heavily used by other applications during the SWAP phase, ADMIN_MOVE_TABLE might roll back due to a deadlock (SQL0911N reason code 2).

Use this option to force off all applications that have a workload on the source table. Forced applications receive SQL1224N on all subsequent database manager requests.

In most cases this option is not needed. Use FORCE_ALL with care. You must have SYSADM, SYSCTRL, or SYSMAINT authority to perform SWAP with the FORCE_ALL option.

NO_STATS
This option does not start RUNSTATS or any statistic copying on the target table. If you use the AUTO_RUNSTATS or AUTO_STMT_STATS database configuration parameters, the database system automatically creates new statistics afterward. For backwards compatibility, STATS_NO is also accepted. You can set the NO_STATS option at any point up to and including the SWAP phase.
COPY_STATS
This option copies the statistics from the source table to the target table before performing the swap. This may cause inaccurate physical statistics, especially if the page size is changed. However, setting this option saves computing time because RUNSTATS is not called to compute new statistics. Also, the optimizer may choose the same access plans, because the statistics are the same. For backwards compatibility, STATS_COPY is also accepted. You may set the COPY_STATS option at any point up to and including the SWAP phase.
NO_AUTO_REVAL
This option prevents automatic revalidation on the table, and instead, re-creates all triggers and views. The NO_AUTO_REVAL option can be set only in the INIT phase.
REORG
This option sets up an extra offline REORG on the target table before performing the swap. If you use this option to improve your compression dictionary, be advised that using the default sampling approach is a better method to create an optimal compression dictionary. However, if you require an optimal XML compression dictionary, then REORG is the only method. You may set the REORG option at any point up to and including the SWAP phase.
NO_TARGET_LOCKSIZE_TABLE
This option does not keep the LOCKSIZE TABLE setting on the target table during the COPY and SWAP phases. The default is to use LOCKSIZE TABLE on the target table to prevent locking overhead when no unique index is specified on the source table.
CLUSTER
This option reads the data from the source table with an ORDER BY clause when a copy index has been specified by using ADMIN_MOVE_TABLE_UTIL, a clustering index exists on the source table, or a unique index or primary key is defined on the source table.
Note: A copy index will override a clustering index; a clustering index will be used in preference to a primary key; a primary key will be used in preference to a unique index.
NON_CLUSTER
This option reads the data from the source table without an ORDER BY clause regardless of whether a copy index has been specified, a clustering index exists on the source table, or a unique index or primary key has been defined in the source table.
Note: When neither the CLUSTER nor the NON_CLUSTER option is specified, ADMIN_MOVE_TABLE reads the data from the source table with an ORDER BY clause only when a clustering index exists on the source table.
LOAD_MSGPATH <path>
This option can be used to define the load message file path.
LOAD_MSGPATH can be used together with COPY_USE_LOAD in the following situations:
  • COPY_USE_LOAD is specified without further options (in other words, the default COPY_USE_LOAD NONRECOVERABLE is assumed)
  • COPY_USE_LOAD NONRECOVERABLE is specified explicitly

LOAD_MSGPATH cannot be used together with COPY_USE_LOAD when optional keywords are also specified with COPY_USE_LOAD.

If the LOAD_MSGPATH option is not specified, then the default path is determined by the DB2_UTIL_MSGPATH registry variable.

LOAD_TEMPPATH
Use this option to define the load temporary file path.
ALLOW_RLBAC
Use this option to move a table that has row-level label-based access control (LBAC) security label specified. Before you use this option, ensure that you have adequate privilege; if you lack the privilege to access all the table row entries, data loss can occur.
NOT_ENFORCED
Specify this option for the conversion of tables with enforced check constraints or foreign key (referential integrity) constraints that are not supported on the target table; otherwise, an error is returned (SQL1667N).
WITHOUT_NONUNIQUE_INDEXES

Specify this option to control the creation of non-unique user-defined indexes on the moved table. The default behavior is to leave all indexes unchanged: all indexes on the source table are created on the target table, where possible. If this option is specified, creation of non-unique indexes is skipped on the target table.

This option is available starting in Db2® 11.1.4.4

ADD_MODSTATE_INDEX or REMOVE_MODSTATE_INDEX or SYNC_MODSTATE_INDEX
Use these options to control the creation or deletion of the modification state index on the moved table. If none of these options are specified, the ADMIN_MOVE_TABLE operation ignores the status of the modification state index of the source and target tables.

This option is available starting in Db2 11.1.4.4

ADD_MODSTATE_INDEX

Specify this option to allow explicit creation of the modification state index on the moved table.

REMOVE_MODSTATE_INDEX

Specify this option to allow explicit deletion of the modification state index on the moved table.

SYNC_MODSTATE_INDEX

Specify this option to automatically adopt the status of the modification state index from the source table.

operation
This input parameter specifies which operation the stored procedure is to execute. There are two ways of calling the stored procedure: use the MOVE operation to execute all of the operations at one time, or use the individual operations to execute the table move one step at a time. The main advantage of the second method is that you control when the SWAP phase actually occurs, and therefore when the table is briefly taken offline, so you can perform the swap during a period of low system activity. If you use the individual operations, they must be called in the following order: INIT, COPY, REPLAY, VERIFY (optional), SYNC (optional), and SWAP. A multi-step example follows this list.
  • MOVE: Performs the entire table move (INIT, COPY, REPLAY, and SWAP operations) in one step.
  • INIT: Verifies that a table move can take place, and initializes all of the data needed during the table move process (the target table, staging table, and the triggers on the source table).
  • COPY: Copies the content from the source table to the target table. Any updates, deletes, or inserts occurring on the source table during this time are captured and stored in the staging table. New indexes are created at the end of the COPY phase, unless the COPY_WITH_INDEXES option is selected. Also, if needed, secondary indexes are created on the source and target tables to improve performance during the REPLAY phase. COPY can be used only after the INIT phase has completed.
  • REDIRECT: Forwards changes directly to the target table instead of capturing the changes in the staging table.
    Note: For tables with XML columns, the REDIRECT command does not work on multi-partitioned systems on tables that do not have a unique index.
  • REVERT: Reverts to the original behavior wherein the staging table captures the changes.
  • REPLAY: Copies into the target table any rows that have changed in the source table since the COPY phase began. REPLAY can be used only after the COPY phase has completed.
  • VERIFY: Optionally checks if the table contents of the source and target tables are identical. This process involves obtaining a shared lock on the source and target tables, replaying any changes that have occurred on the source table, and then performing a comparison. If the table has a unique index, this command compares all values between columns that are in both tables. Otherwise, this command compares all values between columns that are in both tables (except for LONG, LOB or XML columns). This is an expensive operation and caution should be taken to decide if it is useful for your move. VERIFY can be used only after the COPY or REPLAY phases have completed.
  • SYNC: Executes the REPLAY phase until the number of changes applied during the last scan of the staging table is less than the REPLAY_THRESHOLD value stored in the protocol table. Synchronizes the indexes of source and target table, if required. Updates the statistics of the target table, if required. Use this operation after COPY or REPLAY in order to even further reduce the time needed for the SWAP operation.
  • SWAP: Executes the REPLAY phase until the number of changes applied during the last scan of the staging table is less than the REPLAY_THRESHOLD value stored in the protocol table. The source table is then taken offline briefly to finish the final REPLAY, and then this command swaps the source table with target table and brings the table back online. SWAP can be used after the COPY phase has completed, but ideally after the REPLAY phase has been called.
  • CLEANUP: Drops the staging table, any non-unique indexes or triggers created on the source table by the stored procedure, and the source table if the KEEP option has not been set. CLEANUP can be called if the command failed during the SWAP phase.
  • CANCEL: Cancels a multi-step table move while between phases, or cancels a failed table move operation. Executing this command requires that the operation status is not in COMPLETED or CLEANUP state. CANCEL clears up all intermediate data (the indexes, the staging table, the target table, and the triggers on the source table).
  • TERM: Terminates a running or killed table move. TERM will kill a running table move, roll back all open transactions and set the table move to a well defined operation status. From here, the table move can be canceled or continued. You must have SYSADM, SYSCTRL or SYSMAINT authority to perform TERM.
  • REPORT: Calculates a set of values to monitor the progress of a single table move or of multiple table moves. The focus is on the COPY and REPLAY phases of a running table move. To get values for all table moves, tabschema and tabname must be NULL or the empty string.
  • ESTIMATE: This operation calculates an estimate of the table size and PCTPAGESSAVED value that the target table would have if ADMIN_MOVE_TABLE were performed with the same specified parameters. A sample size of 1 million rows (or the total number of rows in the source table, if it is less than 1 million) is used. A temporary staging table is created and loaded in order to calculate the estimate, and then dropped. See the example that follows the ESTIMATE result set description below.

    This option is available starting in Db2 11.1.4.4

  • TRACE: Attempts to identify issues that occur during an operation.

    You enable the TRACE function by adding the TRACE keyword to the operation procedure arguments:

    $ db2 "CALL SYSPROC.ADMIN_MOVE_TABLE('DARIBAZS', 't1', 'MY_TBS2', 'MY_TBS2', 'MY_TBS2', '', '', '', '', '', 'MOVE,TRACE')"
    The results of running a TRACE are found in the diagnostic data directory (DIAGPATH).
    You can locate the results by searching for the name of the associated operation (for example, 'MOVE'):
    $ ls *MOVE* 
    11141248.1544.AMT_DARIBAZS_t1_MOVE_trc.txt 
    0146932.1029.AMT_DARIBAZS_t1_MOVE_trc.txt
This parameter is not case sensitive and has a data type of VARCHAR(128).
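
The following sketch of a multi-step move (placeholder names; the empty strings keep the source table definition) defers the brief offline period to a chosen maintenance window:

CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'T1', '', '', '', '', '', '', '', '', 'INIT')
CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'T1', '', '', '', '', '', '', '', '', 'COPY')
CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'T1', '', '', '', '', '', '', '', '', 'REPLAY')
-- optionally VERIFY and SYNC here; then, during a period of low activity:
CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'T1', '', '', '', '', '', '', '', '', 'SWAP')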

Information returned

Table 1. Information returned by the ADMIN_MOVE_TABLE stored procedure
Column name Data type Description
TABSCHEMA VARCHAR(128) table_schema - Table schema name monitor element
TABNAME VARCHAR(128) table_name - Table name monitor element
KEY VARCHAR(32) Name of the attribute.
VALUE CLOB(10M) Value of the attribute.

The key and value pairs that are returned in the result set can be found in Table 2. To modify the user configurable keys in the result set, use the ADMIN_MOVE_TABLE_UTIL stored procedure.

Table 2. Key and value pairs returned by the ADMIN_MOVE_TABLE stored procedure
Key Return Value User Configurable
AGENT_ID Displays the application_handle monitor element during the runtime of ADMIN_MOVE_TABLE. No
APPLICATION_ID Displays the appl_id monitor element during the runtime of ADMIN_MOVE_TABLE. No
AUTHID Displays the authorization ID of the user who called the stored procedure. No
CANCEL_START Displays the CANCEL phase start time. No
CLEANUP_END Displays the CLEANUP phase end time. No
CLEANUP_START Displays the CLEANUP phase start time. No
COMMIT_AFTER_N_ROWS During the COPY phase, a commit is executed after this many rows are copied. 0 means no commits during COPY. Default value is 10000. Yes
COPY_ARRAY_SIZE Specifies the ARRAY size for COPY_ARRAY_INSERT. A value less than or equal to 0 means do not use COPY_ARRAY_INSERT. Default value is 100. Yes
COPY_END Displays the COPY phase end time. No
COPY_INDEXNAME The name of the index used to cluster the data on the target table during the COPY phase. This value must be set before the COPY phase. The default name is the name of a cluster index on the source table, if it exists; otherwise the name of the unique or primary index on the source table. Yes
COPY_INDEXSCHEMA The schema of the index used to cluster the data on the target table during the COPY phase. This value must be set before the COPY phase. The default schema is the schema name of a cluster index on the source table, if it exists; otherwise the schema name of the unique or primary index on the source table. Yes
COPY_OPTS The copy options used during the COPY phase. No
COPY_START Displays the COPY phase start time. No
COPY_TOTAL_ROWS Displays the total number of rows copied during the COPY phase. No
DEEPCOMPRESSION_SAMPLE If the source table has compression enabled, this field specifies how much data (in KB) is sampled when creating a dictionary for compression. 0 means no sampling is done. Default value is 20MB (20480 KB). Yes
INDEX_CREATION_TOTAL_TIME Displays the total time required for creating secondary indexes. No
INDEXNAME Displays the name of the index or the empty string if the table does not have an index. No
INDEXSCHEMA Displays the schema of the index or the empty string if the table does not have an index. No
INIT_END Displays the INIT phase end time. No
INIT_START Displays the INIT phase start time. No
LOCK Displays the LOCK start time if another online table move stored procedure call is active, otherwise it is empty. No
ORIGINAL Displays the name of original table after the swap. No
REORG_USE_TEMPSPACE If you call the REORG option, you can also specify a temporary table space for the USE clause of the REORG command. If a value is not specified here, the REORG command uses the same table space as the table being reorganized. Yes
REPLAY_END Displays the REPLAY phase end time. No
REPLAY_MAX_ERR_RETRIES Specifies the maximum retry count for errors (lock timeouts or deadlocks) that may occur during the REPLAY phase. Default value is 100. Yes
REPLAY_START Displays the REPLAY phase start time. No
REPLAY_THRESHOLD For a single iteration of the REPLAY phase, if the number of rows applied to the staging table is less than this value, then REPLAY stops, even if new entries are made in the meantime. Default value is 100. Yes
REPLAY_TOTAL_ROWS Displays the accumulated number of replayed rows. No
REPLAY_TOTAL_TIME Displays the accumulated time in seconds used for replaying rows. No
STAGING Displays the name of the staging table. No
STATUS Displays the current status of the online table move:
  • INIT: INIT is in progress
  • COPY: COPY is in progress or is possible
  • REPLAY: REPLAY is in progress or REPLAY and SWAP are possible
  • CLEANUP: MOVE is complete, but cleanup has not finished or CLEANUP is possible
  • CANCEL: CANCEL is in progress, but cancel has not finished or CANCEL is possible
  • COMPLETE: MOVE and CLEANUP are complete
  • COMPLETE_WITH_WARNINGS: MOVE and CLEANUP are complete, however there are warnings (listed under the WARNINGS key).
No
SWAP_END Displays the SWAP phase end time. No
SWAP_MAX_RETRIES Specifies the maximum number of retries allowed during the SWAP phase (if lock timeouts or deadlocks occur). Default value is 10. Yes
SWAP_RETRIES Displays the number of retries performed during SWAP phase. No
SWAP_START Displays the SWAP phase start time. No
TARGET Displays the name of the target table. No
TERM_END Displays the TERM phase end time. No
TERM_START Displays the TERM phase start time. No
UTILITY_INVOCATION_ID Displays the unique identifier for the table move operation. No
VERIFY_END Displays the verification end time. No
VERIFY_START Displays the verification start time. No
VERSION Displays the version of the stored procedure. No
WARNINGS Displays warnings to pass on to the user. These warnings include:
  • Revalidation of all failed objects
  • Control could not be granted to a user, group, or role
  • An index was not created because a column it references no longer exists
No

If the REPORT operation is used, monitoring values are inserted into a Declared Global Temporary Table (DGTT) named SESSION.ADMIN_MOVE_TABLE.

Table 3. SESSION.ADMIN_MOVE_TABLE returned by the operation REPORT
Column name Data type Description
TABSCHEMA VARCHAR(128) Table schema of the table move to be reported. The value is retrieved from the protocol table.
TABNAME VARCHAR(128) Table name of the table move to be reported. The value is retrieved from the protocol table.
STATUS VARCHAR(22) Status of the table move. The value is retrieved from the protocol table.
AGENT_ID BIGINT Agent ID of a running table move. The value is retrieved from the protocol table.
INIT_START TIMESTAMP Start time of the INIT operation of the table move. The value is retrieved from the protocol table.
COPY_THROUGHPUT BIGINT COPY performance.
  • Formula: COPY_TOTAL_ROWS/(MIN(CURRENT_TIMESTAMP, COPY_END)-COPY_START)
  • Unit: rows/second
  • NULL, if calculation is not possible
COPY_ECT TIMESTAMP Estimated COPY completion time. The value is calculated using COPY_THROUGHPUT, COPY_TOTAL_ROWS and CARD.
  • NULL, if calculation is not possible or the COPY phase has ended.
ROWS_STAGING BIGINT Number of rows in the staging table. Calculated using MON_GET_TABLE values rows_inserted - rows_deleted.
  • Unit: rows
  • NULL, if calculation is not possible
REPLAY_THROUGHPUT BIGINT REPLAY performance.
  • Formula: REPLAYED_TOTAL_ROWS/REPLAY_TOTAL_TIME
  • Unit: rows/second
  • NULL, if calculation is not possible
INFLOW_STAGING BIGINT Number of rows per second that are currently being inserted into the staging table.
  • Unit: rows/second
  • NULL, if calculation is not possible
OUTFLOW_STAGING BIGINT Number of rows per second that are currently being removed from the staging table.
  • Unit: rows/second
  • NULL, if calculation is not possible
GROWTH_STAGING BIGINT Growth of the staging table.
  • Formula: INFLOW_STAGING-OUTFLOW_STAGING
  • Unit: rows/second
  • NULL, if calculation is not possible
REPLAY_ECT TIMESTAMP Estimated REPLAY completion time. Special value 9999-12-31-23:59:59 indicates that under current progress and workload conditions, the move will not end. NULL, if calculation is not possible.
AVG_ROWS_PER_KEY DOUBLE The average number of rows that the replay key in the staging table touches.
  • Formula: CARD/FULLKEYCARD
  • The input values for the formula are retrieved from SYSCAT.INDEXES and SYSCAT.TABLES
  • FULLKEYCARD is retrieved for the replay index.
  • 1, if no statistics are available.
CARD BIGINT The number of rows in the source table. The value is retrieved from SYSCAT.TABLES.

If the REPORT operation is used, the result set is a subset of the DGTT columns and depends on whether a single table move or all initiated table moves are reported (see the example calls that follow the lists).

Result set to monitor a single table move:
  • STATUS
  • AGENT_ID
  • INIT_START
  • COPY_THROUGHPUT
  • COPY_ECT
  • ROWS_STAGING
  • REPLAY_THROUGHPUT
  • INFLOW_STAGING
  • OUTFLOW_STAGING
  • GROWTH_STAGING
  • REPLAY_ECT
Result set to monitor all table moves initiated:
  • TABSCHEMA
  • TABNAME
  • STATUS
  • AGENT_ID
  • COPY_ECT
  • GROWTH_STAGING
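
For example, a single table move or all initiated table moves could be monitored with calls such as the following (placeholder names):

CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'T1', '', '', '', '', '', '', '', '', 'REPORT')
CALL SYSPROC.ADMIN_MOVE_TABLE('', '', '', '', '', '', '', '', '', '', 'REPORT')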

If the ESTIMATE operation is run, values are inserted into a Declared Global Temporary Table (DGTT) named SESSION.ADMIN_MOVE_TABLE. This table contains an estimate of the size that the table would be after a move.

Table 4. SESSION.ADMIN_MOVE_TABLE returned by the operation ESTIMATE
Column Name Data Type Description
TABSCHEMA VARCHAR(128) table schema
TABNAME VARCHAR(128) table name
ROWS_SAMPLED BIGINT #rows used to create the new dictionary
SOURCE_SIZE BIGINT size of source table in KB (SOURCE_PAGESIZE * SOURCE_NPAGES)/1024
TARGET_SIZE BIGINT estimated size of target table in KB. The estimated size is calculated with the following formula: (TARGET_NPAGES * TARGET_PAGESIZE * SOURCE_CARD / ROWS_SAMPLED)/1024
SOURCE_PCTPAGESSAVED SMALLINT PCTPAGESSAVED of source table (value retrieved from SYSCAT.TABLES)
TARGET_PCTPAGESSAVED SMALLINT estimated PCTPAGESSAVED (result of RUNSTATS on sample)
SOURCE_PAGESIZE INTEGER PAGESIZE in bytes retrieved from SYSCAT.TABLESPACES
TARGET_PAGESIZE INTEGER PAGESIZE of target table in bytes retrieved from SYSCAT.TABLESPACES
SOURCE_NPAGES BIGINT NPAGES of source table retrieved from SYSCAT.TABLES
The result set returned by the ESTIMATE operation is a subset of the DGTT columns (an example call follows the list):
  • TABSCHEMA
  • TABNAME
  • ROWS_SAMPLED
  • SOURCE_SIZE
  • TARGET_SIZE
  • SOURCE_PCTPAGESSAVED
  • TARGET_PCTPAGESSAVED
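
For example, an estimate for all tables that match a LIKE pattern could be requested with a call such as the following (schema name and pattern are placeholders):

CALL SYSPROC.ADMIN_MOVE_TABLE('MYSCHEMA', 'SALES%', '', '', '', '', '', '', '', '', 'ESTIMATE')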

Usage notes

Suggestions for best results when using this procedure
  • Avoid making multiple moves into the same table space at the same time. This helps prevent fragmentation in the target table space.
  • Run this procedure when activity on the table is low. Avoid mass data loads or deletes so that parallel read access is not a problem.
  • Use a multi-step move operation. The INIT and COPY phases can be called at any time. Execute the REPLAY phase multiple times in order to keep the staging table size small, and then issue the SWAP during a time of low activity on the table.
  • Check whether offline methods are a better choice for your table move, especially for tables without a unique index and for tables with no index at all.
Operations that are restricted on the source table
The stored procedure relies on triggers to capture any changes made to the source table. Some operations could affect the source table without firing triggers, which could result in inconsistencies between the source and target tables that cannot easily be detected by the stored procedure. These operations include:
  • TRUNCATE TABLE (without the RESTRICT WHEN DELETE TRIGGERS clause)
  • IMPORT ... REPLACE INTO ...
  • LOAD TABLE
  • ALTER TABLE
  • REORG (both online and offline)
These operations will be restricted on the source table using a new table-level state flag. The flag is set during the INIT phase and cleared during the CLEANUP or CANCEL phase. Restricted operations will fail with SQL0668N reason code 10 (SQLSTATE 57007).
Operations that will affect the table move operation
There are operations that can cause the stored procedure to fail while a move is in progress. These operations include:
  • Dropping the SYSTOOLSPACE table space
  • Dropping/Renaming the source table
  • Dropping/Renaming any of the temporary objects created by the online table move (OTM) in the INIT phase (target table, staging table, triggers on the source table, protocol table)
  • Altering values in the protocol table that are not listed as user configurable
Naming convention for temporary objects
To avoid naming conflicts when creating temporary objects, the following naming convention is used:
  • Postfix
    • t for target
    • s for staging
    • o for original
    • g for generated
    • i for insert trigger
    • d for delete trigger
    • u for before update trigger
    • v for after update trigger
  • Names are built as <characters from the name of the object><base64-encoded hash key over the name of the object><postfix>.
  • If the resulting name would exceed the maximum object name length (128 bytes), the <characters from the name of the object> portion is shortened.
  • The hash value is calculated from the object name and is encoded with a base64-like encoding.

Sample:

Name of object: T1 
Staging object: T1AAAAVxs
Target object: T1AAAAVxt 
Original object: T1AAAAVxo 
Generated index: T1AAAAVxg (if table has no index) 
Insert trigger: T1AAAAVxi 
Delete trigger: T1AAAAVxd 
Before update trigger: T1AAAAVxu 
After update trigger: T1AAAAVxv
Online table move with compression and dictionary creation

There are several methods to create a data compression dictionary using Online Table Move. Compression must either be enabled on the source table or specified to be active in the new table definition if provided.

Create dictionary with sampling is the default method of dictionary creation through online table move. If compression is turned on for the table, then before the COPY operation is performed, a system sampling of the data from the source table is inserted into the target table, where the amount of data sampled is specified by the DEEPCOMPRESSION_SAMPLE field in the protocol table. The compression dictionary is then created based on this random sample, which results in an optimal compression dictionary.

Note that an XML compression dictionary is not created through the sampling method, because db2Inspect is used to create the compression dictionary and db2Inspect currently cannot create an XML compression dictionary. The XML compression dictionary is instead created through automatic dictionary creation (ADC).

Create dictionary with automatic dictionary creation (ADC) is the standard method of dictionary creation with tables. When you turn on compression for the table, the database system automatically creates the dictionary as data is inserted into the table. This results in a non-optimal compression dictionary. Note that the DEEPCOMPRESSION_SAMPLE field in the protocol table must be set to 0 to avoid having the stored procedure attempt to create a better compression dictionary.
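
As a sketch, assuming the ADMIN_MOVE_TABLE_UTIL procedure with its (tabschema, tabname, action, key, value) parameters, the DEEPCOMPRESSION_SAMPLE value could be set to 0 during a multi-step move, after INIT and before COPY (placeholder names):

CALL SYSPROC.ADMIN_MOVE_TABLE_UTIL('MYSCHEMA', 'T1', 'UPSERT', 'DEEPCOMPRESSION_SAMPLE', '0')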

The create dictionary with REORG method results in a dictionary that reflects any activity on the source table that occurred while the COPY phase was in progress. This is done by performing a REORG with the RESETDICTIONARY option before the SWAP phase. An optimal dictionary is created; however, depending on the size of the table, the REORG can take a long time. Also, if an optimal XML dictionary is required, REORG is the only method that produces one. In general, the sampling method of dictionary creation is recommended.

Online table move and statistics on the table

If you use the COPY_USE_LOAD option and the source table has a statistics profile, the load operation collects statistics on the target table.

By default, when you are moving a table where statistics are collected, the RUNSTATS command is issued on the table during the SWAP phase. The ADMIN_MOVE_TABLE procedure attempts to create statistics only if the target table does not yet have any statistics. If the target table does not yet have statistics and you want to minimize the duration of the SWAP phase, issue the RUNSTATS command before the SWAP phase. If a statistics profile is found, the RUNSTATS command is called by using the statistics profile. Otherwise, the RUNSTATS command is called with the WITH DISTRIBUTION ON COLUMNS (...) and AND SAMPLED DETAILED INDEXES ALL clauses.
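
For example, if no statistics profile exists, RUNSTATS could be issued manually on the target table before the SWAP phase; the target table name below is a placeholder, and the actual name can be retrieved from the TARGET key of the protocol table:

RUNSTATS ON TABLE MYSCHEMA.T1AAAAVxt WITH DISTRIBUTION AND SAMPLED DETAILED INDEXES ALL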

If the COPY_STATS option has been set, the statistics from the source table are copied to the target table before the swap is performed. Copying statistics may cause inaccurate physical statistics, especially if the page size is changed. However, it saves computing time because RUNSTATS does not have to be called to compute new statistics. Also, the optimizer may choose the same access plans, because the statistics are the same (plan stability). The statistics that are copied are in the SYSSTAT.TABLES, SYSSTAT.COLUMNS, SYSSTAT.COLDIST, SYSSTAT.INDEXES, and SYSSTAT.COLGROUPS catalog views.

If the NO_STATS option has been set, the RUNSTATS command is not issued and statistics are not copied to the target table. If you use AUTO_RUNSTATS or AUTO_STMT_STATS, the database system automatically generates new statistics.

Online table move with LOAD used for COPY

If you are using the COPY_USE_LOAD option, and if you do not specify a sub-option or you choose NONRECOVERABLE, then it is necessary to perform a backup of the target table space or table spaces before the SWAP phase in order to ensure recoverability. A backup can be created by issuing a statement such as the following:

BACKUP DB dbname TABLESPACE targetDataTablespace, 
    targetIndexTablespace, targetLongTablespace 
    ONLINE TO <destination>
Online table move with recoverable LOAD in HADR environment

If the destination for a recoverable LOAD in an HADR environment cannot be found from the standby, the table space becomes inaccessible on the standby. The primary is not informed about this situation, so you might assume that the standby is up to date; however, if there is a failover to the standby, the table will not be accessible.

Online table move with generated columns

The Table Move stored procedure treats any generated columns in the source table specially. The following paragraphs describe how the different types of generated columns are handled.

A row change timestamp column is a column that holds a timestamp representing the time when a row was last changed.

If a row change timestamp column is found in the source table, the values of this column after the table move operation is complete will not be the same as they were before the table move operation. The values of the column after the table move represent the time at which the rows were inserted or updated in the new table object. This is because the actual rows are changed, and the row change timestamp column values should therefore reflect these changes. If you use the ALLOW_READ_ACCESS and COPY_USE_LOAD options of ADMIN_MOVE_TABLE, the row change timestamp column content of the source table is preserved during the move.

If a new table definition is supplied, and a column is defined as a row change timestamp column in the source table but not in the new table definition, then the column will not be a row change timestamp column.

An identity column is a column that automatically generates a value for the column when a row is inserted into the table.

If an identity column is found in the source table, the values of this column after the table move operation is complete will be identical to the values that were present before the table move operation. However, there is no way to determine the last or next value for the identity column in the source table. Therefore, when the identity column is created on the target table, value generation is set to begin from the next uncached value. This is the same behavior that occurs when the database is restarted (stop/start). This behavior is documented in the ALTER TABLE statement, under the SET NO CACHE or CACHE integer-constant heading of the identity-alteration section.

The column will initially be created as a regular column in the target table, and then be altered to be an identity column during the brief offline period of the SWAP phase. This is done because the column may have been created as GENERATED ALWAYS, and that would block the stored procedure from being able to insert the exact values from the source table into the column in the target table.

If a new table definition is specified, and a column is specified to be an identity column in the new table definition, then the stored procedure checks whether the definition of the identity column matches the definition of the column in the source table. If they are a match, the stored procedure continues as previously described. If they are not a match, the stored procedure uses the new identity column definition. Note that this restarts the identity column counter at the specified start value; however, the current values of the rows in the column remain the same.

If a new table definition is specified, and a column that is specified as an identity column in the source table is not specified as an identity column in the new table definition, then the stored procedure will still create the column as an identity column in the target table using the same specification found in the source table. This is done so that users do not need to look up the definition of the existing identity column and re-enter it into the new table definition. If the user does not want to keep the column as an Identity column, then they can alter the target table after the call to the stored procedure to remove the identity specification from the column.

An expression column is a column that automatically generates a value for the column based on an expression when a row is inserted into the table.

If an expression column is found in the source table, the values of this column after the table move operation is complete will be identical to the values that were present before the table move operation.

The column is initially created as a regular column in the target table, and then altered to be an expression column during the brief offline period of the SWAP phase. This is done because expression columns are created as GENERATED ALWAYS and do not allow inserts into the column. In order to alter the column in the target table to be an expression column, set integrity is briefly turned off on the target table. The ALTER statement is performed, and then integrity is set back on with the GENERATED COLUMN IMMEDIATE UNCHECKED option.

If an expression column on the target table is defined as GENERATED ALWAYS and it is used as a range partitioning key, the values are generated as defined by the expression and are written directly to the column instead of being transferred from the source to the target table.

The stored procedure does not support column expressions that include the table name (such as table T1 with expression (T1.C * 5)) in either the source table or the target table. To remedy this, alter the column to change the expression so that it does not include the table name.

If a new table definition is specified, and a column is specified to be an expression column in the new table definition, then the stored procedure will check to see if the definition of the expression column matches the definition of the column in the source table by performing a basic string to string comparison. If they are a match, the stored procedure will continue as previously described. If they are not a match, the stored procedure will use the new expression column definition. Please note that the current values of the rows in the column will remain the same.

If a new table definition is specified, and a column that is specified as an expression column in the source table is not specified as an expression column in the new table definition, then the stored procedure will still create the column as an expression column in the target table using the same specification found in the source table. This is done so that users do not need to look up the definition of the existing expression column and re-enter it into the new table definition. If the user does not want to keep the column as an Expression column, then they can alter the target table after the call to the stored procedure to remove the Expression Specification from the column.

Online table move and objects and privileges that are preserved

The stored procedure will preserve the following objects when a Table Move is performed:

Views
During the brief offline period during the SWAP phase, the views are dropped from the source table and are re-created on the target table.
Transfer of ownership is also performed to change the ownership of the view back to the original owner.
The granted privileges for the views are preserved only if the auto_reval configuration parameter is set to DEFERRED or IMMEDIATE.
Triggers
During the brief offline period during the SWAP phase, the triggers are dropped from the source table and are re-created on the target table.
Transfer of ownership is also performed to change the ownership of the trigger back to the original owner.
Indexes
Indexes are created on the target table at several points during the table move procedure. Indexes are first created at the end of the COPY phase, unless the COPY_WITH_INDEXES option is set, in which case the indexes are created at the beginning of the COPY phase. The stored procedure then looks for any newly created indexes, judging by index name alone, at the beginning of the REPLAY and SWAP phases. If new indexes are found, they are created on the target table. However, the stored procedure does not check whether any indexes have been deleted on the source table.
For user-created indexes, the index names are the same as they were on the source table. However, system-created indexes cannot be guaranteed to have the same name.
The indexes that are preserved are of the following type: 'REG','CLUST', and 'XVIL'.
Any user created indexes that reference a column that is being dropped in the target table are not preserved.
When moving from a partitioned source table to a partitioned target table, the partitioned attribute of the indexes is preserved. When moving from a partitioned source table to a non-partitioned target table, or vice versa, the partitioned attribute is decided by the default behavior of the database.
Constraints
Constraints (other than referential constraints) are re-created on the target table using the same constraint names. However, for unique and primary constraints the underlying index name may be different than the index name on the source table.
Table flags
The table flags of the source table are created on the target table as soon as the target table is created in the INIT phase. These flags are as follows: 'append_mode', 'locksize', 'volatile', 'compression', 'datacapture', 'pctfree', 'logindexbuild', 'owner', and 'droprule'. These flags are then checked at the end of the COPY phase and during the SWAP phase. If there are any changes in the flags, they are updated in the target table.

To keep the database recoverable and compatible with HADR setups, ADMIN_MOVE_TABLE does not copy the NOT LOGGED INITIALLY information from the source to the target table.

Grant/Revoke
During the SWAP phase, the stored procedure goes through the entries in SYSCAT.TABAUTH and reproduces the granting of privileges on the table to users/groups/roles.
If the caller of the stored procedure does not have ACCESSCTRL or SECADM authority, then the CONTROL privilege cannot be granted. A list of all users/groups/roles that were not granted the CONTROL privilege can be found in the protocol table where the key is WARNINGS.
Usage lists
During the brief offline period during the SWAP phase, usage lists defined on the source table or on the source table indexes are dropped and re-created on the target table. Any usage list that was in the active state before the move is re-activated after the move.

If auto_revalidation is enabled on the database, and the USE_AUTO_REVAL option is not set (which is the default if auto_revalidation is enabled), then the views are not dropped as outlined previously. Instead, the views remain and are revalidated by auto_revalidation. Triggers are dropped and re-created by the stored procedure because there is currently a limitation with renaming a table that has a trigger defined on it as the subject table. There might also be invalid views after the stored procedure completes if the auto_reval configuration parameter is set to DEFERRED. Objects are revalidated automatically on the next access, or you can call the ADMIN_REVALIDATE_DB_OBJECTS procedure to revalidate all objects in the database.

Online table move with clustering over an index

It is possible to cluster the target table by an index. If a clustering index is present on the source table, the target table is clustered by that index by default. The default can be changed after the INIT phase, which implies a phase-wise (multi-step) execution of the online table move. Calling the online table move as a single MOVE operation with no clustering index present results in the stored procedure clustering the target table by the unique or primary index. If a clustering index exists, the stored procedure clusters the target table by using the clustering index.

If there is a cluster index on the source table, it is possible to not cluster the target table on the cluster index by performing a multi-step move and deleting the key entries COPY_INDEXSCHEMA and COPY_INDEXNAME from the protocol table after the INIT phase.

It is possible to cluster the target table by any secondary index by performing a multi-step move and inserting/updating the key entries COPY_INDEXSCHEMA and COPY_INDEXNAME in the protocol table with the required index to cluster the target table.
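
As a sketch, assuming the ADMIN_MOVE_TABLE_UTIL procedure with its (tabschema, tabname, action, key, value) parameters, the protocol table entries could be adjusted after the INIT phase as follows (all names are placeholders; the value argument in the DELETE calls is left empty here):

CALL SYSPROC.ADMIN_MOVE_TABLE_UTIL('MYSCHEMA', 'T1', 'UPSERT', 'COPY_INDEXSCHEMA', 'MYSCHEMA')
CALL SYSPROC.ADMIN_MOVE_TABLE_UTIL('MYSCHEMA', 'T1', 'UPSERT', 'COPY_INDEXNAME', 'IX_T1_SECONDARY')
-- or, to avoid clustering the target table, delete both entries:
CALL SYSPROC.ADMIN_MOVE_TABLE_UTIL('MYSCHEMA', 'T1', 'DELETE', 'COPY_INDEXSCHEMA', '')
CALL SYSPROC.ADMIN_MOVE_TABLE_UTIL('MYSCHEMA', 'T1', 'DELETE', 'COPY_INDEXNAME', '')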

Online table move in a Db2 pureScale® environment
Prior to Version 11.5, in a Db2 pureScale environment, concurrent read access during index creation is the default behavior. Concurrent write access can be enabled by setting the registry variable DB2_INDEX_CREATE_ALLOW_WRITE to ON.

Starting in Version 11.5, in a Db2 pureScale environment, concurrent write access is enabled by default. For more information, see: DB2_INDEX_CREATE_ALLOW_WRITE.

If concurrent write access is not enabled, perform the following action before running the ADMIN_MOVE_TABLE procedure in a Db2 pureScale environment:

  • If an index is not defined on the source table, create an index manually before you move the table.
Changing index attributes

If a user wants to modify the attributes of any existing indexes (such as index clustering, index compression, or changing global to local indexes and vice versa), they can manually make these changes during a multi-step move operation.

This can be done by performing the INIT and COPY phases of the move via a multi-step move. Then manually make any changes to the indexes on the target table. The name of the target table can be found in the protocol table. After the modifications have finished, resume with the REPLAY and SWAP phases.

Using the ADMIN_MOVE_TABLE procedure to convert row-organized tables into column-organized tables
You can convert the tables in either of the following ways:
  • By specifying a column-organized target table
  • By specifying the ORGANIZE BY COLUMN clause for the organize_by_clause parameter.
The table remains online while the ADMIN_MOVE_TABLE stored procedure performs the conversion.

If you convert a row-organized table into a column-organized table, applicable column-organized table restrictions on queries (that is, limited isolation levels) take effect at the end of processing, after the new table becomes visible to queries.

Unique indexes and non-unique indexes on column-organized tables are moved with the table if the options on the indexes are supported for those tables. The following index features are not supported on column-organized tables and are ignored when the table is converted from row-organized to column-organized:
  • RANDOM key order
  • CLUSTER clause
  • INCLUDE columns

Other unsupported features cause the move to fail with SQLCODE values such as SQL2103N or SQL1667N.

The ADMIN_MOVE_TABLE procedure requires triggers on the source table to capture changes. However, because this type of trigger is not yet supported on column-organized tables, a column-organized source table can be moved only when you use the ALLOW_READ_ACCESS option. When the ALLOW_READ_ACCESS option is used, delete and insert operations are not allowed on the source table while the move is in progress.
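
For illustration, a sketch of moving a column-organized source table with read-only access during the move (MYSCHEMA and MYCOLTAB are illustrative names):

  -- Sketch: move a column-organized source table; only reads are
  -- allowed on the source table while the move is in progress
  CALL SYSPROC.ADMIN_MOVE_TABLE(
    'MYSCHEMA',
    'MYCOLTAB',
    '', '', '', '', '', '', '',
    'ALLOW_READ_ACCESS',
    'MOVE'
  )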

The PARTITION BY RANGE clause is not supported for column-organized tables. To ignore the PARTITION BY RANGE clause, specify the minus sign (-) for the data_part parameter.
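
For example, a sketch of converting a range-partitioned, row-organized table into a column-organized table while ignoring its PARTITION BY RANGE clause (MYSCHEMA and MYRANGETAB are illustrative names):

  -- Sketch: the minus sign in the data_part position discards the
  -- PARTITION BY RANGE clause of the source table
  CALL SYSPROC.ADMIN_MOVE_TABLE(
    'MYSCHEMA',
    'MYRANGETAB',
    '', '', '',
    'ORGANIZE BY COLUMN',
    '',
    '-',
    '',
    '',
    'MOVE'
  )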

MDC or ITC tables are converted into column-organized tables when you specify the ORGANIZE BY COLUMN clause for the organize_by_clause parameter.

If you use the load utility to populate a column-organized target table, the ADMIN_MOVE_TABLE procedure inherits column-organized table restrictions that apply to the load utility.

The REORG option is not supported.

A long offline SWAP phase occurs when the table has foreign key (referential integrity) constraints that are defined on it, because the foreign key relationship must be checked.

Restrictions

The following restrictions apply to the ADMIN_MOVE_TABLE stored procedure:
  • Only simple tables are supported as the source table. No materialized query tables, typed tables, range clustered tables, system tables, views, nicknames, or aliases are permitted.
  • A table cannot be moved if there is an index with an expression-based key defined on the table.
  • A table cannot be moved if an event monitor is currently active on the table.
  • Tables without a unique index are subject to a complex and potentially expensive replay phase.
  • A generated column cannot be part of the MDC specification.
  • Be aware of the large disk space requirements, as the procedure creates two copies of the table and indexes, plus a staging table and log space.
  • Copy performance may be an issue because most of the data is moved to the new table by using an INSERT from SELECT operation.
  • The VERIFY operation for tables without a unique index does not work on tables with LOBs.
  • The SYSTOOLSPACE table space must be created and accessible to 'PUBLIC' (see the sketch after this list).
  • Lock timeouts are possible during the COPY phase because of long running transactions on the source table.
  • Deadlocks can occur during the SWAP phase.
  • Deadlocks can occur on a source table with non-unique indexes and several update processes.
  • With VARCHAR2 support enabled, the database treats the empty string and NULL as equivalent values, but the single blank is a distinct value. With VARCHAR2 support enabled, the organize_by_clause, partkey_cols, and data_part parameters can use a single blank as distinct from the empty string and NULL.
  • A table cannot be moved if it is in the Set Integrity Pending state.
  • A table cannot be moved if there are any XSR objects dependent on it.
  • CHECK constraints with three-part names are not supported and will return SQL0205N. Use only the column name instead.
  • PARTITION BY RANGE clause is not supported for column-organized tables and for columns with the GENERATED clause.
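
A minimal sketch of creating the SYSTOOLSPACE table space and making it usable by PUBLIC, assuming an automatic storage database (the storage clause can differ in your environment):

  -- Sketch: create SYSTOOLSPACE and allow PUBLIC to use it
  CREATE TABLESPACE SYSTOOLSPACE
    IN IBMCATGROUP
    MANAGED BY AUTOMATIC STORAGE
    EXTENTSIZE 4;

  GRANT USE OF TABLESPACE SYSTOOLSPACE TO PUBLIC;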

Examples

  1. This example calls the stored procedure using the first method, where the target table is defined within the procedure, to move a table named T1 which is located in the schema SVALENTI.
    CALL SYSPROC.ADMIN_MOVE_TABLE(
    'SVALENTI',
    'T1',
    'ACCOUNTING',
    'ACCOUNT_IDX',
    'ACCOUNT_LONG',
    '',
    '',
    '',
    'CUSTOMER VARCHAR(80), REGION CHAR(5), YEAR INTEGER, CONTENTS CLOB',
    '',
    'MOVE')
    The following is an example of output from this query:
    Result set 1
    ------------
    
    KEY                          VALUE
    ------------------------     -------------------------------------
    AUTHID                       SVALENTI
    CLEANUP_END                  2009-02-13-11.34.07.609575
    CLEANUP_START                2009-02-13-11.34.07.369331
    COPY_END                     2009-02-13-11.34.05.148018
    COPY_OPTS                    BY_KEY,OVER_INDEX
    COPY_START                   2009-02-13-11.34.04.841292
    COPY_TOTAL_ROWS              100
    INDEXNAME                    T1_INDEX
    INDEXSCHEMA                  SVALENTI
    INDEX_CREATION_TOTAL_TIME    0
    INIT_END                     2009-02-13-11.34.04.552875
    INIT_START                   2009-02-13-11.34.03.013563
    PAR_COLDEF                   CUSTOMER VARCHAR(80), REGION CHAR(5), 
                                     YEAR INTEGER, CONTENTS CLOB
    REPLAY_END                   2009-02-13-11.34.06.198369
    REPLAY_START                 2009-02-13-11.34.05.164582
    REPLAY_TOTAL_ROWS            100
    REPLAY_TOTAL_TIME            5
    STATUS                       COMPLETE
    SWAP_END                     2009-02-13-11.34.07.214447
    SWAP_RETRIES                 0
    SWAP_START                   2009-02-13-11.34.06.244506
    VERSION                      09.07.0000
    
    22 record(s) selected.
    
    Return Status = 0
  2. This example calls the stored procedure using the second method, where the target table is created outside the procedure and is then named within the target_tabname parameter, to move the same table as in the previous example.
    The first step is to create the table manually:
    CREATE TABLE SVALENTI.T1_TARGET (
       CUSTOMER VARCHAR(80), 
       REGION CHAR(5), 
       YEAR INTEGER, 
       CONTENTS CLOB) 
    IN ACCOUNTING 
    INDEX IN ACCOUNT_IDX 
    LONG IN ACCOUNT_LONG
    Then call the stored procedure and pass in the name of the target table:
    CALL SYSPROC.ADMIN_MOVE_TABLE(
    'SVALENTI',
    'T1',
    'T1_TARGET',
    '',
    'MOVE')
    The following is an example of output from this query:
    Result set 1
    ------------
    
    KEY                          VALUE
    ------------------------     -------------------------------------------
    AUTHID                       SVALENTI
    CLEANUP_END                  2009-02-13-11.37.49.283090
    CLEANUP_START                2009-02-13-11.37.49.125786
    COPY_END                     2009-02-13-11.37.47.806060
    COPY_OPTS                    BY_KEY,OVER_INDEX
    COPY_START                   2009-02-13-11.37.47.446616
    COPY_TOTAL_ROWS              0
    INDEXNAME                    T1_INDEX
    INDEXSCHEMA                  SVALENTI
    INDEX_CREATION_TOTAL_TIME    1
    INIT_END                     2009-02-13-11.37.47.287703
    INIT_START                   2009-02-13-11.37.46.052952
    PAR_COLDEF                   using a supplied target table so COLDEF 
                                     could be different
    REPLAY_END                   2009-02-13-11.37.48.785503
    REPLAY_START                 2009-02-13-11.37.47.822109
    REPLAY_TOTAL_ROWS            0
    REPLAY_TOTAL_TIME            0
    STATUS                       COMPLETE
    SWAP_END                     2009-02-13-11.37.48.977745
    SWAP_RETRIES                 0
    SWAP_START                   2009-02-13-11.37.48.825228
    VERSION                      09.07.0000
    22 record(s) selected.
    
    Return Status = 0
  3. In this example, the ADMIN_MOVE_TABLE stored procedure is used to convert the row-organized STAFF table into a column-organized table. This example specifies the existing STAFF2 table as the target table.
    CALL SYSPROC.ADMIN_MOVE_TABLE(
      'OTM01COL',
      'STAFF',
      'STAFF2',
      'COPY_USE_LOAD',
      'MOVE'
    )
  4. In this example, the ADMIN_MOVE_TABLE stored procedure is used to convert the row-organized STAFF table into a column-organized table without specifying a target table. The ORGANIZE BY COLUMN clause is specified as a parameter so that the target table is created as a column-organized table.
    CALL SYSPROC.ADMIN_MOVE_TABLE(
      'OTM01COL',
      'STAFF',
      '',
      '',
      '',
      'ORGANIZE BY COLUMN',
      '',
      '',
      '',
      'COPY_USE_LOAD',
      'MOVE'
    )
  5. Use the ADMIN_MOVE_TABLE stored procedure to convert the row-organized ACT table into a column-organized table. The ACT table has been altered to define an enforced foreign key constraint, as shown in the following example:
    ALTER TABLE "TANJINXU"."ACT"
      ADD CONSTRAINT "RPAA" FOREIGN KEY ("ACTNO")
        REFERENCES "TANJINXU"."ACT" ("ACTNO")
        ON DELETE RESTRICT
        ON UPDATE NO ACTION
        ENFORCED
        ENABLE QUERY OPTIMIZATION
    Because enforced foreign key (referential integrity) constraints are not supported on column-organized tables, the NOT_ENFORCED option must be specified so that the target table can be created as a column-organized table.
    CALL ADMIN_MOVE_TABLE(
      'TANJINXU',
      'ACT',
      '',
      '',
      '',
      'ORGANIZE BY COLUMN',
      '',
      '',
      '',
      'COPY_USE_LOAD,NOT_ENFORCED',
      'MOVE'
    )
  6. Check the status of a table move for a table named mytable by using the following query:
    SELECT * FROM SYSTOOLS.ADMIN_MOVE_TABLE WHERE KEY ='STATUS' AND TABNAME='mytable'
    If the status is not COMPLETE, you can call the stored procedure again, specifying the appropriate operation:
    • If the status of the move is INIT, use the INIT operation, followed by the remaining phases.
    • If the status of the move is COPY, use the COPY operation, followed by the remaining phases.
    • If the status of the move is REPLAY, use the REPLAY or SWAP operation.
    • If the status of the move is CLEANUP, use the CLEANUP operation.

    If the status of an online table move is neither COMPLETE nor CLEANUP, you can call the stored procedure again, specifying the CANCEL operation.

    If the status is not COMPLETE and the MOVE operation was specified, cancel the move and start over, because the MOVE operation runs the INIT, COPY, REPLAY, and SWAP operations in one step (see the sketch that follows).
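
    A sketch of canceling an incomplete single-step MOVE so that it can be restarted, reusing the table from example 1 for illustration:
    -- Sketch: cancel an incomplete move before starting over
    CALL SYSPROC.ADMIN_MOVE_TABLE('SVALENTI','T1','','','','','','','','','CANCEL')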

  7. Estimate for single table ROW->BLU

    ADMIN_MOVE_TABLE is called with operation ESTIMATE. The source table is organized by row. The target table is organized by column. The estimated size of the target table with a newly created dictionary is 4739 KB. The estimated percentage of pages saved increases from 61% to 92%. A sample of 1 million rows of the source table was used for the estimation.

    call sysproc.admin_move_table('MJUNGFER','ZMJ1','','','','ORGANIZE BY COLUMN','','','','','ESTIMATE')
    
      Result set 1
      --------------
      TABSCHEMA TABNAME  ROWS_SAMPLED         SOURCE_SIZE          TARGET_SIZE          SOURCE_PCTPAGESSAVED TARGET_PCTPAGESSAVED
      --------- -------  -------------------- -------------------- -------------------- -------------------- --------------------
      MJUNGFER  ZMJ1                  1000000                23104                 4739                   61                   92
    
      1 record(s) selected.
    
      Return Status = 0
    The session table is queried after successful completion:
    select substr(tabschema,1,20) as tabschema,  substr(tabname,1,20) as tabname, SOURCE_SIZE, TARGET_SIZE,
    SOURCE_PCTPAGESSAVED, TARGET_PCTPAGESSAVED from session.admin_move_table
    
    TABSCHEMA            TABNAME              SOURCE_SIZE          TARGET_SIZE          SOURCE_PCTPAGESSAVED TARGET_PCTPAGESSAVED
    -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
    MJUNGFER             ZMJ1                                23104                 4739                   61                   92
    
      1 record(s) selected.
  8. Estimate for single table BLU->BLU
    ADMIN_MOVE_TABLE is called with operation ESTIMATE. Both the source and the target table are organized by column. The estimated size of the target table with a newly created dictionary is 3797 KB. The estimated percentage of pages saved does not change significantly. A sample of 1 million rows of the source table was used for the estimation.
    call sysproc.admin_move_table('MJUNGFER','ZMJ1','','','','','','','','','ESTIMATE')
    
      Result set 1
      --------------
      TABSCHEMA TABNAME  ROWS_SAMPLED         SOURCE_SIZE          TARGET_SIZE          SOURCE_PCTPAGESSAVED TARGET_PCTPAGESSAVED
      --------- -------  -------------------- -------------------- -------------------- -------------------- --------------------
      MJUNGFER  ZMJ1                  1000000                4272                 3797                   91                   92
    
      1 record(s) selected.
    
      Return Status = 0
    The session table is queried after successful completion:
    select substr(tabschema,1,20) as tabschema,  substr(tabname,1,20) as tabname, SOURCE_SIZE, TARGET_SIZE,
    SOURCE_PCTPAGESSAVED, TARGET_PCTPAGESSAVED from session.admin_move_table
    
    TABSCHEMA            TABNAME              SOURCE_SIZE          TARGET_SIZE          SOURCE_PCTPAGESSAVED TARGET_PCTPAGESSAVED
    -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
    MJUNGFER             ZMJ1                                4272                 3797                    91                   92
    
      1 record(s) selected.
  9. Estimate for single table BLU->ROW
    ADMIN_MOVE_TABLE is called with operation ESTIMATE. The source table is organized by column. The target table is organized by row. The estimated size of the target table is 18269 KB. The estimated percentage of pages saved decreases from 91% to 61%. A sample of 1 million rows of the source table was used for the estimation.
    call sysproc.admin_move_table('MJUNGFER','ZMJ1','','','','','','','','ALLOW_READ_ACCESS','ESTIMATE')
    
      Result set 1
      --------------
      TABSCHEMA TABNAME  ROWS_SAMPLED         SOURCE_SIZE          TARGET_SIZE          SOURCE_PCTPAGESSAVED TARGET_PCTPAGESSAVED
      --------- -------  -------------------- -------------------- -------------------- -------------------- --------------------
      MJUNGFER  ZMJ1                  1000000                 4216                18269                   91                   61
    
      1 record(s) selected.
    
      Return Status = 0
    The session table is queried after successful completion:
    select substr(tabschema,1,20) as tabschema,  substr(tabname,1,20) as tabname, SOURCE_SIZE, TARGET_SIZE,
    SOURCE_PCTPAGESSAVED, TARGET_PCTPAGESSAVED from session.admin_move_table
    
    TABSCHEMA            TABNAME              SOURCE_SIZE          TARGET_SIZE          SOURCE_PCTPAGESSAVED TARGET_PCTPAGESSAVED
    -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
    MJUNGFER             ZMJ1                                 4216                18269                   91                   61
    
      1 record(s) selected.
  10. Estimate for entire schema
    call sysproc.admin_move_table('MJUNGFER','%',' ',' ',' ',' ',' ',' ',' ',' ','ESTIMATE')
    
  11. Estimate for subset of tables
    call sysproc.admin_move_table('SAPBW1','/BIC/%',' ',' ',' ',' ',' ',' ',' ',' ','ESTIMATE')