Descriptions of asnqcap parameters

These descriptions provide detail on the asnqcap parameters, their defaults, and why you might want to change the default in your environment.

activate (z/OS)

Default: activate=1140.0

Method of changing: When Q Capture starts

The activate parameter specifies the level of functionality that you want to enable for a Q Capture program. Under the Q Replication delivery model that began with Version 11.4 on z/OS®, you have the option of enabling or disabling new functions by using this parameter. This parameter is supported on z/OS only.

For example, you might install a PTF on z/OS that contains new functions, and you want to enable the functions that are included in the PTF. You would start Q Capture with the activate parameter and set its value to the newly available functional level. The initial function level for Version 11.4 is 1140.0. The first function level that includes new features is 1140.100. The function level for V10.2.1 is 1021.0.

The value that you set with this parameter is stored in the CURRENT_LEVEL column in the IBMQREP_CAPPARMS table. The limit for activate is the value of the POSSIBLE_LEVEL column of the IBMQREP_CAPPARMS table, which indicates the maximum functional level that can be set for Q Capture. If the level of the control tables does not support the functional level that you specify with the activate parameter, the ASN0734E message is issued and Q Capture does not start.
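For example, a start command that enables the 1140.100 function level might look like the following sketch (the capture server and schema names are placeholders, not values from this document):

```shell
# Start Q Capture on z/OS and enable the 1140.100 function level.
# DSN1 and ASN are hypothetical server/schema names.
asnqcap capture_server=DSN1 capture_schema=ASN activate=1140.100
```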

add_partition (Linux, UNIX, Windows)

Default: add_partition=n

Method of changing: When Q Capture starts

The add_partition parameter specifies whether a Q Capture program starts reading the Db2® recovery log for partitions that were added since the last time the Q Capture program was restarted.

Specify add_partition=y when you start a Q Capture program to have it read the log for newly added partitions. For each new partition, when the Q Capture program is started in warm start mode, Q Capture reads the log starting from the first log sequence number (LSN) that Db2 used after the first database CONNECT statement was issued for the Db2 instance.

Oracle sources: The add_partition parameter does not apply to Q Capture on Oracle sources, and has no effect if specified.
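A warm start that picks up newly added partitions might be sketched as follows (the server and schema names, and the startmode value, are assumptions for illustration):

```shell
# Warm-start Q Capture and begin reading the log for partitions
# that were added since the last run. SAMPLE/ASN are placeholders.
asnqcap capture_server=SAMPLE capture_schema=ASN startmode=warmsi add_partition=y
```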

arm (z/OS)

Default: None

Method of changing: When Q Capture starts

The arm parameter specifies a three-character alphanumeric string that identifies a single instance of the Q Capture program to the Automatic Restart Manager. The value that you supply is appended to the ARM element name that Q Capture generates for itself: ASNQCxxxxyyyy (where xxxx is the data-sharing group attach name, and yyyy is the Db2 member name). You can specify a string of any length for the arm parameter, but the Q Capture program concatenates only up to three characters to the current name. If necessary, the Q Capture program pads the name with blanks to create a unique 16-byte name.

autostop

Default: autostop=n

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The autostop parameter controls whether a Q Capture program terminates when it reaches the end of the active Db2 log or Oracle redo log. By default, a Q Capture program does not terminate after reaching the end of the log.

Typically, the Q Capture program is run as a continuous process whenever the source database is active, so in most cases you would keep the default (autostop=n). Set autostop=y only for scenarios where the Q Capture program is run at set intervals, such as when you synchronize infrequently connected systems, or in test scenarios.

If you set autostop=y, the Q Capture program retrieves all eligible transactions and stops when it reaches the end of the log. You need to start the Q Capture program again to retrieve more transactions.
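For an interval-based scenario, a batch step might start Q Capture like this sketch (placeholder server and schema names):

```shell
# Run Q Capture as a periodic batch step: capture all eligible
# transactions up to the end of the log, then exit.
asnqcap capture_server=SAMPLE capture_schema=ASN autostop=y
```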

caf (z/OS)

Default: caf=n

Method of changing: When Q Capture starts

By default (caf=n), the Q Capture program uses Resource Recovery Services (RRS) connect on Db2 for z/OS. You can override this default and prompt Q Capture to use the Call Attach Facility (CAF) by specifying caf=y.

If RRS is not available, Q Capture switches to CAF and issues a message to warn that the program was not able to connect by using RRS because RRS is not started.

capstart_reorgcheck (z/OS)

Default for Db2 12 for z/OS: capstart_reorgcheck=n

Default for Db2 11 for z/OS: capstart_reorgcheck=y

Method of changing: When Q Capture starts

The capstart_reorgcheck parameter determines whether, before activating Q subscriptions, the Q Capture program checks for DDL changes to source tables that might require a REORG before replication can begin.

Starting with Db2 12, the log read API (IFI 306) returns the before and after values of a row update log record, both encoded in the version of the table at the time the log record is written. In Db2 11, the before value is encoded in the table version that was current at the time of the insert, until the row is updated again or a REORG is done. By returning the before value in the same format as the after value, Db2 12 removes the requirement for a REORG of the table space before a Q subscription is started for a table that was altered. This holds in the majority of cases, but not all.

If Q Capture stopped while it was behind in reading the Db2 logs, and updates to the table are followed by a REORG of the table space, the Q subscription fails with a decode error: Db2 cannot provide the before values in the current version, and Q Capture did not record information about the old version because the Q subscription was not active when the table was altered.

Recommendation: For Db2 12, you can change the default and specify CAPSTART_REORGCHECK=Y so that the REORG check is performed. If you specify CAPSTART_REORGCHECK=N and see the ASN0691E and ASN0748E messages with reason code 00C900A2 and Db2 secondary reason code 00C900B0, you must manually set the Q subscription state to I and start the Q subscription. These cases should be rare but are possible.

You can determine which tables had changes that require a REORG by running one of the queries in Rules for table space REORG after alters (z/OS).
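Following the recommendation above, re-enabling the check on Db2 12 might look like this sketch (server and schema names are placeholders):

```shell
# Db2 12: override the default and perform the REORG check
# when Q subscriptions are activated.
asnqcap capture_server=DSN1 capture_schema=ASN capstart_reorgcheck=y
```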

capture_path

Default: None

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The capture_path parameter specifies the directory where a Q Capture program stores its work files and log file. By default, the path is the directory where you start the program. You can change this path.

z/OS
Because the Q Capture program is a POSIX application, the default path depends on how you start the program:
  • If you start a Q Capture program from a USS command line prompt, the path is the directory where you started the program.
  • If you start a Q Capture program using a started task or through JCL, the default path is the home directory in the USS file system of the user ID that is associated with the started task or job.

To change the path, you can specify either a path name or a high-level qualifier (HLQ), such as //QCAPV9. When you use an HLQ, sequential files are created that conform to the file naming conventions for z/OS sequential data set file names. The sequential data sets are relative to the user ID that is running the program. Otherwise these file names are similar to the names that are stored in an explicitly named directory path, with the HLQ concatenated as the first part of the file name. For example, sysadm.QCAPV9.filename. Using an HLQ might be convenient if you want to have the Q Capture log and LOADMSG files be system-managed (SMS).

If you want the Q Capture started task to write to a .log data set with a user ID other than the ID that is executing the task (for example, TSOUSER), you must specify a single quotation mark (') as an escape character when you use the SYSIN format for input parameters to the started task. For example, if you want to use the high-level qualifier JOESMITH, the user ID TSOUSER that runs the Q Capture program must have RACF® authority to write data sets by using the high-level qualifier JOESMITH, as in the following example:

//SYSIN    DD  *
 CAPTURE_PATH=//'JOESMITH
/*     
Windows
If you start a Q Capture program as a Windows service, by default the program starts in the sqllib\bin subdirectory under the installation directory.
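The two forms of the parameter might be used as follows (the directory and server/schema names are placeholders; the //QCAPV9 HLQ form is from the description above):

```shell
# Linux, UNIX, Windows: write work files and the diagnostic log
# to an explicit directory (placeholder path).
asnqcap capture_server=SAMPLE capture_schema=ASN capture_path=/home/qrepl/capture

# z/OS USS: use a high-level qualifier so the files are created
# as z/OS sequential data sets.
asnqcap capture_server=DSN1 capture_schema=ASN capture_path=//QCAPV9
```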

capture_schema

Default: capture_schema=ASN

The capture_schema parameter lets you distinguish between multiple instances of the Q Capture program on a Q Capture server.

The schema identifies one Q Capture program and its control tables. Two Q Capture programs with the same schema cannot run on a server.

Creating more than one copy of a Q Capture program on a Q Capture server allows you to improve throughput by dividing data flow into parallel streams, or to meet different replication requirements while using the same source.
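Two independent instances on the same server might be started like this sketch (all names are placeholders):

```shell
# Two Q Capture instances on one server, distinguished by schema.
# Each instance uses its own set of control tables.
asnqcap capture_server=SAMPLE capture_schema=ASN1
asnqcap capture_server=SAMPLE capture_schema=ASN2
```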

capture_server

Default on z/OS: None

Default on Linux®, UNIX, Windows: capture_server=value of DB2DBDFT environment variable, if it is set

The capture_server parameter identifies the database or subsystem where a Q Capture program runs, and where its control tables are typically stored. Because a Q Capture program reads the source database log, in most cases it runs at the source database or subsystem. The exception is when Q Capture runs at a Db2 for z/OS log read proxy server.

Oracle sources: If you do not specify a Q Capture server, this parameter defaults to the value of the ORACLE_SID environment variable.

z/OS: You must specify the capture_server parameter. For data sharing, provide the group attach name instead of a subsystem name so that you can run the replication job in any LPAR.

captureupto (z/OS)

Default: None

Methods of changing: When Q Capture starts; while Q Capture is running

The captureupto parameter specifies a point at which the Q Capture program stops reading the log. Q Capture captures transactions that are committed up to that point and then stops.

You specify captureupto with a full or partial timestamp. The full timestamp uses the following format: YYYY-MM-DD-HH.MM.SS.mmmmmm. For examples of acceptable formats for partial timestamps, see Stopping a Q Capture program at a specified point.

You can also specify the keyword CURRENT_TIMESTAMP; the Q Capture program substitutes the current time and publishes transactions that are committed up to that time. Alternatively, specify the keyword EOL to prompt Q Capture to stop after it reaches the end of the active log.
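The full-timestamp and keyword forms might be used as in this sketch (server and schema names are placeholders; the timestamp follows the format given above):

```shell
# Capture committed work up to a fixed point in time, then stop.
asnqcap capture_server=DSN1 capture_schema=ASN captureupto=2015-09-04-13.30.01.000000

# Capture up to the end of the active log, then stop.
asnqcap capture_server=DSN1 capture_schema=ASN captureupto=EOL
```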

commit_interval

Default: commit_interval=500 milliseconds (a half second) for Db2 sources; 1000 milliseconds (1 second) for Oracle sources

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The commit_interval parameter specifies how often, in milliseconds, a Q Capture program commits transactions to MQ. By default, a Q Capture program waits 500 milliseconds (a half second) between commits. At each interval, the Q Capture program issues an MQCMIT call. This signals the queue manager to make messages that were placed on send queues available to the Q Apply program or other user applications.

All of the transactions that are grouped within an MQCMIT call are considered to be an MQ unit of work, or transaction. Typically, each MQ transaction contains several database transactions. If a database transaction is large, the Q Capture program does not issue an MQCMIT call even if the commit interval is reached. The Q Capture program commits only after the entire large database transaction is put on the send queue.

When the number of committed database transactions that are read by a Q Capture program reaches 128, the program issues an MQCMIT call regardless of your setting for commit_interval.

Finding the best commit interval is a compromise between latency (the delay between the time transactions are committed at the source and target databases) and CPU overhead associated with the commit process:

To reduce latency, shorten the commit interval
Transactions will be pushed through with less delay. This is especially important if changes to the source database are used to trigger events. If the number of transactions published per commit interval is high, you might want to have the Q Capture program commit fewer transactions at a time to MQ. See Determining the number of transactions published per commit interval for more detail.
To reduce CPU overhead, lengthen the commit interval
A longer commit interval lets you send as many database transactions as possible for each MQ transaction. A longer commit interval also reduces I/O that is caused by logging of messages. If you lengthen the commit interval, you might be limited by the memory allocated for a Q Capture program, the maximum depth (number of messages) for send queues, and the queue manager's maximum uncommitted messages (MAXUMSGS) attribute. If a Q Capture program waits longer between commits, the publication of some transactions might be delayed, which could increase latency.
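The tradeoff above might be expressed in a start command like this sketch (server and schema names are placeholders; the interval values are illustrative):

```shell
# Shorten the commit interval to 250 ms to reduce end-to-end latency;
# lengthen it (for example, to 2000 ms) to reduce commit CPU overhead.
asnqcap capture_server=SAMPLE capture_schema=ASN commit_interval=250
```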

droptab_action (z/OS)

Default: droptab_action=w

The droptab_action parameter specifies the action that the Q Capture program takes when it detects that a source table was dropped. By default (droptab_action=w), Q Capture issues the ASN0197W warning message but takes no other action. The Q subscription for the table remains active, but there are no log records to read for the table.

You might prefer to deactivate the Q subscription when a source table is dropped. To do so, specify droptab_action=s. For example, if a source table is dropped and then later recreated, a mismatch can occur between the values in the Db2 system catalog table SYSIBM.SYSTABLES and the replication control table IBMQREP_TABVERSION, which tracks schema changes in source tables. This situation can lead to replication problems when other DDL changes are made to the source table. Specifying that the Q subscription be deactivated enables you to correct any problems before restarting replication, or to delete the Q subscription if it is no longer needed.

IFI_FILTER (z/OS only)

Syntax: IFI_FILTER=Y|N

Default: Y

When this parameter is set, both Q Capture and SQL Capture operations use the Db2 for z/OS IFI306 filter for reading log records. In this log read mode, only the DMS log records for tables that are subscribed by the capture program are returned by Db2. A significant performance boost is seen when replicated table spaces are compressed, as the non-subscribed log records do not have to be decompressed.

IFI_FILTER_PARTS (z/OS only)

Syntax: IFI_FILTER_PARTS=Y|N

Default: N for Db2 for z/OS 12 and Y for Db2 for z/OS 13.

When this parameter is set, both Q Capture and SQL Capture use Db2 IFI306 log record filtering by partition ranges. It is the preferred method for reading log records with Db2 for z/OS 13. For Db2 for z/OS 12, APAR NNNNNNN is required for filtering by table partitions to be supported by Db2.

Log read filtering gives a significant performance boost when you use Q Replication subscriptions by partition range, because Db2 returns only the log records for changes to rows in the specified partitions. If you do not use subscriptions by partition range and you are replicating non-partitioned tables, setting this parameter has no impact on performance, but it still uses the recommended log read interface for Db2 for z/OS 13.

IFI_FILTER_ADD_CLONE_PSID (z/OS only)

Syntax: IFI_FILTER_ADD_CLONE_PSID=Y|N

This is an asnqcap and asncap startup parameter (for service use only). You must set this parameter to Y if you are replicating Db2 for z/OS cloned tables and use the IFI_FILTER option. See IBM Support on IFI_FILTER_ADD_CLONE_PSID. Db2 V12 requires IIDR APAR PH51689, and Db2 V13 requires Db2 APAR PH52122.

ignore_transid

Default: None

Method of changing: When Q Capture starts

The ignore_transid=transaction_ID parameter specifies that the Q Capture program ignores the transaction that is identified by transaction_ID. The transaction is not replicated or published. You can use this parameter if you want to ignore a very large transaction that does not need to be replicated, for example a large batch job. The value for transaction_ID is a 10-byte hexadecimal identifier in the following format:
z/OS
0000:xxxx:xxxx:xxxx:mmmm

Where xxxx:xxxx:xxxx is the transaction ID, and mmmm is the data-sharing member ID. You can find the member ID in the last 2 bytes of the log record header in the LOGP output. The member ID is 0000 if data-sharing is not enabled.

Linux, UNIX, Windows
nnnn:0000:xxxx:xxxx:xxxx

Where xxxx:xxxx:xxxx is the transaction ID, and nnnn is the partition identifier for partitioned databases (this value is 0000 for non-partitioned databases).

Tip: The shortened version transid is also acceptable for this parameter.
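On z/OS without data sharing, skipping one large batch transaction might look like this sketch (the server, schema, and transaction ID are all illustrative, following the format above):

```shell
# Skip one large batch transaction (member ID 0000 = no data sharing).
# The transaction ID shown is a hypothetical example.
asnqcap capture_server=DSN1 capture_schema=ASN ignore_transid=0000:0000:0000:31A2:0000
```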

igncasdel

Default: igncasdel=n

Method of changing: When Q Capture starts; at the Q subscription level using replication administration tools (value stored in IBMQREP_SUBS table)

The igncasdel parameter specifies whether the Q Capture program replicates delete operations that result from the delete of parent rows on tables with referential integrity relationships (cascading deletes). You can use this option to reduce the amount of data that needs to be propagated when the delete of the parent row will be cascaded at the target.

By default, when a parent row is deleted Q Capture replicates the delete operations from child rows. The Q Apply program reorders transactions to ensure that no child row is deleted at the target before its parent row is deleted. If you specify igncasdel=y, Q Capture replicates only the delete of the parent row. Use this option to avoid redundant delete operations by the Q Apply program when replication of the parent row delete would cause cascading deletes at the target table.

You can also specify this option at the Q subscription level by using the replication administration tools to change the value of the IGNCASDEL column in the IBMQREP_SUBS table from the default of N to Y. If you specify Y in the IBMQREP_SUBS table, the setting for the igncasdel parameter is overridden for the individual Q subscription.

ignsetnull

Default: ignsetnull=n

Method of changing: When Q Capture starts; at the Q subscription level using replication administration tools (value stored in IBMQREP_SUBS table)

The ignsetnull parameter specifies that the Q Capture program should not replicate UPDATE operations that result from the deletion of parent rows in tables with referential integrity relationships when the ON DELETE SET NULL rule is in effect.

By default, when a parent row is deleted and the ON DELETE SET NULL rule is in effect, Q Capture replicates these UPDATE operations in which one or more column values are set to NULL. If ON DELETE SET NULL is in effect at the target, you can set ignsetnull=y and Q Capture ignores these UPDATE operations.

You can also specify this option at the Q subscription level by using the replication administration tools to change the value of the IGNSETNULL column in the IBMQREP_SUBS table from the default of N to Y. If you specify Y in the IBMQREP_SUBS table, the setting for the ignsetnull parameter is overridden for the individual Q subscription.

igntrig

Default: igntrig=n

Method of changing: When Q Capture starts; at the Q subscription level using replication administration tools (value stored in IBMQREP_SUBS table)

The igntrig parameter specifies that the Q Capture program should discard trigger-generated rows. When a trigger on the source table generates a secondary SQL statement after an SQL operation, both the initial and secondary SQL statements are replicated to the target. These secondary statements create conflicts because the trigger on the target table generates the same rows when source changes are applied. Setting igntrig=y prompts the Q Capture program to not capture any trigger-generated SQL statements.

You can also specify the option at the Q subscription level by changing the value of the IGNTRIG column in the IBMQREP_SUBS table from the default of N to Y. If you specify Y in the IBMQREP_SUBS table, the setting for the igntrig parameter is overridden for the individual Q subscription.

If you use igntrig=y, be sure to define triggers on target tables as identical to triggers on the source table. Failure to do so can result in conflicts when Q Apply updates a target table that has a non-matching trigger.

Note about INSTEAD OF triggers: If the source table is updated by INSTEAD OF triggers on a view and the igntrig parameter is set to y, the Q Capture program does not replicate the change to the source table.

lob_send_option

Default: lob_send_option=I

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The lob_send_option parameter specifies whether the Q Capture program sends large object (LOB) values inline (I) within a transaction message or in a separate message (S). By default (lob_send_option=I), LOB values are sent inline if possible to improve performance. If the size of a LOB value causes the transaction message to exceed the max_message_size limit for the send queue, the LOB is sent in a separate message and all LOB values in the same transaction row are also sent separately. If you are replicating LOB data, the value of max_message_size determines how often the Q Capture program accesses the source table to fetch the LOB data in multiple chunks (one chunk per message). A low maximum message size can impede Q Capture performance in replicating or publishing LOB data.

If you set lob_send_option=S to have LOB values sent in a separate LOB message, the LOB values might not fit into a single LOB message. If the size of the LOB value exceeds the max_message_size value for the Q Capture program, then the LOB message is divided into two or more smaller messages. If you expect to replicate or publish many LOB values or BLOB values, allocate sufficient memory and storage, and set the queue depth accordingly.

When the Q Capture program is using the separate LOB mode, LOB values for all LOB columns that are part of a Q subscription or publication are sent for every row in a transaction. This behavior results in more MQ messages if a LOB value is updated multiple times in a transaction or if the CHANGED_COLS_ONLY option in the Q subscription or publication is set to N.

LOB values for all LOB columns that are part of a Q subscription or publication are sent for key updates regardless of the CHANGED_COLS_ONLY setting.

On Linux, UNIX, and Windows, the Q Capture program can fetch LOB data directly from the Db2 recovery log. See Improving performance when replicating LOB data for more details.

log_commit_interval

Default: log_commit_interval=30 seconds

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The log_commit_interval parameter specifies an interval in seconds for how often the Q Capture log reader thread commits. The default is every 30 seconds. Db2 APAR PI12599 (RSU1406 in Db2) added a new S DBD lock for IFCID 306 in Db2 10 and 11. After you apply the APAR, you might see locking issues with DDL operations on compressed tables because the replication capture process holds the IFCID lock. If you detect that Q Capture is creating contention with DDL operations on the source database, shorten this value so that Q Capture commits, and releases any locks, more frequently.
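Shortening the interval to relieve DDL contention might look like this sketch (server and schema names are placeholders; the 10-second value is illustrative):

```shell
# Commit the log reader thread every 10 seconds instead of every 30
# so that locks are released more frequently.
asnqcap capture_server=DSN1 capture_schema=ASN log_commit_interval=10
```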

logrdbufsz

Default: logrdbufsz=66KB for z/OS; 256KB for Linux, UNIX, and Windows

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The logrdbufsz parameter specifies the size of the buffer in KB that the Q Capture program passes to Db2 when Q Capture retrieves log records. Db2 fills the buffer with available log records that Q Capture has not retrieved. For partitioned databases, Q Capture allocates a buffer of the size that is specified by logrdbufsz for each partition.

The default values should be optimal for most situations. However, you may want to increase this value if you have a high volume of data changes and sufficient memory available.

logrd_error_action

Default: logrd_error_action=D

Method of changing: When Q Capture starts

The logrd_error_action parameter specifies the action that the Q Capture program takes when it encounters a compression dictionary error while reading the recovery log. A "permanent" error means that the table space was reorganized twice with KEEPDICTIONARY=NO. A "transient" error is any other compression dictionary error.

D (default)
For a permanent dictionary error, deactivate the Q subscription. For a transient dictionary error, stop the Q Capture program.
E
For either type of error, deactivate the Q subscription.
S
For either type of error, stop the Q Capture program.

logread_prefetch (Linux, UNIX, Windows)

Default: logread_prefetch=y for partitioned databases; n for nonpartitioned databases

Method of changing: When Q Capture starts

The logread_prefetch parameter specifies whether the Q Capture program uses separate threads to prefetch log records from each partition in a partitioned database. By default (logread_prefetch=y), Q Capture uses separate log-reader threads to connect to each partition. Using separate threads can increase Q Capture throughput but might increase CPU usage.

If you specify logread_prefetch=n, a single Q Capture log reader thread connects to all partitions.

logreuse

Default: logreuse=n

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

Each Q Capture program keeps a diagnostic log file that tracks its work history, such as start and stop times, parameter changes, errors, pruning, and the points where it left off while reading the database log.

By default, the Q Capture program adds to the existing log file when the program restarts. This default lets you keep a history of the program's actions. If you don't want this history or want to save space, set logreuse=y. The Q Capture program clears the log file when it starts, then writes to the blank file.

The log is stored by default in the directory where the Q Capture program is started, or in a different location that you set using the capture_path parameter.

z/OS: The log file name is capture_server.capture_schema.QCAP.log. For example, SAMPLE.ASN.QCAP.log. Also, if capture_path is specified with slashes (//) to use a High Level Qualifier (HLQ), the file naming conventions of z/OS sequential data set files apply, and capture_schema is truncated to eight characters.

Linux, UNIX, Windows: The log file name is db2instance.capture_server.capture_schema.QCAP.log. For example, Db2.SAMPLE.ASN.QCAP.log.

Oracle sources: The log file name is capture_server.capture_schema.QCAP.log. For example, ORASAMPLE.ASN.QCAP.log.

logstdout

Default: logstdout=n

By default, a Q Capture program writes its work history only to the log. You can change the logstdout parameter if you want to see the program's history on the standard output (stdout) in addition to the log.

Error messages and some log messages (initialization, stop, subscription activation, and subscription deactivation) go to both the standard output and the log file regardless of the setting for this parameter.

lsn

Default: None

Method of changing: When Q Capture starts

The lsn parameter specifies the log sequence number at which the Q Capture program starts during a warm restart. You specify both the lsn and maxcmtseq parameters to start Q Capture from a known point in the Db2 log. When specifying the lsn parameter, you also must specify the maxcmtseq parameter in the same command invocation, and you cannot use these parameters if the value of startmode is cold.

This value represents the earliest log sequence number that the Q Capture program found for which a commit or abort record has not yet been found. You can obtain the value for lsn from the restart message by using the asnqmfmt command. You can also use the value in the RESTART_SEQ column of the IBMQREP_CAPMON table. If you use the latter method, choose an entry in the monitor table that is older than the time that Q Capture stopped to ensure that any lost messages are recaptured.

To start from the end of the log without triggering a load (full refresh) of the target table, specify one of the following values, depending on your Db2 version:
Version 9.7 and below
lsn=FFFF:FFFF:FFFF:FFFF:FFFF and maxcmtseq=FFFF:FFFF:FFFF:FFFF:FFFF.
Version 10.1 or higher with compatibility of 1001 or higher, or Version 9.8
lsn=FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF and maxcmtseq=FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF.

You can also specify lsn and maxcmtseq without colons to save space.

Db2 for z/OS data sharing: If the source database is in a Db2 for z/OS data-sharing group, you can specify a timestamp with the lsn and maxcmtseq parameters and the Q Capture program will start reading from the log record that corresponds to the timestamp. The timestamp must be in 24-hour Coordinated Universal Time (UTC), formerly known as Greenwich Mean Time (GMT), and not local time. The format is yyyy-mm-dd-hh.mm.ss.nnnnnn.

For example, to start reading the log from 1:30:01.000001 p.m. on Sept. 4, 2015, and commit the first transaction that is seen by Q Capture:

lsn=2015-09-04-13.30.01.000001 maxcmtseq=0
To start reading the log from 1:30:01.000001 p.m. on Sept. 4, 2015 when you know that Q Capture has already published up to LSN 00CE:1234:5678:9123:4500:0001:
lsn=2015-09-04-13.30.01.000001 maxcmtseq=00CE:1234:5678:9123:4500:0000:0000:0001

Restriction: The lsn parameter is invalid when the source is a partitioned database.
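Starting from the end of the log without a target reload might be sketched as follows, using the Version 10.1+ values documented above (server and schema names are placeholders):

```shell
# Warm-start from the end of the log without triggering a load
# (Db2 V10.1 or higher with compatibility 1001+, or V9.8).
asnqcap capture_server=SAMPLE capture_schema=ASN \
  lsn=FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF \
  maxcmtseq=FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF
```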

max_trans

Default: max_trans=200

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The max_trans parameter specifies the maximum number of database transactions that Q Capture can publish during a single commit interval, as specified by the commit_interval parameter. The value that you set limits the number of MQ messages that Q Capture puts on send queues during the commit interval and can be used to tune MQ throughput. If the value is too low, MQ publishing might slow; if the value is too high, the commit interval might expire before the max_trans limit is reached. The default value of 200 should work well for most environments.

max_capstarts_intload

Default: max_capstarts_intload=0

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The max_capstarts_intload parameter specifies the maximum number of Q subscriptions that specify an internal load that can be started at the same time. When you specify a value of n, Q Capture delays activation of Q subscriptions to ensure that no more than n Q subscriptions are being started. You can use this parameter to limit excessive use of resources and shorten the time that it takes to load and start replicating a large number of tables on a busy database.

When a table subscription is started, Q Capture starts reading the log records for the table and sends them to Q Apply, which creates a spill queue for holding those log records until the load is done. When the load is done, the backlog of spilled changes is applied. Limiting the number of concurrent activations controls the number of spill queues that need to be created and the number of log records that might need to be spilled and then applied after the load is done. With max_capstarts_intload, Q Capture staggers the subscription activations, reducing the time for starting a large number of subscriptions on an active database. The default and minimum value is 0; there is no maximum value.
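Capping concurrent internal loads might look like this sketch (server and schema names, and the value 10, are illustrative):

```shell
# Allow at most 10 internal-load Q subscriptions to be starting at once;
# activation of additional Q subscriptions is delayed until a slot frees up.
asnqcap capture_server=SAMPLE capture_schema=ASN max_capstarts_intload=10
```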

maxcmtseq

Default: None

Method of changing: When Q Capture starts

The maxcmtseq parameter is used to specify the commit log record position of the last transaction that was successfully sent by the Q Capture program before shutdown. You can specify both the maxcmtseq and lsn parameters to start Q Capture from a known point in the Db2 log. When specifying the maxcmtseq parameter, you also must specify the lsn parameter in the same command invocation, and you cannot use these parameters if the value of startmode is cold.

The value of maxcmtseq is an internal log marker that is different for each type of database system. The marker is encoded as a 10-byte or 16-byte string:

z/OS
On z/OS, the value is the LSN of the commit log record, to which Q Capture might append a sequence number because on z/OS with data sharing several log records might have the same LSN.
Linux, UNIX, Windows
The value depends on the database level and configuration:
Version 9.8 and higher
An LSN of a commit log record, which is encoded as an 8-byte Log Flush Sequence followed by an 8-byte Log Sequence Offset (these two values combined are referred to in Db2 as a Log Record Identifier, or LRI).
Version 9.7 and lower or all versions with Db2 Database Partitioning Feature (DPF)
A timestamp with nanosecond precision that uniquely identifies a transaction. The value is encoded as two integers: seconds and nanoseconds.

You can find the value for maxcmtseq in one of these places:

  • From the restart message, by using the asnqmfmt command
  • From the Q Capture output log file, messages ASN7108I and ASN7109I
  • From the IBMQREP_APPLYMON table (OLDEST_COMMIT_LSN for z/OS sources and Linux, UNIX, and Windows sources V9.8 and later; OLDEST_COMMIT_SEQ for Linux, UNIX, and Windows sources V9.7 and earlier or for any version with DPF enabled).
To start from the end of the log without triggering a load (full refresh) of the target table, specify one of the following values, depending on your Db2 version:
Version 9.7 and below
lsn=FFFF:FFFF:FFFF:FFFF:FFFF and maxcmtseq=0000:0000:0000:0000:0000
Version 10.1 or higher with compatibility of 1001 or higher, or Version 9.8
lsn=FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF and maxcmtseq=0000:0000:0000:0000:0000:0000:0000:0000

You can also specify lsn and maxcmtseq without colons to save space.
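The colon-separated and compact forms are equivalent. For illustration, a small helper (not part of the product; it only manipulates the hexadecimal string) can convert between them:

```python
def strip_colons(value: str) -> str:
    """Convert a colon-separated lsn or maxcmtseq value to its compact form."""
    return value.replace(":", "")

def add_colons(value: str, group: int = 4) -> str:
    """Re-insert colons every `group` hex digits for readability."""
    return ":".join(value[i:i + group] for i in range(0, len(value), group))

# 10-byte end-of-log value for Version 9.7 and below
lsn = "FFFF:FFFF:FFFF:FFFF:FFFF"
print(strip_colons(lsn))  # compact form, also accepted by asnqcap
```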

DB2 for z/OS data sharing: If the source database is in a Db2 for z/OS data-sharing group, you can specify a timestamp with the lsn and maxcmtseq parameters and the Q Capture program will start reading from the log record that corresponds to the timestamp. The timestamp must be in 24-hour Coordinated Universal Time (UTC), formerly known as Greenwich Mean Time (GMT), and not local time. The format is yyyy-mm-dd-hh.mm.ss.nnnnnn.

For example, to start reading the log from 1:30:01.000001 p.m. on Sept. 4, 2015, and commit the first transaction that is seen by Q Capture:

lsn=2015-09-04-13.30.01.000001 maxcmtseq=0
To start reading the log from 1:30:01.000001 p.m. on Sept. 4, 2015 when you know that Q Capture has already published up to LSN 00CE:1234:5678:9123:4500:0001:
lsn=2015-09-04-13.30.01.000001 maxcmtseq=00CE:1234:5678:9123:4500:0000:0000:0001

memory_limit

Default: memory_limit=0 on z/OS; 500 MB on Linux, UNIX, and Windows

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The memory_limit parameter specifies the amount of memory that a Q Capture program can use to build database transactions in memory. By default, the memory_limit is set to 0 on z/OS and the Q Capture program calculates a memory allocation that is based on the Q Capture region size in the JCL or started task. On Linux, UNIX, and Windows, a Q Capture program uses a maximum of 500 MB by default. When the memory amount allocated by this parameter is used, a Q Capture program spills in-memory transactions to a file that is located in the capture_path directory. On z/OS, the Q Capture program spills to VIO or to the file that is specified in the CAPSPILL DD card.

The maximum allowed value for this parameter is 100 GB.

You can adjust the memory limit based on your needs:

To improve the performance of a Q Capture program, increase the memory limit
If your goal is higher throughput, maximize the memory limit whenever possible.
To conserve system resources, lower the memory limit
A lower memory limit reduces competition with other system operations. However, setting the memory limit too low uses more space on your system for the spill file and prompts more I/O, which can slow your system.

You can use data in the IBMQREP_CAPMON table to find the best memory limit for your needs. For example, check the value for CURRENT_MEMORY to see how much memory a Q Capture program is using to reconstruct transactions from the log. Or, check the value for TRANS_SPILLED to find out how many transactions a Q Capture program spilled to a file when it exceeded the memory limit. You can use the Q Capture Throughput window in the Replication Center to check these values. See the Replication Center online help for details.
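As a rough illustration of this tuning loop, the following sketch applies a hypothetical rule of thumb (not an IBM formula) to the CURRENT_MEMORY and TRANS_SPILLED values from the IBMQREP_CAPMON table:

```python
def suggest_memory_limit(current_memory_bytes, trans_spilled, limit_mb):
    """Hypothetical rule of thumb: if transactions are spilling, raise the
    limit; if usage stays well below the limit, lower it to free resources."""
    used_mb = current_memory_bytes / (1024 * 1024)
    if trans_spilled > 0:
        return min(limit_mb * 2, 100 * 1024)   # cap at the 100 GB maximum
    if used_mb < limit_mb * 0.25:
        return max(int(limit_mb * 0.5), 32)
    return limit_mb

# 600 MB reconstructed, 12 spilled transactions, current limit 500 MB
print(suggest_memory_limit(600 * 1024 * 1024, 12, 500))  # spilling: raise
```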

migrate (Linux, UNIX, Windows)

Default: migrate=y

Method of changing: When Q Capture starts

The migrate parameter specifies that the Q Capture program starts from the beginning of the log after Db2 for Linux, UNIX, and Windows is upgraded. Use this option after you upgrade Db2 to a new release, such as Version 9.5 or Version 9.7. The migrate parameter is not required after you upgrade Db2 with a fix pack.

Use the migrate parameter only the first time that you start Q Capture after the upgrade, and specify startmode=warmns.

monitor_interval

Default: monitor_interval=60000 milliseconds (1 minute) on z/OS and 30000 milliseconds (30 seconds) on Linux, UNIX, and Windows

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The monitor_interval parameter tells a Q Capture program how often to insert performance statistics into two of its control tables. The IBMQREP_CAPMON table shows statistics for overall Q Capture program performance, and the IBMQREP_CAPQMON table shows Q Capture program statistics for each send queue.

With the default settings, rows are inserted into these tables every 60000 milliseconds (1 minute) on z/OS and every 30000 milliseconds (30 seconds) on Linux, UNIX, and Windows. Typically, a Q Capture program commits IBM® MQ transactions at a much shorter interval (the default commit interval is a half second). Thus, if you use the shipped defaults for the monitor interval and commit interval on z/OS, each insert into the monitor tables contains totals for 120 commits. If you want to monitor Q Capture activity at a more granular level, use a monitor interval that is closer to the commit interval.
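The relationship between the two intervals is simple integer arithmetic:

```python
def commits_per_monitor_insert(monitor_interval_ms, commit_interval_ms=500):
    """Roughly how many MQ commits each monitor-table row summarizes,
    assuming the default 500 ms commit interval."""
    return monitor_interval_ms // commit_interval_ms

print(commits_per_monitor_insert(60000))  # z/OS default monitor interval
print(commits_per_monitor_insert(30000))  # Linux, UNIX, Windows default
```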

Important for Q Replication Dashboard users: When possible, you should synchronize the Q Capture monitor_interval parameter with the dashboard refresh interval (how often the dashboard retrieves performance information from the Q Capture and Q Apply monitor tables). The default refresh interval for the dashboard is 10 seconds (10000 milliseconds). If the value of monitor_interval is higher than the dashboard refresh interval, the dashboard refreshes when no new monitor data is available.

monitor_limit

Default: monitor_limit=10080 minutes (7 days)

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The monitor_limit parameter specifies how old the rows must be in the IBMQREP_CAPMON and IBMQREP_CAPQMON tables before they are eligible for pruning.

By default, rows that are older than 10080 minutes (7 days) are pruned. The IBMQREP_CAPMON and IBMQREP_CAPQMON tables contain statistics about a Q Capture program's activity. A row is inserted at each monitor interval. You can adjust the monitor limit based on your needs:

Increase the monitor limit to keep statistics
If you want to keep records of the Q Capture program's activity beyond one week, set a higher monitor limit.
Lower the monitor limit if you look at statistics frequently
If you monitor the Q Capture program's activity on a regular basis, you probably do not need to keep one week of statistics and can set a lower monitor limit, which prompts more frequent pruning.

mqthread_bufsz

Default: mqthread_bufsz=0

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The mqthread_bufsz parameter specifies the size in kilobytes (KB) of a memory buffer that can be used to stage messages before they are written to MQ. When you specify parallel_mqthread=y, Q Capture uses a separate thread to publish messages from the MQ buffer, potentially improving performance. The value of mqthread_bufsz must be at least as large as the smallest message that is put to MQ. The minimum non-zero value is 50 KB, and the maximum value is 100 MB.
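A hypothetical range check for the documented limits (values are in KB, with 0 meaning the buffer is not used) might look like this:

```python
MIN_KB = 50              # minimum non-zero value, per the parameter description
MAX_KB = 100 * 1024      # 100 MB maximum, expressed in KB

def validate_mqthread_bufsz(kb: int) -> bool:
    """Hypothetical check of the documented range: 0, or 50 KB to 100 MB."""
    return kb == 0 or MIN_KB <= kb <= MAX_KB

print([validate_mqthread_bufsz(v) for v in (0, 49, 50, 102400, 102401)])
```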

msg_persistence

Default: msg_persistence=y

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The msg_persistence parameter specifies whether a Q Capture program writes persistent (logged) or nonpersistent (unlogged) data messages to MQ queues. (Data messages contain replicated data from source tables.) By default, Q Capture uses persistent data messages. The queue manager logs persistent messages and can recover the messages after a system failure or restart.

In some cases you might want to avoid the CPU and storage overhead of persistent messages. In that case, specify msg_persistence=n.

The Q Capture and Q Apply programs always put persistent messages onto their administration queues. Q Capture always puts persistent messages onto its restart queue. Therefore, logging of messages on these queues is not affected by the setting for msg_persistence.

If you created a send queue with the DEFPSIST(YES) option so that the queue carries persistent messages by default, you can still specify msg_persistence=n and Q Capture sends nonpersistent messages, which overrides the queue default.

You can also specify msg_persistence=d and Q Capture honors the MQ default persistence setting for each send queue, sending persistent messages for queues that are created with DEFPSIST(YES) and nonpersistent messages for queues that are created with DEFPSIST(NO). This setting can be useful for non-critical applications when you are willing to trade off recoverability for better performance (high throughput).

If you choose nonpersistent messages and data messages are lost because of a problem that forces the queue manager to restart, you need to do one of the following things to keep the source and target tables synchronized:

  • Stop and start the Q Capture program and specify a known point in the recovery log so that Q Capture rereads the log at a point before the messages were lost.
  • Stop and start the Q subscription to prompt a new load (full refresh) of the target table.

nmi_enable (z/OS)

Default: nmi_enable=n

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The nmi_enable parameter specifies whether the Q Capture program is enabled to provide a Network Management Interface (NMI) for monitoring Q Replication statistics from IBM Tivoli® NetView® Monitoring for GDPS®. The NMI client application must be on the same z/OS system as the Q Capture program. By default (nmi_enable=n), the interface is not enabled.

When you specify nmi_enable=y, the Q Capture program acts as an NMI server and listens on a socket that is specified by the nmi_socket_name parameter for client connection requests and data requests. Q Capture can support multiple client connections, and has a dedicated thread to interact with NMI clients. The thread responds to requests in the order that they arrived.

nmi_socket_name (z/OS)

Default: nmi_socket_name=/var/sock/group-attach-name_capture-schema_asnqcap

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The nmi_socket_name parameter specifies the name of the AF_UNIX socket where the Q Capture program listens for requests for statistical information from NMI client applications. You can specify this parameter to change the socket name that the program automatically generates. The socket file is generated in the directory /var/sock. The socket name is constructed by combining the file path, group attach name, Q Capture schema name, and the program name (asnqcap). An example socket name is /var/sock/V91A_ASN_asnqcap.

To use this parameter you must specify nmi_enable=y.

After a Q Capture program is started with either a default or a user-defined NMI socket name, the name cannot be changed dynamically.

override_restartq

Default: override_restartq=n

Method of changing: When Q Capture starts

The override_restartq parameter specifies whether the Q Capture program gets its warm start information from a data set or file rather than from the restart message. By default, Q Capture gets its restart information from the restart message. If you specify override_restartq=y, Q Capture reads the restart file when it starts, and gets the starting point for each send queue and data partition from the file. The file is saved in the directory that is specified by the capture_path parameter, or in the directory from which Q Capture was started if nothing is specified for capture_path. The data set or file name differs by platform:

z/OS
capture_server.capture_schema.QCAP.QRESTART
Linux, UNIX, Windows
db2_instance.capture_server.capture_schema.QCAP.QRESTART
Note: To use override_restartq=y, the user ID that starts Q Capture must have the authorization to open and read from the restart file.

z/OS: You can also specify override_restartq=d to prompt the Q Capture program to delete the current restart message when Q Capture starts and generate a new restart message that is based on the contents of the restart file. You should use this option for upgrades and migrations because the Q Capture program cannot read old restart messages.
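For illustration, the documented naming convention can be expressed as a small helper; the function and its arguments are hypothetical:

```python
def restart_file_name(platform, capture_server, capture_schema,
                      db2_instance=None):
    """Build the restart data set or file name per the documented convention."""
    if platform == "z/OS":
        return f"{capture_server}.{capture_schema}.QCAP.QRESTART"
    # Linux, UNIX, Windows: the Db2 instance name is prepended
    return f"{db2_instance}.{capture_server}.{capture_schema}.QCAP.QRESTART"

print(restart_file_name("LUW", "SAMPLE", "ASN1", db2_instance="db2inst1"))
```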

parallel_mqthread (z/OS)

Default: parallel_mqthread=n

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The parallel_mqthread parameter is used on z/OS to specify whether the Q Capture program should use a separate thread to publish messages to MQ. By default (parallel_mqthread=n), the Q Capture worker thread performs all MQ operations in addition to transforming completed transactions from the log into MQ messages. With parallel_mqthread=y, a separate MQ publish thread is used to put messages to MQ. When this option is specified, Q Capture also creates a separate memory buffer that the publish thread uses to stage MQ messages. The size of the buffer is governed by the mqthread_bufsz parameter.

Using a separate publish thread can potentially improve throughput performance by double buffering MQ operations, allowing the worker thread to format one buffer of MQ messages while the publish thread writes the previous buffer to the various send queues.

part_hist_limit (Linux, UNIX, Windows)

Default: part_hist_limit=10080 minutes (seven days)

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The part_hist_limit parameter specifies how long you want old data to remain in the IBMQREP_PART_HIST table before the data becomes eligible for pruning. This parameter also controls how far back in the log you can restart the Q Capture program because Q Capture uses IBMQREP_PART_HIST to determine what log records to read for a partitioned source table.

The Q Capture program uses the partition history that is stored in the IBMQREP_PART_HIST table to help handle partition changes such as add, attach, or detach. One row, identified by a log sequence number (LSN), is inserted for each partition in the source table on two occasions:

  • The first Q subscription or subscription-set member for the table is activated.
  • The table is altered to add, attach, or detach a partition.

If you are replicating partitioned tables on Linux, UNIX, or Windows, ensure that the part_hist_limit value is large enough to allow for possible recapture of past log records in the case of a system failure or other outage.

prune_interval

Default: prune_interval=300 seconds (5 minutes)

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The prune_interval parameter determines how often a Q Capture program looks for eligible rows to prune from the IBMQREP_CAPMON, IBMQREP_CAPQMON, IBMQREP_SIGNAL, and IBMQREP_CAPTRACE tables. By default, a Q Capture program looks for rows to prune every 300 seconds (5 minutes).

Your pruning frequency depends on how quickly these control tables grow, and what you intend to use them for:

Shorten the prune interval to manage monitor tables
A shorter prune interval might be necessary if the IBMQREP_CAPMON and IBMQREP_CAPQMON tables are growing too quickly because of a shortened monitor interval. If these and other control tables are not pruned often enough, they can exceed their table space limits, which forces a Q Capture program to stop. However, if the tables are pruned too often or during peak times, pruning can interfere with application programs that run on the same system.
Lengthen the prune interval for record keeping
You might want to keep a longer history of a Q Capture program's performance by pruning the IBMQREP_CAPTRACE and other tables less frequently.

The prune interval works in conjunction with the trace_limit, monitor_limit, and signal_limit parameters, which determine when data is old enough to prune. For example, if the prune_interval is 300 seconds and the trace_limit is 10080 minutes, a Q Capture program tries to prune every 300 seconds. If the Q Capture program finds any rows in the IBMQREP_CAPTRACE table that are older than 10080 minutes (7 days), it prunes them.
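The eligibility rule can be sketched as a simple predicate; the function name is hypothetical:

```python
def eligible_for_pruning(row_age_minutes, limit_minutes=10080):
    """A row is prunable once it is older than the applicable limit
    (trace_limit, monitor_limit, or signal_limit; default 7 days)."""
    return row_age_minutes > limit_minutes

# At each prune interval, only rows past the limit are removed
ages = [60, 10080, 10081, 20000]
print([eligible_for_pruning(a) for a in ages])
```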

pwdfile

Default: pwdfile=capture_path/asnpwd.aut

Method of changing: When Q Capture starts

The pwdfile parameter specifies the name of the password file that is used to connect to multiple-partition databases. If you do not use the pwdfile parameter to specify the name of a password file, the Q Capture program looks for a file with the name of asnpwd.aut in the directory that is specified by the capture_path parameter. If no capture_path parameter is specified, this command searches for the password file in the directory from which the command was invoked.

You can create a password file by using the asnpwd command. Use the following example to create a password file with the default name of asnpwd.aut in the current directory: asnpwd INIT.

qfull_num_retries

Default: qfull_num_retries=30

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

You can use the qfull_num_retries parameter to specify the number of times that a Q Capture program tries to put a message on a queue when the initial MQPUT operation fails because the queue is full. The default is 30 retries, and the maximum is 1,000 retries. A value of 0 instructs the Q Capture program to stop whenever an MQPUT operation fails. The alternate syntax for the qfull_num_retries parameter is RN.

For example, the following command specifies that a Q Capture program retry the MQPUT operation 100 times before stopping:

asnqcap capture_server=SAMPLE capture_schema=ASN1 qfull_num_retries=100
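The documented retry behavior can be modeled as follows. This is an illustrative sketch, not the actual Q Capture implementation, and fake_mqput is a hypothetical stand-in for the real MQPUT call:

```python
import time

def put_with_retries(mqput, message, num_retries=30, retry_delay_ms=250):
    """Sketch of the documented behavior: retry a full-queue MQPUT up to
    num_retries times, then give up (the caller would stop Q Capture).
    mqput is any callable returning True on success, False when full."""
    for attempt in range(num_retries + 1):   # initial try plus the retries
        if mqput(message):
            return True
        if attempt < num_retries:
            time.sleep(retry_delay_ms / 1000.0)
    return False

# A queue that is full for the first two attempts, then accepts the message
attempts = {"n": 0}
def fake_mqput(msg):
    attempts["n"] += 1
    return attempts["n"] > 2

print(put_with_retries(fake_mqput, "row-change", num_retries=5,
                       retry_delay_ms=1))
```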

qfull_retry_delay

Default: qfull_retry_delay=250 milliseconds

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

You can use the qfull_retry_delay parameter to specify how long in milliseconds the Q Capture program waits between MQPUT attempts when the initial MQPUT operation fails because the queue is full. The allowed value range is 10 milliseconds to 3600000 milliseconds (1 hour). The default delay is 250 milliseconds or the value of the commit_interval parameter, whichever is less. (The default for commit_interval is 500 milliseconds.) If the specified value is larger than the commit_interval value, it is lowered to the commit_interval value. The alternate syntax for the qfull_retry_delay parameter is RD.

For example, the following command specifies that a Q Capture program retry the MQPUT operation 50 times before stopping, with a delay of 10000 milliseconds (10 seconds):

asnqcap capture_server=SAMPLE capture_schema=ASN1 qfull_num_retries=50
qfull_retry_delay=10000
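The interaction with commit_interval amounts to taking the smaller of the two values; a minimal sketch:

```python
def effective_retry_delay(qfull_retry_delay_ms, commit_interval_ms=500):
    """The documented rule: the delay is lowered to commit_interval
    when the specified value exceeds it."""
    return min(qfull_retry_delay_ms, commit_interval_ms)

print(effective_retry_delay(250))    # default: stays at 250 ms
print(effective_retry_delay(10000))  # clamped to the 500 ms commit interval
```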

reinit_on_outorder_log

Default: reinit_on_outorder_log=n

The reinit_on_outorder_log parameter specifies the action that the Q Capture program takes when the Db2 log read API returns out-of-order log records (a lower LSN following a higher LSN). This is an abnormal and unexpected situation: because Q Capture skips the out-of-sequence log records, changes that involve them are not replicated, which can result in data loss at the target table.

By default (reinit_on_outorder_log=n), when Q Capture detects out-of-order log records, it issues warning message ASN0725W, adds diagnostic data to the log, and stops.

Important: Normally you would not set this parameter unless instructed by IBM Support. If you get an out-of-order message from Q Capture, contact support.

You can specify the runtime parameter reinit_on_outorder_log=y. With this setting, when Q Capture detects out-of-order log records it follows these steps:

  • First detection: Issues warning message ASN0725W and adds diagnostic data to the log, rolls back any uncommitted MQ transactions, then performs a reinit and restarts log reading from a previous point in the log.
  • Second detection: Issues warning message ASN0725W and adds diagnostic data to the log, then stops.

Specifying reinit_on_outorder_log=y is a workaround that allows the Q Capture program to continue if an issue occurs with the log read interface. In many cases, when Q Capture restarts reading the log from a previous point, the log records are returned in the correct order. Nonetheless it is recommended that you verify database integrity by using the asntdiff utility to compare the affected source and target tables for discrepancies.

reprocess_signals

Default: reprocess_signals=n

Method of changing: When Q Capture starts

The reprocess_signals parameter specifies whether the Q Capture program, when restarted from a historical position in the recovery log, reprocesses IBMQREP_SIGNAL table inserts that Q Capture saw and processed on its earlier run through the log.

By default, Q Capture only processes signals once and ignores them if they are encountered again. In some cases, you might need to replay an entire workload, for example when testing. In this scenario, you can specify that all previous signals be reprocessed by specifying reprocess_signals=y.

Using reprocess_signals=n enables you to start Q Capture from a previous point in the log and have the program ignore signals that it already processed such as CAPSTART, CAPSTOP, or STOP.

signal_limit

Default: signal_limit=10080 minutes (7 days)

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The IBMQREP_SIGNAL table contains signals inserted by a user or a user application. It also contains corresponding signals that are inserted by a Q Capture program after it receives control messages from the Q Apply program or a user application. The Q Capture program sees the signals when it reads the log record for the insert into the IBMQREP_SIGNAL table.

These signals tell a Q Capture program to stop running, to activate or deactivate a Q subscription or publication, to ignore a database transaction in the log, or to invalidate a send queue. In addition, the LOADDONE signal tells a Q Capture program that a target table is loaded.

You can adjust the signal limit depending on your environment. Shorten the limit to manage the size of the IBMQREP_SIGNAL table. For bidirectional Q Replication, the Q Apply program inserts a signal into the IBMQREP_SIGNAL table for every transaction that it receives and applies to make sure that the Q Capture program does not recapture the transaction. If you have a large number of bidirectional Q subscriptions, the table might grow large and you might want to lower the default signal limit so that it can be pruned more frequently. Lengthen the limit to use this table for record-keeping purposes.

sleep_interval

Default: sleep_interval=500 milliseconds (0.5 seconds) for Db2 sources; 2000 milliseconds (2 seconds) for Oracle sources

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The sleep_interval parameter specifies the number of milliseconds that a Q Capture program waits after reaching the end of the active log and assembling any transactions that remain in memory.

By default, a Q Capture program sleeps for 500 milliseconds (0.5 seconds). After this interval, the program starts reading the log again. You can adjust the sleep interval based on your environment:

Lower the sleep interval to reduce latency
A smaller sleep interval can improve performance by lowering latency (the time that it takes for a transaction to go from source to target), reducing idle time, and increasing throughput in a high-volume transaction environment.
Increase the sleep interval to save resources
A larger sleep interval gives you potential CPU savings in an environment where the source database has low traffic, or where targets do not need frequent updates.

stale

Default: stale=3600

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The stale parameter specifies the number of seconds that the Q Capture program waits before issuing a warning message or taking other action after it detects a long-running transaction with no commit or rollback log record. The program behavior depends on the platform of the source. On z/OS, Q Capture issues warning messages if it has not seen a commit or rollback record for one hour (stale=3600). On both z/OS and Linux, UNIX, and Windows, if a transaction has been running for the number of seconds that is specified by stale and Q Capture saw no row operations in the log for the transaction, it issues warning messages and advances the log sequence number that it considers to be the oldest "in-flight" transaction that was not committed or rolled back. If some rows were captured for the transaction, only warning messages are issued.

startallq

Default: startallq=y

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The startallq parameter specifies whether the Q Capture program activates all send queues during startup. You can use this parameter to keep a disabled send queue inactive.

By default, when Q Capture starts it activates all send queues that are not already in active (A) state. If you specify startallq=n, when Q Capture starts it does not activate send queues that are in inactive (I) state.

startmode

Default: startmode=warmsi

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The startmode parameter specifies the steps that a Q Capture program takes when it starts. The program starts in either warm or cold mode. With a warm start, the Q Capture program continues capturing changes where it left off after its last run (there are two types of warm start). With a cold start, the program starts reading at the end of the log. Choose the start mode that fits your environment:

cold
The Q Capture program clears the restart queue and administration queue, and starts processing all Q subscriptions or publications that are in N (new) or A (active) state. With a cold start, the Q Capture program starts reading the Db2 recovery log or Oracle redo log at the end.

Generally, use a cold start only the first time that you start a Q Capture program. Warmsi is the recommended start mode. You can use a cold start if this is not the first time that you started a Q Capture program, but you want to begin capturing changes from the end of the active log instead of from the last restart point. On a cold start, Q Capture stops and then starts unidirectional Q subscriptions that have a load phase specified. For bidirectional and peer-to-peer replication, Q Capture only stops the Q subscriptions on a cold start. To start these Q subscriptions, you must use the replication administration tools or insert a CAPSTART signal for each Q subscription that you want to start.

Important: To avoid unwanted cold starts, be sure that this start mode is not specified in the IBMQREP_CAPPARMS table.
warmsi (warm start; switch first time to cold start)
The Q Capture program starts reading the log at the point where it left off, except if this is the first time that you are starting it. In that case the Q Capture program switches to a cold start. The warmsi start mode ensures that a Q Capture program starts in cold mode only when it initially starts.
warmns (warm start; never switch to cold start)
The Q Capture program starts reading the log at the point where it left off. If it cannot warm start, it does not switch to cold start. Use this start mode to prevent a Q Capture program from cold starting unexpectedly. This start mode allows you to repair problems (such as unavailable databases or table spaces) that are preventing a warm start. With warmns, if a Q Capture program cannot warm start, it shuts down and leaves all tables intact.

During warm starts, the Q Capture program loads only those Q subscriptions or publications that are not in I (inactive) state.

suppress_empty_noval

Default: suppress_empty_noval=n

Method of changing: When Q Capture starts

The suppress_empty_noval parameter can be used to suppress replication of updates that set a column default to an empty value after an ADD COLUMN operation without a REORG.

By default (suppress_empty_noval=n), these updates are replicated because the before and after values in the log record are not the same. Replicating these changes is typically not meaningful and can result in replication of huge transactions.

If you specify suppress_empty_noval=y, when a column with a default value is added to a table and later changed to either a string of length 1 that contains a single space or length 0, or a 0 value for numeric types, the change is not replicated.

The suppress_empty_noval parameter applies to VARCHAR, DECIMAL, INTEGER, GRAPHIC, and BIGINT data types.

In Db2, when you add a new column with a default value, the column is added to the schema of the table, but each row is not physically updated on disk with the default value until a REORG operation is performed or the row is updated. An application might use a Db2 MERGE statement to initialize the new columns to a value that indicates no value (0 for integer and decimal, " " for the VARCHAR data types). Using the MERGE statement is much faster than a REORG because it only initializes the column and does not move data pages on disk. However, Db2 writes an update log record for each row in which the before value is either a NULL or empty value, and the after value can be 0 for numeric types, and either a string of length 0 or a string of length 1 that contains the single space character for string types.

You might not want such changes to be replicated because the transaction is so large that it exhausts system resources.
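The "empty" after-values that suppress_empty_noval=y filters out can be sketched as a predicate; the function is hypothetical and simplifies the per-type rules described above:

```python
def is_empty_no_value(after_value):
    """Hypothetical check for the 'empty' values the parameter describes:
    a zero-length string, a single-space string, or numeric 0."""
    if isinstance(after_value, str):
        return after_value == "" or after_value == " "
    return after_value == 0

# Empty string, single space, and numeric 0 would be suppressed
print([is_empty_no_value(v) for v in ("", " ", "x", 0, 7)])
```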

term

Default: term=y

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The term parameter controls whether a Q Capture program keeps running when Db2 or the queue manager are unavailable.

By default (term=y), the Q Capture program terminates when Db2 or the queue manager is unavailable. When term=y is specified and Db2 is unavailable, Q Capture terminates with a non-zero return code. Specify term=n if you want a Q Capture program to keep running while Db2 or the queue manager is unavailable. When Db2 or the queue manager becomes available again, Q Capture begins sending transactions where it left off without requiring you to restart the program.

Restrictions:
  • Regardless of the setting for term, if the MQ sender or receiver channels stop, the Q Capture program keeps running because it cannot detect channel status. This situation causes replication to stop because the two queue managers cannot communicate. If you find that replication has stopped without any messages from the Q Replication programs, check the channel status by using the MQ DISPLAY CHSTATUS command.
  • If the error action for the send queue is to stop (ERROR_ACTION column value of S in the IBMQREP_SENDQUEUES table), the Q Capture program stops on a send queue error even if term=n.
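Because Q Capture cannot detect stopped channels, you can check channel status directly with the MQ command interface. A sketch, assuming a queue manager named QM1 and a channel named Q.CAPTURE.CHANNEL (both placeholder names):

```shell
# Check the status of the channel between the two queue managers when
# replication appears stalled with no messages from the Q Replication
# programs. QM1 and Q.CAPTURE.CHANNEL are placeholder names.
echo "DISPLAY CHSTATUS(Q.CAPTURE.CHANNEL)" | runmqsc QM1
```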

trace_limit

Default: trace_limit=10080 minutes (7 days)

Methods of changing: When Q Capture starts; while Q Capture is running; IBMQREP_CAPPARMS table

The trace_limit parameter specifies how long rows remain in the IBMQREP_CAPTRACE table before they can be pruned.

The Q Capture program inserts all informational, warning, and error messages into the IBMQREP_CAPTRACE table. By default, rows that are older than 10080 minutes (7 days) are pruned at each pruning interval. Modify the trace limit depending on your need for audit information.
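As a sketch (SAMPLE and ASN are placeholder names), the trace limit could be lowered to 3 days (4320 minutes) at startup, or changed while Q Capture is running by using the asnqccmd command:

```shell
# Hypothetical examples; SAMPLE and ASN are placeholder names.
# Set a 3-day (4320-minute) trace limit when starting Q Capture:
asnqcap capture_server=SAMPLE capture_schema=ASN trace_limit=4320
# Change the trace limit while Q Capture is running:
asnqccmd capture_server=SAMPLE capture_schema=ASN chgparms trace_limit=4320
```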

trans_batch_info

Default: trans_batch_info=n

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The trans_batch_info parameter determines whether Q Capture sends transaction information for all transactions in a batch or for only the last transaction. This parameter applies only when trans_batch_sz is set to a value greater than 1.

By default, trans_batch_info=n, Q Capture sends the SRC_TRANS_ID, AUTHID, AUTHTOKEN, and PLANNAME for only the last transaction. If an exception or error occurs while applying a row to the target, the transaction information for the error in the IBMQREP_EXCEPTIONS table might not be reliable. With trans_batch_info=y, the exception report contains the correct COMMITSEQ and URID information, and for consistent-change data (CCD) target tables the values for the IBMSNAP_COMMITSEQ and IBMSNAP_LOGMARKER columns are accurate.

This option adds about 86 bytes for each transaction within each MQ message. For example, if trans_batch_sz=10, each message would be about 860 bytes larger.
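The per-message overhead scales linearly with the batch size; this small sketch computes the approximate extra bytes for a given trans_batch_sz (the 86-byte figure comes from the paragraph above):

```shell
# Approximate extra bytes per MQ message when trans_batch_info=y:
# about 86 bytes of transaction information per transaction in the batch.
trans_batch_sz=10
overhead=$((86 * trans_batch_sz))
echo "$overhead"   # prints 860
```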

trans_batch_sz

Default: trans_batch_sz=1

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The trans_batch_sz parameter specifies the number of source database transactions that Q Capture groups together in an MQ message. You can use this parameter to reduce CPU consumption at the source when the replication workload typically consists of small transactions, for example, one or two rows.

The default is 1 (no batching) and the maximum batch size is 128. Q Capture respects the value of trans_batch_sz unless the Q Capture commit interval is reached before the number of transactions reaches this value, or unless the total size of transactions in the batch exceeds the value of the max_message_size parameter for the send queue or the value of the MQ maximum message length (MAXMSGL) for the send queue.

Notes:
  • To use a value greater than 1 for trans_batch_sz, the value of lob_send_option must be I.
  • When trans_batch_sz is greater than 1, Q Capture sends several transactions as one to Q Apply but sends the transaction information (SRC_TRANS_ID, AUTHID, AUTHTOKEN, and PLANNAME) for only the last transaction in the batch unless you specify trans_batch_info=y. As a result, if an exception or error occurs while applying a row to the target, the transaction information for the error in the IBMQREP_EXCEPTIONS table might not be reliable.
  • Using this parameter in combination with other parameters or consistent-change data (CCD) targets might produce undesired results. For details, see the trans_batch_sz parameter.
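A sketch of a start command that enables batching (SAMPLE and ASN are placeholder names; lob_send_option=I is required for batching, and trans_batch_info=y keeps per-transaction details for exception reporting):

```shell
# Hypothetical invocation: batch up to 16 small transactions per MQ message.
asnqcap capture_server=SAMPLE capture_schema=ASN \
  trans_batch_sz=16 trans_batch_info=y lob_send_option=I
```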

trans_commit_mode (z/OS)

Default: trans_commit_mode=1

Methods of changing: When Q Capture starts

The trans_commit_mode parameter specifies which commit log record the Q Capture program uses to determine when a transaction has completed and can be published.

Commit processing by the Db2 transaction manager might consist of several phases. In very simple terms, phase 1 writes the commit log record to the log, and phase 2 releases any locks. In data sharing, committed data might not be visible from all members until phase 2 of the commit process is completed. For example, if an application inserts data on member 1 and you issue a SELECT statement for that data from member 2, the data might not yet be visible even though phase 1 of the commit process has completed.

By default (trans_commit_mode=1), Q Capture publishes transactions after phase 1. You can specify trans_commit_mode=2 to instruct Q Capture to wait until it reads and processes the log record that signals the end of phase 2 before it sends the transaction. This setting might be required in the following situations:

  • An ALTER operation was performed on replicated tables in a different data-sharing member than where Q Capture is running. In this situation, the SYSIBM.SYSCOLUMNS updates that Q Capture uses to replicate the ALTER might not be visible from all members until the end of phase 2 of the commit process.
  • When processing a transaction that contains large object (LOB) or XML data, after phase 1 of the commit process the LOB or XML data might not be visible from the member where Q Capture is running. In this case, Q Capture would replicate a NULL value for the XML or LOB data in that row. By waiting until the end of phase 2, the LOB or XML data is guaranteed to be visible from all members.
  • For bidirectional or multi-unidirectional replication, if you are updating in both directions, trans_commit_mode=2 might be needed to ensure that modified data is visible from the member where Q Capture is running.
Note: Using trans_commit_mode=2 can result in longer latency for a transaction. It is advisable to test this setting before using it in production environments.
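A sketch of starting Q Capture so that it publishes only after phase 2 of commit (DSN1 and ASN are placeholder server and schema names):

```shell
# Hypothetical invocation: wait for the end of phase 2 of the commit
# process before publishing transactions (may increase latency).
asnqcap capture_server=DSN1 capture_schema=ASN trans_commit_mode=2
```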

USE_STREAM_TRANS (Linux, AIX, and Windows only)

Syntax: USE_STREAM_TRANS=Y

Default: USE_STREAM_TRANS=N

By default, Q Capture waits for a transaction to be committed before propagating it. A streamed transaction is a transaction that is applied before it is known whether it will be committed or rolled back. Only transactions that contain at least one multi-row SQL operation (such as an insert from an external table) on a column-organized table can be streamed. Transactions that contain only changes to row-organized tables are not streamed.

If USE_STREAM_TRANS=Y, Q Capture streams eligible transactions. Streaming provides better performance for very large transactions on column-organized tables. With streaming, these transactions can run concurrently at the source and target databases.
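A sketch of enabling streaming at startup (SAMPLE and ASN are placeholder names; the parameter case follows this document):

```shell
# Hypothetical invocation: stream eligible large transactions on
# column-organized tables instead of waiting for commit.
asnqcap capture_server=SAMPLE capture_schema=ASN USE_STREAM_TRANS=Y
```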

warnlogapi (z/OS)

Default: warnlogapi=0

Methods of changing: When Q Capture starts; IBMQREP_CAPPARMS table

The warnlogapi parameter specifies the number of milliseconds that the Q Capture program waits for the Db2 for z/OS instrumentation facility interface (IFI) or Db2 for Linux, UNIX, and Windows log read API to return log records before Q Capture prints a warning to the standard output.

warntxsz

Default: warntxsz=0 MB

Methods of changing: When Q Capture starts

You can use the warntxsz parameter to detect very large transactions. This parameter prompts the Q Capture program to issue a warning message when it encounters a transaction that is larger than a specified size. You provide a threshold value in megabytes, and Q Capture issues an additional warning each time the transaction size crosses another multiple of the warntxsz value. For example, if you set warntxsz to 10 MB and Q Capture encounters a 30 MB transaction, three warnings are issued (one for 10 MB, one for 20 MB, and one for 30 MB).

Q Capture issues the ASN0664W message when the size of a transaction exceeds the warntxsz value. The default value of 0 MB means warnings are never issued.

Note: Q Capture does not issue these warnings for transactions that were spilled to a file after exceeding the MEMORY_LIMIT threshold.
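The warning count described above follows directly from integer division of the transaction size by the threshold; a small sketch:

```shell
# Number of ASN0664W warnings for a transaction of a given size:
# one warning each time the size crosses another multiple of warntxsz.
warntxsz=10    # threshold in MB
txsize=30      # transaction size in MB
warnings=$((txsize / warntxsz))
echo "$warnings"   # prints 3
```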