Configuring Optim High Performance Unload

Use the db2hpu.cfg file in the cfg directory to configure Optim™ High Performance Unload. The settings in the db2hpu.cfg file allow you to override the built-in defaults for Optim High Performance Unload and change them to suit your needs.

A typical db2hpu.cfg file is shown in the following example:
# HPU default configuration
bufsize=4194304
db2dbdft=SAMPLE
db2instance=db2inst2
netservice=db2hpudm42
doubledelim=binary
The lines that begin with a pound symbol (#) are comments. Each parameter is defined on one line in the format:
keyword=value
Spaces are also supported in the configuration file, so you can include spaces around the equal sign (=).
For configuration file parameters that support a list of values, the syntax is different for different platforms. On UNIX and Linux systems, the format is:
keyword=value:subvalue, value:subvalue
On Windows systems, the format is:
keyword=value;subvalue, value;subvalue

Configuration file parameters

Use the following parameters to configure the defaults for your Optim High Performance Unload installation:
allow_unlimited_memory
If you want to allow Optim High Performance Unload to ignore ulimit values that limit memory consumption for the associated shell, set this parameter to yes. You can use this parameter if an unload fails because of memory limit restrictions. The default value is no. Setting allow_unlimited_memory to yes is a prerequisite for effectively removing the memory limit by setting memory_limit to no.
Restriction: This parameter applies to Linux and UNIX platforms only and can only be specified in the master db2hpu.cfg file and not in any user-specified configuration files.
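For example, to permit ulimit values to be ignored, the master db2hpu.cfg file could contain the following setting:
...
allow_unlimited_memory=yes
...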
blu_parallelism
Use this parameter to set the number of columns of a column-organized table that are processed in parallel. By default, when unloading several columns of a column-organized table, the number of columns processed in parallel is determined automatically. This parameter allows you to override that number with a value of your choice. The value must be numeric. Here is an example of its setting:
...
blu_parallelism=10
...
See the information about performance when unloading a column-organized table for more details.
bufsize
Use this parameter to define the default buffer size that Optim High Performance Unload uses when it generates the output file. The value is the actual number of bytes used for this buffer. The minimum accepted value is 262144 (256 KB) and the maximum accepted value is 2097152 (2 MB). A lower value uses less memory during processing, but decreases I/O efficiency. The maximum is limited to 2 MB because a larger value would use more memory without a noticeable gain in I/O efficiency. If the output file is on the same physical disk as the file that Optim High Performance Unload reads (container or backup), a larger value might improve the performance of Optim High Performance Unload.
For example, if you have 4 available processors and you want to unload TABLE1, TABLE2, and TABLE3 with the option bufsize=262144, Optim High Performance Unload processes each table consecutively with 4 processing threads and write buffers of 256 KB each.
db2api_monitoring
For Linux and UNIX platforms only. The Optim High Performance Unload FLUSH BUFFERPOOLS and LOCK options involve calling Db2® APIs that acquire Db2 locks, and the locks acquired when using these options do not respect any lock monitoring condition. Use the db2api_monitoring parameter to set a time limit, in seconds, after which Optim High Performance Unload generates informational messages to indicate that its processing is waiting on Db2 locks before it can proceed. The default value is 0. If no monitoring is configured, or the parameter value is set to 0, the monitoring mechanism on the Db2 APIs for locking and flushing of buffer pools is disabled. Here is an example using the parameter:
...
db2api_monitoring=120
...
When the mechanism is enabled, an informational message is sent each time the monitoring value is reached. The message specifies the operation for which the monitoring occurred. The operations are identified by one of the following strings:
  • LOCK: This operation is performed before the data unload, to block updates against the table concerned during the Optim High Performance Unload process (when the LOCK YES option is set).
  • LOCK RESET: This operation is performed after the data unload, to release the lock of the table (when the LOCK YES option is set).
  • FLUSH BUFFERPOOLS: This operation is performed before the data unload, to register and flush on disk the data in buffer-pools (when the FLUSH BUFFERPOOLS YES option is set).
  • FLUSH BUFFERPOOLS RESET: This operation is performed after the data unload, to commit the registration of data in buffer-pools (when the FLUSH BUFFERPOOLS YES option is set).
For example, this Optim High Performance Unload execution report illustrates a situation where the monitoring time is reached while a LOCK operation is performed:
INZM031I Optim High Performance Unload for Db2 06.01.00.001(120531) 
         64 bits 06/04/2012 (Linux lat194 x86_64)
INZI473I Memory limitations: 'unlimited' for virtual memory and 'unlimited' for data segment
       ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8----+----9----+
000001 GLOBAL CONNECT TO SAMPLE; 
000002 UNLOAD TABLESPACE 
000003 PART(ALL) 
000004 SELECT * FROM "EMPLOYEE"; 
000005 OUTFILE("outfile") 
000006 ; 

INZU462I HPU control step start: 16:13:09.580.
INZU463I HPU control step end : 16:13:09.591. 
INZU464I HPU run step start : 16:13:09.603.
INZU595I The 'LOCK' Db2 API call is still in progress (partition 0 of 'I910.EMPLOYEE' table)
INZU595I The 'LOCK' Db2 API call is still in progress (partition 0 of 'I910.EMPLOYEE' table)
INZU595I The 'LOCK' Db2 API call is still in progress (partition 0 of 'I910.EMPLOYEE' table) 
INZU410I HPU utility has unloaded 42 rows on lat194 host for I910.EMPLOYEE in outfile. 
INZU465I HPU run step end : 16:13:21.699. 
INZI441I HPU successfully ended: Real time -> 0m12.118807s 
User time -> 0m3.214510s : Parent -> 0m0.032994s, Children -> 0m3.181516s 
Syst time -> 0m3.529462s : Parent -> 0m0.013997s, Children -> 0m3.515465s
In this report, the monitoring value was reached three times while the lock operation was performed. Depending on the monitoring value that was set, you can conclude either that the value is poorly chosen (too small for the duration of this operation) or that something unexpected is affecting the duration of this operation. The execution completed successfully and unloaded the data. If the Optim High Performance Unload lock operation had never completed, the informational message would have continued to be generated until either Optim High Performance Unload was manually terminated or Db2 released the locks that were preventing Optim High Performance Unload from progressing.
db2api_timeout
For Linux and UNIX platforms only. Use this parameter together with the db2api_monitoring parameter to terminate an Optim High Performance Unload operation that is still running when the timing value of the db2api_monitoring parameter is reached. The possible values of the db2api_timeout parameter are yes and no. The default value is no. If no value is set for the db2api_monitoring parameter, setting db2api_timeout to yes is ignored. Here is an example using the parameter:
...
db2api_monitoring=120
...
db2api_timeout=yes
...
When the execution is terminated, an error message is sent. The message specifies the operation for which the timeout occurred. The list of the operations monitored is in the description of the db2api_monitoring parameter.
For example, this Optim High Performance Unload execution report illustrates a situation where the monitoring time is reached while a LOCK operation is performed, resulting in an execution timeout:
INZM031I Optim High Performance Unload for Db2 06.01.00.001(130412) 
         64 bits 04/16/2013 (Linux lat186 3.1.0-1.2-desktop (187dde0) x86_64)
INZI473I Memory limitations: 'unlimited' for virtual memory and 'unlimited' for data segment
       ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8----+----9----+
000001 GLOBAL CONNECT TO SAMPLE;
000002 UNLOAD TABLESPACE
000003 LOCK YES
000004 FLUSH BUFFERPOOLS NO
000005 SELECT * FROM EMPLOYEE;
000006 OUTFILE("outfile")
000007 ;

INZU462I HPU control step start: 15:29:59.597.
INZU463I HPU control step end  : 15:29:59.912.
INZU464I HPU run step start    : 15:30:00.027.
INZU624E A time out occurred for 'LOCK' Db2 API call (partition 0 of 'I958.EMPLOYEE' table).
INZU465I HPU run step end      : 15:30:02.364.
INZU366I HPU return code 8 (reason code 0x123a016)
In this report, the monitoring value was reached when performing the lock operation. The Optim High Performance Unload execution was terminated and ended in error.
db2compr_api
Use this parameter to specify the default compression library name that is loaded when unloading from a compressed backup image.
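The library name depends on your environment; here is a sketch of its setting, with a hypothetical library path:
...
db2compr_api=/usr/lib64/libmycompr.so
...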
db2dbdft
Use this parameter to specify the database name that is used by Optim High Performance Unload when no database name is given (command-line option -d).
db2instance
Use this parameter to specify the instance name that is used by Optim High Performance Unload when no instance name is given (command-line option -i).
db2promote
Use this parameter to specify whether users who have the Db2 authority to SELECT from a table, but do not have the authority to do a QUIESCE SHARE on that table, should be given that authority for the purpose of running an unload.

The QUIESCE is required to unload the data in a consistent state. If db2promote is set to no, the default value of the LOCK option is NO instead of YES. Users who have the authority to use both SELECT and QUIESCE are not affected by the db2promote option.
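For example, to prevent SELECT authority from being promoted for unloads:
...
db2promote=no
...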

db2variables
Use this parameter to specify a file for setting the values of Db2 variables. Set it to the absolute path of a file that contains a list of Db2 variable names with their values.
Here is an example of its setting:
...
db2variables=/home/i1111/mydb2vars.txt
...
dir_cfg
When multiple machines are involved in a Db2 instance, and the Optim High Performance Unload configuration is the same on all these machines, use this parameter to share the same Optim High Performance Unload configuration files. The Optim High Performance Unload configuration files that can be shared are:
  • db2hpu.cfg
  • db2hpu.locale
  • db2hpu.map
  • db2hpu.debug
  • db2hpu.trace
  • remote.locale
In order to share those files:
  • Copy all the configuration files of one machine to a shared location.
  • Update the db2hpu.cfg configuration file in the cfg directory of each machine by deleting all of its content and adding only the dir_cfg parameter.
  • Set the parameter to the absolute path of the directory where all the shared configuration files are located.
Note: For each machine, the shared configuration will be used if the db2hpu.cfg configuration file contains only the dir_cfg parameter with the shared location. If the dir_cfg parameter is not the only parameter or there is no dir_cfg parameter, the shared configuration will be ignored, and the Optim High Performance Unload configuration files on this machine will be used.
Important: The parameter value must be specified as an absolute path of the directory where the shared configuration files are located. For example, if /sharedfs is the shared configuration files location, you must specify dir_cfg=/sharedfs in each of the local configuration files.
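Following the preceding example, with /sharedfs as the shared location, the entire content of the local db2hpu.cfg file on each machine would reduce to this single line:
# Entire content of the local db2hpu.cfg file
dir_cfg=/sharedfs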
doubledelim
Use this parameter to specify the default for the DOUBLE DELIM option in the control file. This parameter applies only to DEL or DELIMITED output formats. If it is ON, the character type columns are scanned for occurrences of the delimiter, and each discovered occurrence is doubled. If BINARY, then only character columns that have FOR BIT DATA are scanned. If OFF, then no scanning occurs. The default setting is BINARY.

You can achieve the best performance by performing as little scanning as possible. You might be able to avoid scanning character columns by selecting a character to use as a delimiter that you know is not used in the unloaded data. If you cannot use a character that is not used in unloaded data, then set the DOUBLE DELIM option to ON in the control files where necessary. If you anticipate that your users will not set the DOUBLE DELIM option to ON where required, you can set it in the configuration file. The user can override whichever setting is made here in their individual control files.
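For example, to make scanning of all character columns the installation default (assuming the lowercase value form used in the sample file at the top of this topic):
...
doubledelim=on
...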

graphic_even_padding
Use this parameter to control how values of GRAPHIC, VARGRAPHIC, LONG VARGRAPHIC, and DBCLOB columns that have an odd size are handled in an ASC output file. By default, these values are padded with a null byte so that the associated area in the output file has an even size. To disable this padding, set this parameter to no. The default value is yes.
Here is an example of its setting:
...
graphic_even_padding=no
...
hidden
To specify that you want to include hidden columns in the unload, set this option to yes. The default value is no.
ixftrunc
Use this parameter to save space when unloading VARCHAR or VARGRAPHIC data in IXF format. Only columns of these two types whose size is greater than the value given to this parameter are considered. With this parameter, each value unloaded for these column types is written with the size of the actual data instead of the maximum size defined for the column. The default value is 20.
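For example, to apply the space saving only to VARCHAR and VARGRAPHIC columns larger than 50:
...
ixftrunc=50
...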
keepalive_time
The sockets created by Optim High Performance Unload for its network communications may stay inactive for a long period of time. In particular, this can happen for a socket created during automatic data migration.
During automatic data migration, a socket is left inactive during the target Load of table data. It may stay inactive for a long period depending upon how long the Load step may take.
Some systems are configured so that sockets which stay inactive too long are automatically disconnected by the system itself. For example, such a situation may be encountered when a firewall is involved in the network configuration. As a result of an unexpected disconnection of one of its sockets, an Optim High Performance Unload task cannot run to completion successfully.
To avoid such a situation, Optim High Performance Unload can create its sockets with specific keepalive characteristics. When a socket is created this way, empty packets are regularly sent through the TCP/IP layer itself to ensure pseudo-activity on it. The time interval between empty packets for Optim High Performance Unload sockets can be tuned independently of any generic operating system socket setting.
This parameter enables the Optim High Performance Unload keepalive mechanism.
There are three ways this parameter can be set:
  • with the value -1: disables the keepalive tuning, so that the Optim High Performance Unload sockets are created without the ability of artificially maintaining an activity on them.
  • with the value 0 (default value): enables the keepalive tuning so that the Optim High Performance Unload sockets are created with the ability of artificially maintaining an activity on them. The time interval between the activation packets will respect the value set at the operating system level.
  • with a positive numeric value: enables the keepalive tuning for Optim High Performance Unload sockets. The time interval between the activation packets is set to the configured value in minutes.
As an example, the following setting will enable socket activation packets every 10 minutes:
...
keepalive_time=10
...
keystore_file
When running Optim High Performance Unload in standalone mode against an encrypted backup, use this parameter to specify the absolute path of the Db2 keystore file to be considered. Depending on the keystore type (PKCS#12, KMIP, or PKCS#11), the file in question is a PKCS#12 file containing the encryption master key to be used, a text file containing the associated KMIP server configuration, or a text file containing the associated PKCS#11 configuration. This parameter is mandatory when running Optim High Performance Unload in standalone mode because, in that case, the keystore information contained in the backup might not be valid for the machine where Optim High Performance Unload is executed.
keystore_type
When running Optim High Performance Unload in standalone mode against an encrypted backup, use this parameter to specify the type of the Db2 keystore file to be considered. Depending on the keystore type (PKCS#12, KMIP, or PKCS#11), it can be set to 'pkcs12', 'kmip', or 'pkcs11'. This parameter is mandatory when running Optim High Performance Unload in standalone mode because, in that case, the keystore information contained in the backup might not be valid for the machine where Optim High Performance Unload is executed.
lobinlinesize
Use this parameter to set the truncation size that is applied to inlined LOB data. The value must be numeric, in kilobytes.
Important: Any setting of this truncation size is ignored when the Optim High Performance Unload task produces IXF output. In that case, the default value of 32700 bytes is kept for the truncation of the LOB data.
Restriction: The truncation size for inlined LOB data cannot be set lower than 32 KB or greater than 1024 KB.
Here is an example of this parameter setting, with a value corresponding to a 64 KB truncation size:
...
lobinlinesize=64
...
maxmemory
Use this parameter to specify the upper limit of system memory in bytes that Optim High Performance Unload can use. If the specified value is too low to proceed with the unload, Optim High Performance Unload will use the minimal requirements.
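For example, to cap the memory that Optim High Performance Unload can use at 1 GB (the value is in bytes):
...
# 1 GB expressed in bytes
maxmemory=1073741824
...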
maxthreads
Use this parameter to specify the maximum number of processing threads that Optim High Performance Unload can use. You can specify this parameter to limit the processing usage for unloads that involve small tables. Setting a lower value allows Optim High Performance Unload to unload several tables at the same time using fewer processing threads per table. The minimum value is 1 and the maximum value is equal to the number of processors.
For example, if you have 4 available processors and you want to unload TABLE1, TABLE2, and TABLE3 with the option maxthreads=1, Optim High Performance Unload processes all three tables at the same time with one processing thread each.
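The scenario above corresponds to the following setting, which restricts each unload to a single processing thread:
...
maxthreads=1
...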
maxselects
When unloading from a backup image, use this parameter to limit the number of tables that are unloaded in parallel. By default, all of the tables specified in the same UNLOAD block are processed in parallel. When many tables are involved, this can lead to memory exhaustion. Limiting the number of tables processed in parallel reduces memory consumption, but increases the time the unload takes because the backup image is read at least once per set of tables unloaded. The best value for this parameter is therefore the maximum number of tables that can be processed in parallel without exhausting the available memory.
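For example, to process at most 4 tables of an UNLOAD block in parallel:
...
maxselects=4
...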
memory_limit
To specify that Optim High Performance Unload can use as much memory as needed to complete a given task, set memory_limit to no. You can only set memory_limit if you have already set the allow_unlimited_memory configuration file parameter to yes.
You can specify the memory_limit parameter at the master configuration file level, in the user-specified configuration files, or at the command-line level.
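For example, the following settings in the master db2hpu.cfg file remove the memory limit (allow_unlimited_memory must be set to yes for memory_limit=no to take effect):
...
allow_unlimited_memory=yes
memory_limit=no
...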
mig_pipe_timeout
Use this parameter to specify the timeout value, in seconds, that applies to the opening of pipes by the reading component of an automatic migration. The value must be numeric. It must be set in the configuration file located on the machine where Optim High Performance Unload is invoked to perform the data migration.
An automatic migration has two internal components that collaborate: one reads the data to be migrated, and one loads this data into the destination. The migrated data is communicated between the two components through streams, which can be files or pipes: the reading component writes the data into these streams, which are read by the loading component. If pipes are used, the reading component cannot open them until the loading component has opened them. A latency can occur before the loading component opens the streams to be consumed, particularly when the migration is performed towards a Db2 destination, for which the loading component invokes the Db2 Load utility under the covers. A timeout, 10 seconds by default, applies to the opening of the pipes by the reading component. If the initial latency of the loading component exceeds this value, the reading component gives up opening the pipes and stops its processing, and the automatic migration fails. In this case, it might be necessary to configure a more suitable value for this timeout.
For example, the following setting configures the timeout for opening pipes within an automatic migration to 180 seconds:
...
mig_pipe_timeout=180
...
min_extent_per_thread
Use this parameter to specify the minimum number of data page extents (linked with the number of used pages for a table) to start another processing thread. This parameter is used only if the use_stats parameter is set to yes. The default value is 6.
nbcpu
Use this parameter to limit the number of work units that are started. This value has an effect on both the memory usage and the degree of parallelism for Optim High Performance Unload. Optim High Performance Unload uses this value as the upper limit for the processing parallelization. The maximum value of this parameter is the number of processors on the system, which is the default setting. The minimum value is 1.
For example, if you have 4 available processors and you want to unload TABLE1, TABLE2, and TABLE3, with the option nbcpu=2, Optim High Performance Unload processes each table consecutively with 2 processing threads.
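The scenario above corresponds to the following setting, which limits parallelization to 2 work units:
...
nbcpu=2
...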
netservice
Use this parameter to specify the service name associated with the Optim High Performance Unload network feature in the services system file. The Optim High Performance Unload installation sets this parameter automatically.
nettohosts
Attention: This parameter is now deprecated. Create a db2hpu.map configuration file in the cfg subdirectory of your main Optim High Performance Unload installation directory to specify the netnames and hostnames associations.
Use this parameter when netnames, as opposed to hostnames, are specified in the second column of the db2nodes.cfg file. Optim High Performance Unload can identify only machines with hostnames. The second column of the db2nodes.cfg file is used to determine whether a database partition is local or remote. Optim High Performance Unload uses this parameter to determine which hostname is related to a given netname. The formatting rules for this parameter follow the formatting rules for lists. If you define this parameter, the format is: nettohosts=host1sw:host1, host2sw:host2...
odpp_path
odpp_version
When using Optim High Performance Unload for ODPP masking processing, if the PATH and VERSION options of the DATAMASKING clause are omitted in the associated control file, Optim High Performance Unload tries to get the expected ODPP path and version from the db2hpu.cfg configuration file. You can set them there by using two dedicated parameters:
  • odpp_path (in place of the PATH option in the DATAMASKING control file clause).
  • odpp_version (in place of the VERSION option in the DATAMASKING control file clause).
Example of their specification:
...
odpp_path=/opt/odpp
odpp_version=9.1
...
Setting these two parameters is the preferred way to specify the data masking method at the configuration file level, compared to setting it with the four following parameters.
For more information: DATAMASKING
odpp_api_loader
odpp_api_parser
odpp_api_adapter
odpp_api_provider
When using Optim High Performance Unload for ODPP masking processing, if the LOAD option of the DATAMASKING clause is omitted in the associated control file, Optim High Performance Unload tries to get the expected library names from the db2hpu.cfg configuration file. You can set them there by using four dedicated parameters:
  • odpp_api_loader (in place of the LOADER library of the LOAD option in the DATAMASKING control file clause).
  • odpp_api_parser (in place of the PARSER library of the LOAD option in the DATAMASKING control file clause).
  • odpp_api_adapter (in place of the ADAPTER library of the LOAD option in the DATAMASKING control file clause).
  • odpp_api_provider (in place of the PROVIDER library of the LOAD option in the DATAMASKING control file clause).
Example of their specification:
...
odpp_api_loader=/opt/odpp/bin/libODPPLoader.so.9.1
odpp_api_parser=/opt/odpp/bin/libODPPParser.so.9.1
odpp_api_adapter=/opt/odpp/bin/libODPPAdapter.so.9.1
odpp_api_provider=/opt/odpp/bin/libODPPProvider.so.9.1
...

The values of these parameters must be the absolute file names of the ODPP libraries concerned. Each required ODPP library must be set, either at the configuration file level or at the control file level. When setting the ODPP libraries, specify the names of existing ODPP libraries so that they can be loaded as expected, and set the appropriate library name for each parameter so that the corresponding ODPP APIs can be found.

For more information: DATAMASKING
openssl_api_crypto
Use this parameter to specify the name of the OpenSSL crypto library, when its effective name differs from the default one.
openssl_api_ssl
Use this parameter to specify the name of the OpenSSL ssl library, when its effective name differs from the default one.
shared_datapart_processing
Use this parameter, for a data-partitioned table within a DPF instance, to enable parallel processing of distinct database partitions and improve performance. It is especially useful for tasks that run on backups managed by a storage manager, because it allows the backups of the various database partitions to be read in parallel. The default behavior is serialized processing, which can lead to poor performance when the backups are taken with a storage manager. The possible values are yes and no; the default value is no. The setting of this parameter applies to any Optim High Performance Unload task. To override it for a specific task, specify a different setting with the SHARED_DATAPART_PROCESSING clause in the control file. To specify this parameter, the format is:
...
shared_datapart_processing=yes
...
For more information: Configuration related to the processing of data-partitions
stacksize
Use this parameter to tune the maximum size, in KB, of the stack for an Optim High Performance Unload process. The value must be numeric. It might be necessary to increase the stack size for an Optim High Performance Unload execution, in particular when the default size is not large enough to successfully perform a task based on a deeply recursive SQL statement, for example an SQL statement with many ORed predicates in its WHERE clause. If this parameter is not set, the stack size is inherited from the environment, through the 'ulimit' value associated with the stack size. Changing that environment value would also change the stack size of Optim High Performance Unload executions, but it would affect any command launched from the updated environment. Using this parameter increases the stack size for Optim High Performance Unload tasks only, without interfering with any other application.
For example, the following setting sets the stack size for Optim High Performance Unload tasks to 256 MB:
...
stacksize=262144
...
Attention: The stacksize parameter is available on Linux and UNIX platforms only.
stagedir
Use this parameter to specify the directory in which you want Optim High Performance Unload to generate temporary files when you unload data from backup images. By default, the system's temporary directory is used.
stage_per_part
Use this parameter to define several locations for backup staging, one location per database partition. Without this option, all the staging files are created in a single directory, and thus on a single file system. You can use this option to ensure that enough space is available and that space is not limited to one directory and one file system for the staging area. The possible values are yes and no; the default value is no.

When this parameter is set to yes in the db2hpu.cfg file, the staging files are distributed per database partition into locations named DBPARTnnn, where nnn is a 3-digit number that matches the associated database partition number. The root location for these directories is the directory that is defined as the staging area: either the directory that is defined through the stagedir configuration parameter, or the /tmp directory by default. These locations must exist; if they do not, an error message is displayed.

When the per-database partition staging feature is enabled, it is systematically applied to the staging space that is related to the processing of the backups of database partitions that contain data during the run phase. If the catalog that is processed during the control phase is retrieved from a catalog database partition backup, this feature is applied only if the CATN option of the USING BACKUP CATALOG clause is specified. In this case, the potential staging files for the catalog will be created in a sub-location that corresponds to the database partition number that is specified to the CATN option. If this option is not specified, the potential staging files for the catalog will be created in the staging root directory directly.
Attention: To enlarge the staging space, the per-database partition locations must be created as symbolic links towards physical paths landing on distinct file systems. If these locations are created as physical directories in the same root location, the staging space will not be increased.

For example, suppose that you have the following configuration for an unload operation from backups of a Db2 instance that has three database partitions, #1, #2, #3:

...

stagedir=/staging

stage_per_part=yes

...

In this case, the following paths must be created before you start any unload operation:

/staging/DBPART001

/staging/DBPART002

/staging/DBPART003

If one of these paths does not exist, an unload operation that needs to create a staging file fails and an error message is displayed.
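The attention note above can be illustrated with a short sketch. The following Python fragment is illustrative only: temporary directories stand in for the mount points on distinct file systems that you would use in practice, and only the DBPARTnnn naming convention comes from the product documentation.

```python
import os
import tempfile

# Illustrative sketch only: temporary directories stand in for mount
# points on distinct file systems (for example, /fs1, /fs2, /fs3).
stagedir = tempfile.mkdtemp()      # stands in for the stagedir value, e.g. /staging
for n in range(1, 4):
    target = tempfile.mkdtemp()    # stands in for a directory on a dedicated file system
    os.symlink(target, os.path.join(stagedir, "DBPART%03d" % n))

print(sorted(os.listdir(stagedir)))
# → ['DBPART001', 'DBPART002', 'DBPART003']
```

Because each DBPARTnnn entry is a symbolic link to a directory on its own file system, the staging space is the sum of the free space on those file systems rather than the free space of a single directory.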

storeprocedure_report
Use this parameter to specify the file where the execution report of an Optim High Performance Unload task that is run through the Optim High Performance Unload stored procedure is written. When the stored procedure is executed, the report for the underlying Optim High Performance Unload execution is returned through the stored procedure output parameter called STDERR. Because this parameter is of the CLOB data type, the Db2 CLP displays at most 8192 bytes of it. Consequently, if the execution report is larger than this limit, it is displayed truncated and you cannot see the full execution result.
This parameter configures Optim High Performance Unload so that the stored procedure writes the entire report to a file instead of returning it through an output parameter. It must be set to an absolute file name.
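Here is an example of its setting; the file name is illustrative:
...
storeprocedure_report=/tmp/hpu_storeproc_report.txt
...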
syncsize
Use this parameter to enable synchronization of the output files to disk. When the operating system is configured so that disk caching might use a large amount of memory, generating large output files can consume significant memory and affect the usability of the machine for other applications. To limit the amount of memory that disk caching consumes during an unload task, this parameter lets you regularly synchronize the output files to disk.
The value assigned to this parameter must be numeric and is expressed in kilobytes (KB). When this parameter is set, each time the size of an output file grows by the configured amount of data, the file is synchronized to disk.
Here is an example of its setting:
...
syncsize=10000
...
tsm_api
Use this parameter to specify the API library name that is to be loaded dynamically when you unload data from backup images that are stored in Tivoli® Storage Manager.
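Here is an example of its setting; the library name is illustrative and depends on your Tivoli Storage Manager client installation:
...
tsm_api=libApiTSM64.so
...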
umask
Use this parameter to override the umask on a remote system and generate files with appropriate permissions. By default, the umask value for remote unloads is the umask value of the xinetd (or inetd) daemon starter, so the permissions of the generated files are restricted by the root umask. When you perform an automatic system migration, this restriction can be problematic because Db2 Load needs the data files to be readable by the instance user of the target database. In some cases (depending on the system configuration), the applied umask is too restrictive and the generated files are not visible to the target database instance user. You can set the Optim High Performance Unload umask parameter to allow the program to override this restrictive umask.
The umask feature is based on the UNIX umask definition: three octal digits define the masks for the user, group, and other permissions. The umask value can take octal values between 000 and 777. The recommended umask value for Optim High Performance Unload is 022; output files are then generated with appropriate permissions to perform a manual or automatic migration.
Restriction:
  • The umask parameter applies to remotely generated files only.
  • The umask definition in Optim High Performance Unload does not allow you to generate data files with every kind of access rights. Optim High Performance Unload always requests that data files be generated with 644 access rights (rw-r--r--). If the umask is more restrictive, the access rights are reduced accordingly. If the umask is less restrictive, the access rights remain rw-r--r--.
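The interaction between the requested 644 access rights and the configured umask can be sketched as follows; the helper function is illustrative, not part of the product:

```python
def effective_mode(requested: int, umask: int) -> int:
    # A umask can only remove permission bits, never add them.
    return requested & ~umask

# With the recommended umask of 022, the requested rw-r--r-- is kept.
print(oct(effective_mode(0o644, 0o022)))   # → 0o644
# A more restrictive umask such as 077 reduces the rights to rw-------.
print(oct(effective_mode(0o644, 0o077)))   # → 0o600
```

This is why a less restrictive umask cannot yield rights beyond rw-r--r--: the mask is only ever subtracted from the requested 644.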
use_netname
Use this parameter to enable or disable the use of netnames when they are defined in the db2nodes.cfg file of the Db2 instance concerned. Such netnames generally correspond to specialized, high-speed network interfaces. You can choose whether Optim High Performance Unload network communications are handled through these interfaces. The default value is yes, so the use of netnames is enabled by default.
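Here is an example of its setting, disabling the use of netnames:
...
use_netname=no
...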
use_stats
Use this parameter to specify whether you want Optim High Performance Unload to consider table statistics from the catalog. This information lets Optim High Performance Unload determine the optimal number of processing threads to use when unloading the tables.

The possible values for this parameter are yes and no. If you specify yes, Optim High Performance Unload can optimize the processing by unloading larger tables in parallel. This option is effective only if the table statistics are current.

Attention: The use of the yes option can significantly reduce Optim High Performance Unload performance under some circumstances. For example, if a particular table has outdated statistics that show that the table has little or no data even though the table has since been updated to include several million rows, Optim High Performance Unload still considers the table small and uses only one processing thread to unload it.

The default value is no.
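Here is an example of its setting, enabling statistics-based parallelism:
...
use_stats=yes
...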

xbsa_api
Use this parameter to specify the API library name that is to be loaded dynamically when you unload data from a backup. This parameter is considered only when you specify the USE XBSA option when unloading from backup images. The USE XBSA option indicates that the backup images are managed by a storage tool that implements the XBSA interface.
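Here is an example of its setting; the library name is illustrative and depends on your XBSA storage tool:
...
xbsa_api=libxbsa.so
...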