Command-line options

You can use the following command-line options with the Optim™ High Performance Unload command.

-a | --application
Use this option to set the name of the client application connecting to Db2®. By default, the client application name associated with a Db2 connection is set to the value db2hpu. You can override this default for a given Optim High Performance Unload task by specifying another name with this command-line option.
Attention: The client application name can also be set in the control file by using the CLIENT APPLICATION NAME option. If a value is set at both levels, the one specified on the command line takes precedence.
Syntax
-a | --application application_name
Variable
application_name

Specify a name to be used as the client application name for the Db2 connection established by Optim High Performance Unload.

Default
db2hpu
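As a sketch, the following hypothetical invocation sets a custom client application name for the connection. The database name SAMPLE, the table name EMPLOYEE, the application name payroll_unload, and the output path are assumed values for illustration only.

```shell
# Hypothetical example: unload table EMPLOYEE from database SAMPLE while
# reporting the client application name "payroll_unload" to Db2.
db2hpu -d SAMPLE -t EMPLOYEE -a payroll_unload -o /tmp/employee.del
```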
--binary-numerics
Use this option to unload numeric data (integer and real data) in binary format. The binary representation used is always big-endian, which is the format expected by Db2 Load. By default, Optim High Performance Unload does not use binary format for numeric data. This option can be used only in combination with the --format option with the asc parameter. You cannot specify the --binary-numerics command-line option if you use a control file to run the unload.
Syntax
--binary-numerics
Variable
None.
Default
None.
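Because this option requires the ASC output format, a hypothetical invocation combines it with --format asc. The database, table, and output names below are assumed values for illustration only.

```shell
# Hypothetical example: unload in ASC format with numeric columns
# written in big-endian binary representation.
db2hpu -d SAMPLE -t SALES --format asc --binary-numerics -o /tmp/sales.asc
```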
-c | --catn
The value specified for this option identifies the database partition number that contains the catalog in a multi-partitioned database. The parameter is considered only when unloading from a backup image. Because you can unload from a backup image only if you use a control file, the -c option works only together with the -f option, and only if you specify the USING BACKUP CATALOG clause in the GLOBAL block of the control file.
Attention: If you specify the CATN option in the control file, it overrides the --catn command-line option.
Syntax
-c | --catn catalog_database_partition_number
Variable
catalog_database_partition_number
Default
If the CATN option is not set in the control file, the database partition 0 is assumed.
--check-consistency
Use this option to verify that the same version of Optim High Performance Unload is installed on all the machines to be considered for a given execution. To perform this verification, the original Optim High Performance Unload task gathers the installed product version from all the remote machines implied by the control file, and then compares those versions to its own version.

For example, suppose a usage scenario involves 10 machines, all with the same version of Optim High Performance Unload installed, and you want to upgrade Optim High Performance Unload to a more recent code level. Imagine that, by mistake, the upgrade is performed on only 9 of the 10 machines. If Optim High Performance Unload is then launched for this scenario, the execution report might show daemon errors, because the different Optim High Performance Unload code levels involved are not compatible with the protocol that initiates communication between the Optim High Performance Unload tasks running on the machines concerned. In that situation, you can launch Optim High Performance Unload with this command-line option to identify a potential version discrepancy.

This command-line option works only if the Optim High Performance Unload versions installed on all the machines to be considered are at a code level recent enough to support it. Otherwise, the behavior when using this option is unpredictable.

Syntax
--check-consistency
Variable
None.
Default
None.
--credentials
Use this option to specify credentials at a user level. The credentials information is stored in the db2hpu.creds configuration file. See Defining user credentials for more information.
You can specify one of the following credentials types:
remote
Specify this value to create credentials that will be used for tasks that are performed on a database of a remote Db2 node that is cataloged locally with a Db2 client. To establish a connection, a user name and a password must be specified in the db2hpu.creds configuration file.
local
Specify this value to create credentials that will be used for tasks that are performed on a database of a local Db2 instance. Use this option if you want to connect to the database by using credentials of another user. To establish a connection, a user name and a password must be specified in the db2hpu.creds configuration file.
tsm
Specify this value to create credentials that will be used for tasks that are performed on backup images that have been created by IBM® Tivoli® Storage Manager. Use this option if you need to connect to IBM Tivoli Storage Manager that is configured in such a way that an explicit authentication is needed. To establish a connection, a user name and a password must be specified in the db2hpu.creds configuration file.
keystore
Specify this value to create credentials that will be used for tasks dealing with encrypted databases or backups, which need an encryption master key. Such credentials might be necessary if the associated keystore is a local PKCS#12 keystore, or a PKCS#11 keystore on HSM hardware.
If the encryption environment is based on a local PKCS#12 keystore, access to this file is performed internally through a call to the IBM GSKit tool. The keystore file is protected by a password. Either this password is stashed in a separate file associated with the keystore, so that utilities accessing the keystore file can use the password without any further configuration, or the password is not stashed and must be explicitly passed to any utility that needs to access the keystore file. Keystore credentials must be defined for Optim High Performance Unload when the keystore file password is not stashed.
If the encryption environment is based on a PKCS#11 keystore on HSM hardware, keystore credentials must be defined for Optim High Performance Unload, even if the password protecting keystore access is already stashed in a separate file.
cloudant
Specify this value to create credentials that will be used for automatic migration tasks that are performed against a Cloudant destination. To establish a connection, a password must be specified in the db2hpu.creds configuration file. See Optim High Performance Unload configuration for Big Data, Db2, Object Storage or PostgreSQL destinations for more information.
couchdb
Specify this value to create credentials that will be used for automatic migration tasks that are performed against a CouchDB destination. To establish a connection, a password must be specified in the db2hpu.creds configuration file. See Optim High Performance Unload configuration for Big Data, Db2, Object Storage or PostgreSQL destinations for more information.
mongodb
Specify this value to create credentials that will be used for automatic migration tasks that are performed against a MongoDB destination. To establish a connection, a password must be specified in the db2hpu.creds configuration file. See Optim High Performance Unload configuration for Big Data, Db2, Object Storage or PostgreSQL destinations for more information.
warehouse
Specify this value to create credentials that will be used for automatic migration tasks that are performed against a Db2 Warehouse destination, or for tasks unloading data from a Db2 Warehouse source. To establish a connection, a password must be specified in the db2hpu.creds configuration file. See Optim High Performance Unload configuration for Big Data, Db2, Object Storage or PostgreSQL destinations for more information about the automatic migration tasks.
swift
Specify this value to create credentials that will be used for automatic migration tasks that are performed against a Swift destination. To establish a connection, a password must be specified in the db2hpu.creds configuration file. See Optim High Performance Unload configuration for Big Data, Db2, Object Storage or PostgreSQL destinations for more information.
postgresql
Specify this value to create credentials that will be used for automatic migration tasks that are performed against a PostgreSQL destination. To establish a connection, a password must be specified in the db2hpu.creds configuration file. See Optim High Performance Unload configuration for Big Data, Db2, Object Storage or PostgreSQL destinations for more information.
aws_s3
Specify this value to create credentials that will be used for automatic migration tasks that are performed against an Amazon S3 or S3 compatible destination, if the underlying upload command is based on the cURL tool. To establish a connection in such a case, a secret key must be specified in the db2hpu.creds configuration file. See Optim High Performance Unload configuration for Big Data, Db2, Object Storage or PostgreSQL destinations for more information.
azure
Specify this value to create credentials that will be used for automatic migration tasks that are performed against an Azure destination, if the underlying upload command is not based on the cURL tool. To establish a connection in such a case, an account key must be specified in the db2hpu.creds configuration file. See Optim High Performance Unload configuration for Big Data, Db2, Object Storage or PostgreSQL destinations for more information.
ibm_cos
Specify this value to create credentials that will be used for automatic migration tasks that are performed against an IBM Cloud Object Storage destination, if the underlying upload command is based on the cURL tool. To establish a connection in such a case, a secret key must be specified in the db2hpu.creds configuration file. See Optim High Performance Unload configuration for Big Data, Db2, Object Storage or PostgreSQL destinations for more information.
Syntax
--credentials remote | local | tsm | keystore | cloudant | couchdb | mongodb | warehouse | swift | postgresql | aws_s3 | azure | ibm_cos
Variable
None.
Default
None.
-d | --database
Use this option to specify the database that contains the tables that you want to unload. Optim High Performance Unload establishes a separate connection to the database.
Syntax
-d | --database database_name | database_name:url:port[:ssl]
Variable
database_name

Specify a valid Db2 database name. The default is the database name specified in the db2hpu.cfg file. Optim High Performance Unload converts the name to uppercase unless you enclose it in double quotation marks, such as "database_name".

database_name:url:port[:ssl]

Specify a set of three or four values, separated by colons: the Db2 Warehouse database name, the URL of the associated server, the port number to be used for the connection to the Db2 Warehouse database, and, optionally, whether an SSL connection is used. Specify the 'yes' value for an SSL connection, or the 'no' value otherwise; 'no' is the default. For this option specification to be parsed as expected, you must also specify the --warehouse command-line option. Otherwise, the database specification is treated as a whole as a single Db2 database name, which would not be a valid one.

Default
db2dbdft in db2hpu.cfg
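A hypothetical invocation using the colon-separated Db2 Warehouse form might look as follows. The database name BLUDB, the server wh.example.com, port 50001, the credentials alias mywh, and the table name are all assumed values for illustration; note that the --warehouse option is required for this form to be parsed as intended.

```shell
# Hypothetical example: unload from a remote Db2 Warehouse database over
# an SSL connection (fourth field set to "yes").
db2hpu --warehouse mywh -d BLUDB:wh.example.com:50001:yes -t STOCK -o /tmp/stock.del
```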
--data-server-driver
Use this option to connect to a remote Db2 database by using IBM Data Server Driver. This driver allows you to connect to a remote Db2 database from a client machine where the driver is installed; in that case, a Db2 client does not even need to be installed on the machine in order to unload data from a remote Db2 database.
Syntax
--data-server-driver data_server_driver_path
Variable
data_server_driver_path
Specify the root path where IBM Data Server Driver is installed.
Optim High Performance Unload uses this path to locate the library that maps the Db2 APIs needed for its execution.
The use of IBM Data Server Driver can be enabled only if the -r command-line option is also specified for a data unload from a remote Db2 database. In this case, the environment name specified as the argument of the -r command-line option must be the same as the name of the database concerned.
--db2
Use the --db2 option to indicate whether or not a given order can or must be processed through Db2. You cannot specify the --db2 command-line option if you use a control file to run the unload.

NO indicates that if the order cannot be processed directly by Optim High Performance Unload (because the syntax of the SELECT statement is too complex or otherwise not supported), the unload run will fail.

YES indicates that if the SELECT statement is too complex to be handled directly by unload, the statement will be handed to Db2 to extract the rows.

FORCE indicates that Db2 will be used to extract the requested rows. If your SELECT statement contains DBCS words and you do not specify DB2 YES, Optim High Performance Unload will not be able to process the table.
Syntax
--db2 yes | no | force
Variable
None.
Default
None.
Attention: If the db2 parameter in the db2hpu.cfg file is not set, this option defaults to YES. Otherwise, this option must be explicitly set to change the value set for db2 in the db2hpu.cfg file.
--debug
Use this option to run the Optim High Performance Unload executable which has been compiled with debug rather than optimized compiler options. This option must be used only under the direction of IBM.

The --debug option causes Optim High Performance Unload to run in debug mode.

Depending on the platform, Optim High Performance Unload installs either only 32-bit executables or only 64-bit ones. In all cases, it installs two binaries: a normal one and a debug one. The debug version is compiled with compiler debug options set and contains additional Optim High Performance Unload trace calls.
Specify --debug to cause the debugging process to begin. See the --traces option in this section for information on running a trace without using the --debug option.
Attention: The --traces option can be used with or without --debug, and it controls whether a trace is taken. Either executable can produce traces; however, the trace is more extensive if --debug is also specified. The Optim High Performance Unload debug module runs correctly with just the --debug option and without the --traces option, but Optim High Performance Unload will not produce any trace output.
Syntax
--debug
Variable
None.
Default
None.
--format
Use this option to choose the output format to be generated. You cannot specify the --format command-line option if you use a control file to run the unload.
Syntax
--format asc | del | delimited | dsntiaul | ixf | json | orc | parquet | xml
Variable
None.
Default
del
Tip: For a detailed description of each output format, refer to the FORMAT control file option description.
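A hypothetical invocation selecting a non-default output format might look as follows; the database, table, and output names are assumed values for illustration only.

```shell
# Hypothetical example: unload in IXF format instead of the default
# delimited (del) format.
db2hpu -d SAMPLE -t DEPT --format ixf -o /tmp/dept.ixf
```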
-f | --file
Use this option to specify the name of the control file that you want Optim High Performance Unload to use.

If you provide unload instructions in a control file, this option is required to identify it, either by a file name or by the stdin keyword.

Syntax
-f | --file control_file_name | stdin
Variable
control_file_name | stdin
Specify a file name and path that is valid for the operating system on which Db2 is running. Specify the stdin keyword if you want Optim High Performance Unload to read control information from standard input.
Important: To test the syntax in your control file before actually unloading any data, use the --noexecute option with the --file option.
Default
None.
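Two hypothetical invocations illustrate the file and stdin forms; the control file name unload.ctl is an assumed value for illustration only.

```shell
# Hypothetical example: check the syntax of a control file without
# unloading any data (-n is the --noexecute option).
db2hpu -f unload.ctl -n

# Hypothetical example: read the control statements from standard input.
cat unload.ctl | db2hpu -f stdin
```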
-h | --help
Use this option to display Optim High Performance Unload help.
Syntax
-h | --help
Variable
None.
Default
None.
-i | --instance
Use this option to specify the Db2 instance for the database that contains the tables that you want to unload. Optim High Performance Unload establishes a separate connection to the database.
Syntax
-i | --instance instance_name
Variable
instance_name

Specify a valid Db2 instance name. The default is the instance name specified in the db2hpu.cfg file.

Default
The db2instance setting in the db2hpu.cfg file if one exists. If the db2instance is not set, Optim High Performance Unload takes the value from the DB2INSTANCE environment variable. Otherwise, an error message is returned and the unload fails.
--import-credentials
Use this option to import credentials from the file passed to it into the db2hpu.creds configuration file. The file must contain valid credentials that were previously created in it by using the --to-file option.
The file name passed to the --import-credentials option can have an absolute or relative path, and it must correspond to an existing file. Its content is checked to ensure that it contains valid credentials.
The file can contain several credentials. If it contains valid credentials, they are imported into the db2hpu.creds configuration file.
The following rules are applied when importing the credentials found into the file:
  • if the credentials do not exist in the db2hpu.creds configuration file, they are appended as new credentials into it
  • if the credentials already exist in the db2hpu.creds configuration file, they are updated
The advantage of the credentials import procedure is that it is automatic, with no interactive prompting. It makes it possible to automate the creation of credentials from a file prepared for this purpose.
Syntax
--import-credentials filename
Variable
filename
Default
None.
-k | --kill
Use this option to stop the running Optim High Performance Unload process on UNIX™ and Linux™ systems in a clean and secure way.
Restriction: This option is not available on Windows™ systems.
Syntax
-k | --kill process_ID
Variable
process_ID

Specify a valid ID of the process of the Optim High Performance Unload task that you want to stop.

Important: Make sure that you have the appropriate permissions to stop the specified process.
Default
None.
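A hypothetical invocation to stop a running task might look as follows; the process ID 12345 is an assumed value, for example taken from the informational message displayed when the task was launched.

```shell
# Hypothetical example: cleanly stop the running Optim High Performance
# Unload task whose process ID is 12345 (UNIX and Linux only).
db2hpu -k 12345
```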
--list-backups
Use this option to enable the backups listing mode when launching Optim High Performance Unload.
If the backups listing mode is enabled, only the backups involved are displayed. If these backups are located on disk, their directory locations are displayed too. After that report, Optim High Performance Unload exits normally, and no data is unloaded during this execution.
Syntax
--list-backups
Variable
None.
Default
None.
The --list-backups option must be used for an execution that is based on a control file. When the --list-backups option is used with a control file that contains a USING BACKUP clause, this USING BACKUP clause must not imply a FULL backup. You cannot use the --list-backups option unless each UNLOAD block of the control file contains a USING BACKUP DATABASE clause. A USING COPY clause cannot be used in an UNLOAD block if the --list-backups option is specified. For more information about this option, review the following section: Listing backups
--load-only
Use this option to run only the part of the Optim High Performance Unload processing that is related to Db2 Load.
For a standard unload execution, this option limits the processing to the generation of a Db2 Load command.
For an automatic migration, only the Db2 Load execution is performed.
Syntax
--load-only
Variable
None.
Default
None.
For more information about this option, review the following section: Limiting Optim High Performance Unload processing to the Db2 Load part
--memory-limit
Use this option to control whether Optim High Performance Unload limits its memory usage. To specify that Optim High Performance Unload can use as much memory as needed to complete a given task, set --memory-limit to no. You can set --memory-limit to no only if you have already set the allow_unlimited_memory configuration file parameter to yes.
Syntax
--memory-limit yes | no
Variable
None.
Default
Yes.
-m | --message
Use this option to specify the destination where information, warning, and error messages are written.
Syntax
-m | --message message_file
Variable
message_file

Specify a file name that is valid for the operating system on which Optim High Performance Unload is running. If you do not specify a file name, messages are sent to standard error (stderr).

If you specify the name of an existing file, Optim High Performance Unload appends the new message information to the end of the file. The program does not overwrite any information that is already in the file.

Default
stderr
--monitor
Use this option to monitor the progress of a specific running Optim High Performance Unload execution by specifying the process ID of the task to be monitored. You can determine this process ID from the execution report of the task to be monitored: an informational message displays it when the task is launched.
Syntax
--monitor pid
Variable
pid
Default
None.
-n | --noexecute
Use this option to test the syntax of your control-file script. The --noexecute option sends output to the message file or to standard error (stderr) instead of to standard out (stdout). If you do not specify this option, the default behavior is to run the control file script.
Note: There is no control file option that is equivalent to the --noexecute command-line option.
Syntax
-n | --noexecute
Variable
None.
Default
None.
-o | --output
Use this option to specify the full output file name, including the path.

Optim High Performance Unload unloads all data into the specified file. By default, if this option is missing and there is no OUTFILE clause in a control file, then all lines will be sent to standard out (stdout).

Note: If you use the --file command-line option to provide a control file, any output clause in that control file overrides what is specified with this option.
Syntax
-o | --output output_file_name
Variable
output_file_name
Default
Standard out (stdout)
--packed-decimal
Use this option to unload decimal data in packed representation. This option can only be used in combination with the --format option with the asc parameter. When unloading in ASC format, and this option is not specified, decimal data is unloaded in extended format. You cannot specify the --packed-decimal command-line option if you use a control file to run the unload.
Syntax
--packed-decimal
Variable
None.
Default
None.
--partition
Use this option to indicate which database partitions will participate in the unload for a partitioned database environment. This option is ignored in a single-partition environment. In the following information, n1 and n2 represent database partition numbers.
Requirement: For non-partitioned Db2 instances, if the database partition number is specified, then you must use either PART(ALL) or PART(0).
Syntax
--partition all | current | n1[{-|,} n2...]
Variable
all | current | n1[{-|,} n2...]

Use the default value of all to indicate that data from the source table will be unloaded from all of the nodes defined for the source table. (The source table is the table specified either on the command line or in the control file.)

Enter current to indicate that source table data will be unloaded from all the database partitions on the machine where Optim High Performance Unload is started.

Enter n1[{-|,} n2...] to restrict the source table data unloaded to the nodes listed. This command-line option overrides nodes that are listed in the control file. The nodes must meet the following conditions:
  • The database partitions must be defined in the db2nodes.cfg file.
  • The database partitions must follow the Db2 node number syntax.
  • The database partitions must be specified only once.
  • The database partitions must be in the source table's node group.
Default
all
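Two hypothetical invocations illustrate the list and range forms of the partition selection; the database, table, output names, and partition numbers are assumed values, and the partitions must satisfy the conditions listed above (defined in db2nodes.cfg, in the source table's node group, and specified only once).

```shell
# Hypothetical example: unload only from partitions 1 and 3.
db2hpu -d SAMPLE -t TRANS --partition 1,3 -o /tmp/trans.del

# Hypothetical example: unload from partitions 0 through 3.
db2hpu -d SAMPLE -t TRANS --partition 0-3 -o /tmp/trans.del
```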
-r | --remote
Use this option when you want to run Optim High Performance Unload to unload data from a remote Db2 instance that is either cataloged locally on a Db2 client or accessed through IBM Data Server Driver. Because Optim High Performance Unload is not installed on the remote data servers, Optim High Performance Unload processes remote unloads only with the DB2 FORCE option, even if it is not explicitly specified. If DB2 is set to YES or NO, an error is returned.
Attention: If the remote Db2 instance has its database manager configured for authentications at the server level, you need to create user credentials with the --credentials command line option.
Syntax
-r | --remote name
Variable
Specify the name of the remote node that is cataloged locally if using a Db2 client, or specify the data source name if using IBM Data Server Driver.
Default
None.
--select
Use this option when you want to issue a select request without having to embed it into a full Optim High Performance Unload request. This option is not compatible with the --table option; however, it can be used with the --file option if the specified control file contains only a select request.
Syntax
--select
Default
None.
Examples
The following example uses a separate control file file.ctl that contains a select request such as 'select * from employee'. The command will execute the select request and output the result into the file.out file:
db2hpu -f file.ctl -i db2inst1 -d sample -o file.out --select

The following example uses a select request taken from the stdin. The command will select all of the entries from the employee table of the sample database and output the results into the file.out file:

echo 'select * from employee' | db2hpu -f -i db2inst -d sample -o file.out --select
-s | --standalone
Use this option when you are extracting data from a backup taken from a different system. You can also use the option to repartition data that has been taken from a backup in stand-alone mode. This option tells Optim High Performance Unload not to look for the Db2 instance files and directories on this system. This option modifies the meaning of the -i option: when -s is specified, -i should be specified with the Db2 instance owner ID of the instance that took the backup.

The optional parameter gives you the ability to unload data from a Db2 V9 compressed backup image in standalone mode. The parameter allows Optim High Performance Unload to find the Db2 compression library, which is needed to process an unload from a Db2 V9 compressed backup image.

Syntax
-s | --standalone db2_installation_path
Variable
Specify the Db2 installation path.
Default
None.
-t | --table
Use this option to specify the name of the table that contains data to unload. One of the following options is required: -f, -t, or -v. Only one table name can be specified from the command line. If you are providing unload instructions in a control file, you must supply the table name in the control file. You can also use this option to unload Db2 summary tables.
If the table name is in lowercase, you cannot specify db2hpu -t "table" because the double quotation mark (") is a special character for the command interpreter. Instead, you must specify the following command:
db2hpu -t \"table\"
The double quotation mark characters will then be accounted for, and the table name will be accepted in lowercase.
Syntax
-t | --table [[SYSTEM.]owner.] table_name
Variable
owner and table_name

If you are not the owner of the Db2 table, specify a value for owner, followed by the name of the table that you want to unload. Optim High Performance Unload converts the name to uppercase unless it is enclosed by double quotation marks, such as: "table_name".

Default
None.
--tenant
Use this option to specify a tenant to be considered for a given task that unloads data from a local database.
The value must be a Db2 identifier that corresponds to an existing tenant name. The default value is 'system', which corresponds to the tenant called SYSTEM, the default tenant for Db2 as well. When this option is specified, the executed task can process only tables that are part of the specified tenant. If a tenant is specified in the control file with the TENANT clause, it is overridden by the tenant specified on the command line. This option can be used only if the Db2 environment is at least V12.1, and it cannot be specified on the command line together with the --remote or --warehouse options.
Syntax
--tenant tenant_name
Variable
tenant_name

Specify a Db2 identifier corresponding to an existing tenant name.

Default
system
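A hypothetical invocation restricting a task to one tenant might look as follows; the tenant name TEN1 and the database, table, and output names are assumed values for illustration only.

```shell
# Hypothetical example: process only tables that belong to tenant TEN1
# on a local database (requires Db2 V12.1 or later).
db2hpu -d SAMPLE -t ACCOUNTS --tenant TEN1 -o /tmp/accounts.del
```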
--to-file
Use this option to specify that credentials be created into the file passed to it, instead of creating them into the db2hpu.creds configuration file.
The file name passed to the --to-file option can have an absolute or relative path. If the file already exists, its content is checked to ensure that it contains valid credentials, and then the new credentials are appended to it. If the file does not exist, it is created with the new credentials. The --to-file option cannot be specified without the --credentials option.
Syntax
--to-file filename
Variable
filename
Default
None.
--traces
Use this option when IBM asks you to activate an Optim High Performance Unload serviceability trace. This option must be used only under the direction of IBM.
When you start a trace, Optim High Performance Unload stores the trace files in the directory from which you started the db2hpu command and provides a file for each keyword that is processed. The file names are KEYWORD_nnnnn where KEYWORD is a trace keyword and nnnnn is the process id of the unload that ran. For example:
$ARGS_456789
You will be asked to tar the files, and send them to IBM Software Support.

Trace files are created for all trace keywords that apply. The keywords are contained in the db2hpu.trace file that Optim High Performance Unload creates during installation. The trace runs using the db2hpum file, an unload executable file compiled in optimized mode.

The --traces option can also be used together with the --debug option. If the --traces option is used with the --debug option, the keywords used are the ones specified in the db2hpu.debug file, which contains more trace keywords. Moreover, when --debug is specified, the trace runs using the db2hpum_debug file, an unload executable file compiled in debugging mode.

Syntax
--traces
Variable
None.
Default
None.
--umask
Use the --umask option to override target system permissions when you perform an automatic data migration. By default, the umask value for remote unloads is the umask value of the xinetd (or inetd) daemon starter. The permissions of the generated files are then restricted by the root umask. When you perform an automatic data migration, this restriction can be problematic because the Db2 Load needs data files to be readable by the instance user of the target database. In some cases (depending on the system configuration) the applied umask is too restrictive, and generated files are not visible by the target database instance user. The Optim High Performance Unload umask feature allows you to override this restrictive umask and generate files with suitable permissions.
The umask feature definition is based on the UNIX umask definition. Three octal digits are used to define permissions for user/group/other masks. The umask value can take octal values between 000 and 777. The recommended umask value for Optim High Performance Unload is 022. Output files will be generated with correct rights to perform manual or automatic migration.
Restriction:
  • The umask parameter applies to remotely generated files only.
  • The umask definition in Optim High Performance Unload does not allow you to generate data files with every kind of access rights. Optim High Performance Unload always requests that data files be generated with 644 access rights (rw-r--r--). If the umask is more restrictive, the access rights are reduced. If the umask is less restrictive, the access rights remain rw-r--r--.
Syntax
--umask umask_value
Variable
umask_value

The umask octal value between 000 and 777.

Default
None. Optim High Performance Unload will use the target system's umask value.
Example
--umask 022
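The masking rule described in the restriction above can be sketched as plain shell arithmetic: the requested 644 rights are ANDed with the complement of the umask, so a more restrictive umask reduces them while a permissive one cannot raise them above 644. This is a sketch of the standard UNIX umask rule, not code taken from the product.

```shell
# Sketch of the rule: requested rights (always 0644) are reduced by the
# umask; a permissive umask cannot raise them above 644.
requested=0644
for umask_value in 0022 0077; do
  printf 'umask %s -> %o\n' "$umask_value" $(( requested & ~umask_value & 0777 ))
done
```

With the recommended umask of 022, the files keep the full 644 rights; a stricter umask such as 077 reduces them to 600.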
-v | --version
Use this option with the db2hpu command if you are not sure which version of Optim High Performance Unload is running. The --version option displays the Optim High Performance Unload version that is currently running, the release, and the modification number in the following format:
vv.rr.mmm.ii
The modification number (mmm) is the number of the most recent Fix Pack that was applied, and ii is the interim fix number.
Syntax
-v | --version
Variable
None.
Default
None.
--warehouse
Use this option when you want to run Optim High Performance Unload with a remote Db2 Warehouse database as a data source. This option is compatible only with the DB2 FORCE option, because the associated SQL statement against such a Db2 Warehouse database cannot be performed in any way other than through the Db2 SQL engine. If the DB2 option is not explicitly specified, its default value becomes FORCE when this command-line option is used. If the DB2 option is set to YES or NO, an error is returned.
Syntax
--warehouse alias
Variable
alias
Specify the name of the warehouse section in the user credentials file to be used for connecting to the remote Db2 Warehouse database.
Default
None.