LOADDEST

The LOADDEST clause specifies the destination to be considered for a set of data to be transferred towards it. The LOADDEST clause is taken into account only if a data migration is performed or if a LOADFILE clause is also specified; otherwise the LOADDEST clause is ignored. Five types of environments are supported: Db2®, NoSQL, Hadoop, Object Storage and PostgreSQL environments.

When specifying a destination with the LOADDEST clause, it is generally necessary to configure various parameters associated with this destination. These parameters must be set in the 'db2hpu.dest' configuration file. For more information, see Optim High Performance Unload configuration for Big Data, Db2, Object Storage or PostgreSQL destinations.

Specifying a LOADDEST clause is mandatory when a data upload command must be prepared for a data file in the JSON or XML output format.

DB2 option

This option specifies that the destination is a Db2 environment. The supported Db2 databases are standard databases, remote databases cataloged locally, and Db2 Warehouse databases. No specific keyword is needed for a standard Db2 database. Remote Db2 databases cataloged locally and Db2 Warehouse databases are specified with the REMOTE and WAREHOUSE keywords, respectively.

Standard databases
There is no specific keyword to be added: the destination is a standard Db2 database. This is the default destination, which is considered if no explicit LOADDEST clause is specified. The Db2 destination is consistent with the following output formats: DEL, DELIMITED, ASC, IXF, DSNTIAUL and XML.
When a Db2 Load command is written into a file specified through the LOADFILE clause, such a file contains:
  • an optional Db2 connection step, if the WITH STANDARD AUTH option has been specified in a LOADDEST clause
  • the Db2 Load command itself
  • an optional Db2 disconnection step, if the WITH STANDARD AUTH option has been specified in a LOADDEST clause
When a Db2 Load command has to be prepared with an authentication step, its content depends on whether the alias option is specified. If the alias option is specified, this step is based on various parameters whose values must be set in the 'db2hpu.dest' configuration file, in its section associated with the Db2 destination. If no alias option is specified, nothing needs to be set in the 'db2hpu.dest' configuration file.
When considering an automatic migration, credentials of local type must be defined in the 'db2hpu.creds' file.
"alias" option
This option is optional. Its purpose is to support several sections relative to a Db2 destination configured within the configuration file for destinations; the alias is the means of distinguishing them. It must correspond to the alias set in a section relative to the destination type considered in the 'db2hpu.dest' configuration file, and its value is case sensitive. When such an option is specified, Optim High Performance Unload searches the 'db2hpu.dest' configuration file for a section called Db2 that contains an 'alias' parameter set to the alias considered. A sketch of such a configuration is given below.
When considering a scenario of an automatic data migration towards a standard Db2 destination, it is necessary to specify either a corresponding LOADDEST clause with an appropriate alias, or a TARGET ENVIRONMENT clause.
When considering a standard Db2 destination, a LOADDEST clause with an alias and a TARGET ENVIRONMENT clause cannot be specified together.
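As an illustration, here is a minimal sketch of a standard Db2 destination configured with an alias. The section and parameter names ([Db2], alias, dbname, user) are those described above, while the way the alias and the WITH STANDARD AUTH option are written inside the clause is only an assumed form; refer to the syntax diagram for the exact syntax:

LOADDEST(DB2 "proddb" WITH STANDARD AUTH)

with a matching section in the 'db2hpu.dest' configuration file:

 [Db2]
  alias=proddb
  dbname=SAMPLE
  user=db2inst1

With such a configuration, the authentication step of the generated Db2 Load command file connects to the SAMPLE database as db2inst1 before running the Db2 Load command.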
Remote databases cataloged locally
The REMOTE keyword must be specified: the destination is a remote Db2 environment cataloged locally. The remote Db2 destination is consistent with the following output formats: DEL, DELIMITED, ASC and IXF.
If both LOADFILE and OUTFILE clauses are associated with a remote Db2 destination, the OUTFILE clause must contain an absolute path specification, because a Db2 Load command for a remote database must refer to an absolute file name.
When a Db2 Load command is written into a file specified through the LOADFILE clause, such a file contains:
  • an optional Db2 connection step, if the WITH STANDARD AUTH option has been specified in a LOADDEST clause
  • the Db2 Load command itself
  • an optional Db2 disconnection step, if the WITH STANDARD AUTH option has been specified in a LOADDEST clause
When a Db2 Load command has to be prepared with an authentication step, this step is based on various parameters whose values must be set in the 'db2hpu.dest' configuration file, in its section for the remote Db2 destination.
When considering an automatic migration, credentials of remote type must be defined in the 'db2hpu.creds' file.
Note: the WITH option of the LOADMODE clause is inconsistent with a remote Db2 destination.
"node" option
This option is optional and can be specified only with the REMOTE option. Its purpose is to support several sections relative to a remote Db2 node configured within the configuration file for destinations; the node name is the means of distinguishing them. It must correspond to the name of a remote Db2 node configured in the 'db2hpu.dest' file, and its value is case sensitive. When such an option is specified, Optim High Performance Unload searches the 'db2hpu.dest' configuration file for a section called RemoteDB2 that contains a 'node' parameter set to the node name considered. A sketch of such a configuration is given below.
If this option is not specified, the first section called RemoteDB2 found in the 'db2hpu.dest' configuration file is taken into account by the Optim High Performance Unload task.
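For example, a remote Db2 destination tied to a specific cataloged node might be configured as follows. The [RemoteDB2] section name and the node, dbname and user parameters are those described above; the way the REMOTE keyword and the node name appear inside the clause is only an assumed form, so refer to the syntax diagram for the exact syntax:

LOADDEST(DB2 REMOTE "mynode")

with a matching section in the 'db2hpu.dest' configuration file:

 [RemoteDB2]
  node=mynode
  dbname=REMOTEDB
  user=db2load

Because the destination is remote, an OUTFILE clause used together with the LOADFILE clause must then contain an absolute path, as explained above.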
Db2 Warehouse databases
The WAREHOUSE keyword must be specified: the destination is a Db2 Warehouse environment. The WAREHOUSE option is only consistent with the DEL and DELIMITED output formats.
Both IBM Db2 Warehouse on Cloud environments (IBM Bluemix®) and IBM Db2 Warehouse environments are supported. All the information about these environments is available here: https://www.ibm.com/us-en/marketplace/cloud-data-warehouse
If an upload command specified through the usage of a LOADFILE clause has to be generated, two tools can be used for it:
  • CLPPlus, which can be found either in the binaries directory of the Db2 installation or in the Db2 Warehouse driver package (downloadable from the IBM website).
  • cURL, which can be downloaded from its official website http://curl.haxx.se/download.html
In such a case, the file generated contains:
  • an optional authentication step, if a WITH STANDARD AUTH option has been specified in the LOADDEST clause
  • the upload command itself
If an automatic migration is performed, its underlying execution can be based on two methods:
  • the usage of the Db2 Load API
  • the usage of the cURL tool
When an upload command has to be prepared for a Db2 Warehouse destination, its generation is based on various parameters whose values must be set in the 'db2hpu.dest' configuration file, in its section for the Db2 Warehouse destination considered.
"alias" option
This option is optional. Its purpose is to support several sections relative to a given destination within the configuration file for destinations; the alias is the means of distinguishing them. It must correspond to the alias set in a section relative to the destination type considered in the 'db2hpu.dest' file, and its value is case sensitive. When such an option is specified, Optim High Performance Unload searches the 'db2hpu.dest' configuration file for a section corresponding to the destination chosen and containing an 'alias' parameter set to the alias considered. A sketch of such a section is given after this description.
If this option is not specified, the first section corresponding to the destination chosen found in the 'db2hpu.dest' configuration file is taken into account by the Optim High Performance Unload task.
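As an illustration, a Db2 Warehouse destination section built from the parameters named in this description (alias, url, port, dbname, user) could look like the following sketch; the section name shown here is only an assumed placeholder, so check the destination configuration documentation for the exact section name expected in the 'db2hpu.dest' file:

 [DB2Warehouse]
  alias=cloudwh
  url=mywarehouse.example.com
  port=50001
  dbname=BLUDB
  user=bluadmin

When the CLPPlus tool is used, the url, port and dbname values are combined into the connection used by the generated command, and the user value is used for the authentication step, as detailed below.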
WITH STANDARD AUTH option
When considering an unload task relative to a standard Db2 destination, a remote Db2 destination, or a Db2 Warehouse destination based on the usage of the CLPPlus tool, the WITH STANDARD AUTH option affects the content of the file specified with the LOADFILE clause.
If a LOADDEST clause has been specified with a WITH STANDARD AUTH option and a LOADFILE clause has been specified too, an authentication step is added to the file generated.
This authentication step consists of establishing a connection to the Db2 database concerned. When the generated command file is executed, the user is prompted for a password, and the user involved must have the appropriate permissions to perform the expected Db2 Load command. After the Db2 Load command execution, a disconnection from the Db2 database is performed.
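For instance, for a standard Db2 destination, the generated command file typically has a shape similar to the following sketch; the database, user, table and file names are placeholders, and the exact statements produced by Optim High Performance Unload may differ:

db2 CONNECT TO SAMPLE USER db2inst1
db2 "LOAD FROM /tmp/outfile.del OF DEL INSERT INTO MYSCHEMA.MYTABLE"
db2 CONNECT RESET

Because no password is given on the CONNECT statement, the password is prompted for when the file is executed, which is why the user involved must hold the permissions required by the Db2 Load command.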
Depending on the nature of the destination, this authentication step takes the following information into account:
  • for a Db2 destination:
    • if an alias has been specified:
      • a database name is considered: when it is configured, it is taken from the dbname parameter in the associated section in the 'db2hpu.dest' configuration file. Otherwise, it is taken from the Optim High Performance Unload control file (GLOBAL block) or the -d command line option.
      • a user name is considered: when it is configured, it is taken from the user parameter in the associated section in the 'db2hpu.dest' configuration file. Otherwise, the user who launched Optim High Performance Unload is taken.
    • if no alias has been specified:
      • a database name is considered: it is taken from the Optim High Performance Unload control file (GLOBAL block) or the -d command line option.
      • a user name is considered: when it is configured, it is taken from the 'db2hpu.creds' file within a local section for the current Db2 instance. Otherwise, the user who launched Optim High Performance Unload is taken.
  • for a remote Db2 destination:
    • a database name is considered: it is taken from the dbname parameter in the associated section in the 'db2hpu.dest' configuration file.
    • a user name is considered: when it is configured, it is taken from the user parameter in the associated section in the 'db2hpu.dest' configuration file. Otherwise, the user who launched Optim High Performance Unload is taken.
  • for a Db2 Warehouse destination:
    • a user name is considered: it is taken from the user parameter in the associated section in the 'db2hpu.dest' configuration file.
    • if the upload command is based on the CLPPlus tool usage, a URL, a port number and a database name are considered: they are taken respectively from the url, port and dbname parameters in the associated section in the 'db2hpu.dest' configuration file.
When considering an unload task relative to a Db2 Warehouse destination based on the usage of the cURL tool, the WITH STANDARD AUTH option affects the content of the file specified with the LOADFILE clause. If a LOADDEST clause has been specified with a WITH STANDARD AUTH option and a LOADFILE clause has been specified too, a preliminary step is added for a hidden (obfuscated) prompting of the password.
When performing an automatic migration to a standard or a remote Db2 database, it is not mandatory to specify the authentication option, but it is internally set by default. When performing an automatic migration to a Db2 Warehouse destination, it is mandatory to specify a standard authentication method: having an authentication method in such a migration case is mandatory for security reasons. Creating appropriate credentials for the destination considered is also mandatory.

NOSQL_DB option

This option specifies that the destination is a NoSQL environment. The NOSQL_DB option is only consistent with the JSON output format.

The supported NoSQL databases are Cloudant, CouchDB and MongoDB. They can be specified with the CLOUDANT, COUCHDB and MONGODB keywords, respectively. All the information about these NoSQL databases is available on their respective websites.
When the NOSQL_DB option is specified in the LOADDEST clause, the JSON output files generated are consistent with the NoSQL destination chosen, and contain the additional appropriate header and footer. If an upload command has to be prepared too, its nature is also consistent with this same destination: for a NoSQL destination, the command in question is based on an upload tool that depends on the effective destination chosen.
For a NoSQL destination, when an upload command is written into a file specified through the LOADFILE clause, such a file contains:
  • an optional authentication step, if a WITH STANDARD/KERBEROS AUTH option has been specified in the LOADDEST clause
  • the upload command itself

When an upload command has to be prepared for a NoSQL destination, its generation is based on various parameters whose values must be set in the 'db2hpu.dest' configuration file, in its section for the NoSQL destination.

"alias" option
This option is optional. Its purpose is to support several sections relative to a given destination within the configuration file for destinations; the alias is the means of distinguishing them. It must correspond to the alias set in a section relative to the destination type considered in the 'db2hpu.dest' file, and its value is case sensitive. When such an option is specified, Optim High Performance Unload searches the 'db2hpu.dest' configuration file for a section corresponding to the destination chosen and containing an 'alias' parameter set to the alias considered.
If this option is not specified, the first section corresponding to the destination chosen found in the 'db2hpu.dest' configuration file is taken into account by the Optim High Performance Unload task.
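As a minimal sketch, a NoSQL destination is typically selected with a clause assembled from the keywords described in this section, for example:

LOADDEST(NOSQL_DB MONGODB WITH STANDARD AUTH)

The exact keyword order is given by the syntax diagram; with such a clause, the JSON files generated carry the header and footer expected by the MongoDB destination, and the corresponding upload command is built from the parameters set in the matching NoSQL section of the 'db2hpu.dest' configuration file.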

HADOOP option

This option specifies that the destination is a Hadoop environment. The HADOOP option is consistent with the following output formats: DEL, DELIMITED, JSON and XML.

Three Hadoop destinations are supported through explicit keywords: HBase, Hive and HDFS, which can be chosen with the HBASE, HIVE and HDFS keywords, respectively. It is also possible to target any existing Hadoop destination (not only these three) by using a MapReduce program of your choice associated with the destination to be considered; this is specified through the MAPREDUCE keyword.

In order to work with a Hadoop destination, a Hadoop environment must be installed and working properly on the machine where the upload commands towards this destination are to be launched. Some packages that are not always installed by default with a Hadoop environment might be needed, depending on the intended usage. It is necessary to specify a LOADDEST clause with the HADOOP option when an upload command has to be prepared for a Hadoop destination; otherwise, specifying the LOADDEST clause with a Hadoop destination is useless, because it has no effect at all on the content of the output files generated.
When an upload command is prepared, its nature is consistent with the destination chosen. For a Hadoop destination, the command in question is based on an upload tool that depends on the effective destination keyword chosen:
  • HBASE: the tool involved is hadoop or pig
  • HIVE: the tool involved is beeline (compatible with the Hive 0.14 version and greater)
  • HDFS: the tool involved is hdfs
  • MAPREDUCE: the tool involved is hadoop
For a Hadoop destination, when an upload command is written into a file specified through the LOADFILE clause, such a file contains (a sketch of the HDFS case follows this list):
  • an optional authentication step, if a WITH KERBEROS AUTH option has been specified in the LOADDEST clause,
  • if the LOADDEST clause is specified with a destination keyword other than HDFS, a step copying the data file to be uploaded to a temporary file located on the associated HDFS file system,
  • the upload command itself,
  • if the LOADDEST clause is specified with the destination keyword HBASE or MAPREDUCE, a step removing the temporary file created in the previous copying step (this step does not exist when the HIVE destination keyword is specified, because the temporary file is automatically removed by the upload command executed in this case).
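For the HDFS case mentioned in the list above, a minimal sketch of such a generated file could look like the following; the paths are placeholders, the hdfs command shown is the standard Hadoop file system client, and the exact commands produced by Optim High Performance Unload may differ:

hdfs dfs -put /tmp/outfile /user/hpu/outfile

For an HBASE or MAPREDUCE destination, the copying and removal steps described above would typically rely on similar 'hdfs dfs -put' and 'hdfs dfs -rm' commands, although the exact invocations generated are not detailed here.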

When an upload command has to be prepared for a Hadoop destination, its generation is based on various parameters whose values must be set in the 'db2hpu.dest' configuration file, in its section for the Hadoop destination.

When considering the HBase destination, if an upload command has to be generated, it is mandatory to specify an INTO TABLE(S) clause with its WITH COLUMNS option. In order to upload data to an HBase table, each column of the source table concerned must be associated with a column name and a column family defined within the target HBase table, a column family being a group of columns of an HBase table. These associations are specified through the WITH COLUMNS option in question. It must contain the specification of a default column family. If needed, it can also contain a list of optional associations, each association relating a column of the source table concerned to a column (and its column family) within the target HBase table. Any column of the source table that is not explicitly associated with a column family is associated with a column of the same name in the default column family. A source column name specified at this level must be one of the columns handled by the task involved for the associated table.

When considering the Hive destination, if the output format involved is DELIMITED, pay attention to the format chosen for the DATE and TIMESTAMP data types: the DATE_C option must be used for formatting DATE columns, and the TIMESTAMP_G option must be used for formatting TIMESTAMP columns.

"alias" option
This option is optional. Its purpose is to support several sections relative to a given destination within the configuration file for destinations; the alias is the means of distinguishing them. It must correspond to the alias set in a section relative to the destination type considered in the 'db2hpu.dest' file, and its value is case sensitive. When such an option is specified, Optim High Performance Unload searches the 'db2hpu.dest' configuration file for a section corresponding to the destination chosen and containing an 'alias' parameter set to the alias considered.
If this option is not specified, the first section corresponding to the destination chosen found in the 'db2hpu.dest' configuration file is taken into account by the Optim High Performance Unload task.

OBJECT_STORAGE option

This option specifies that the destination is an Object Storage environment. The supported destinations are Amazon EC2, Amazon S3, OpenStack Swift, Microsoft Azure and a remote file system. They can be specified with the AWS_EC2, AWS_S3, SWIFT, AZURE and FILESYSTEM keywords, respectively.

AWS_EC2 option
The AWS_EC2 option is consistent with the following output formats: DEL, DELIMITED, JSON and XML.
All the information about this Amazon web service is available here: https://aws.amazon.com/ec2
If an upload command specified through the usage of a LOADFILE clause has to be generated, or if an automatic migration is performed, the tool involved is scp. If the upload command is executed on a Windows platform, the appropriate package must be installed so that the scp command works properly (such a binary is included in OpenSSH).
When an upload command has to be prepared for an Amazon EC2 destination, its generation is based on various parameters whose values must be set in the 'db2hpu.dest' configuration file, in its section for the Amazon EC2 destination considered.
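As an illustration, such an scp-based upload towards an Amazon EC2 instance generally has a shape similar to the following; the key file, user, host and paths are placeholders, and the exact command generated by Optim High Performance Unload may differ:

scp -i /home/hpu/.ssh/ec2-key.pem /tmp/outfile.del ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com:/data/outfile.del

On Windows, this requires an OpenSSH package providing the scp binary, as noted above.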
AWS_S3 option
The AWS_S3 option is consistent with the following output formats: DEL, DELIMITED, JSON and XML.
All the information about this Amazon web service is available here: https://aws.amazon.com/s3
If an upload command specified through the usage of a LOADFILE clause has to be generated, or if an automatic migration is performed, two tools can be used.
When an upload command has to be prepared for an Amazon S3 destination, its generation is based on various parameters whose values must be set in the 'db2hpu.dest' configuration file, in its section for the Amazon S3 destination considered.
SWIFT option
The SWIFT option is consistent with the following output formats: DEL, DELIMITED, JSON and XML.
The supported Swift environments are the IBM Bluemix® ones in the Cloud or the OpenStack Swift ones.
All the information about cloud Swift powered by IBM Bluemix is available here: https://www.ibm.com/cloud/swift
All the information about local Swift powered by OpenStack is available here: https://docs.openstack.org/swift/latest/
If an upload command specified through the usage of a LOADFILE clause has to be generated, or if an automatic migration is performed, the Swift client tool must be installed; it requires a Python environment on the machine concerned. To install it, type the following command:
pip install python-swiftclient
This command is the same for both Unix and Windows platforms.
When an upload command has to be prepared for a Swift destination, its generation is based on various parameters whose values must be set either in the 'db2hpu.dest' configuration file, in its section for the Swift destination considered, or through various Swift environment variables.
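As an example, once the Swift client is installed, an upload typically relies on its 'swift upload' subcommand; the container name and file below are placeholders, the authentication details are expected to come either from the Swift destination section of the 'db2hpu.dest' file or from the Swift environment variables mentioned above, and the exact command generated by Optim High Performance Unload may differ:

swift upload mycontainer /tmp/outfile.del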
AZURE option
The AZURE option is consistent with the following output formats: DEL, DELIMITED, JSON and XML.
All the information about the Microsoft Azure environment is available here: https://azure.microsoft.com/.
If an upload command specified through the usage of a LOADFILE clause has to be generated, or if an automatic migration is performed, two tools can be used.
When an upload command has to be prepared for a Microsoft Azure destination, its generation is based on various parameters whose values must be set in the 'db2hpu.dest' configuration file, in its section for the Microsoft Azure destination considered.
FILESYSTEM option
This option specifies that the destination is a file system environment to which files can be uploaded with the scp command. The FILESYSTEM option is consistent with the following output formats: DEL, DELIMITED, JSON and XML.
If an upload command specified through the use of a LOADFILE clause has to be generated, or if an automatic migration is performed, the command involved is scp. If the upload command is executed on a Windows platform, the appropriate package must be installed so that the scp command works properly; it is included in OpenSSH.
When an upload command has to be prepared for a file system destination, its generation is based on various parameters whose values must be set in the 'db2hpu.dest' configuration file, in its section for the file system destination considered.
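As an illustration, such an scp-based upload command generally looks like the following; the user, host and paths are placeholders, the values actually used being derived from the file system destination section of the 'db2hpu.dest' configuration file, and the exact command generated may differ:

scp /tmp/outfile.del hpuuser@backup.example.com:/data/unloads/outfile.del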
"alias" option
This option is optional. Its purpose is to support several sections relative to a given destination within the configuration file for destinations; the alias is the means of distinguishing them. It must correspond to the alias set in a section relative to the destination type considered in the 'db2hpu.dest' file, and its value is case sensitive. When such an option is specified, Optim High Performance Unload searches the 'db2hpu.dest' configuration file for a section corresponding to the destination chosen and containing an 'alias' parameter set to the alias considered.
If this option is not specified, the first section corresponding to the destination chosen found in the 'db2hpu.dest' configuration file is taken into account by the Optim High Performance Unload task.

POSTGRESQL option

This option specifies that the destination is a PostgreSQL environment. The POSTGRESQL option is consistent with the DEL and DELIMITED output formats. All the information about PostgreSQL is available here: https://www.postgresql.org/

If an upload command specified through the usage of a LOADFILE clause has to be generated, or if an automatic migration is performed, the psql PostgreSQL interactive terminal must be installed as part of a complete PostgreSQL package. The way to install it depends on the platform used: https://www.postgresql.org/download/.

When an upload command is written into a file specified through the LOADFILE clause, such a file contains:
  • an optional authentication step, if a WITH STANDARD AUTH option has been specified in the LOADDEST clause
  • the upload command itself

When an upload command has to be prepared for a PostgreSQL destination, its generation is based on various parameters whose values must be set in the 'db2hpu.dest' configuration file, in its section for the PostgreSQL destination considered.
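As a minimal sketch, an upload through the psql interactive terminal can rely on its \copy meta-command; the host, port, user, database, table and file names below are placeholders, and the exact command generated by Optim High Performance Unload may differ:

psql -h pghost.example.com -p 5432 -U hpuuser -d targetdb -c "\copy myschema.mytable FROM '/tmp/outfile.del' WITH (FORMAT csv)"

When the WITH STANDARD AUTH option is specified, the file generated also contains the preliminary authentication step mentioned above.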

"alias" option
This option is optional. Its purpose is to support several sections relative to a given destination within the configuration file for destinations; the alias is the means of distinguishing them. It must correspond to the alias set in a section relative to the destination type considered in the 'db2hpu.dest' file, and its value is case sensitive. When such an option is specified, Optim High Performance Unload searches the 'db2hpu.dest' configuration file for a section corresponding to the destination chosen and containing an 'alias' parameter set to the alias considered.
If this option is not specified, the first section corresponding to the destination chosen found in the 'db2hpu.dest' configuration file is taken into account by the Optim High Performance Unload task.

WITH STANDARD/KERBEROS AUTH option for the NOSQL_DB, HADOOP, SWIFT and POSTGRESQL destinations

The WITH STANDARD/KERBEROS AUTH option specifies the authentication method to be applied for an upload of data towards a given destination.

Usually, access to a Big Data environment is protected by an authentication mechanism. As a result, in order to succeed in uploading data to it, an appropriate authentication step is needed. The WITH STANDARD/KERBEROS AUTH option offers the ability to refer to the authentication mechanism involved. Two types of methods are supported, through the two following keywords:
  • the STANDARD keyword: it refers to a standard method, based on the usage of a traditional user and password combination
  • the KERBEROS keyword: it refers to a Kerberos method, based on the usage of a Kerberos principal

When performing an automatic migration to a NoSQL or Hadoop destination, it is mandatory to specify a standard or Kerberos authentication method. Specifying an authentication method in such a migration case is mandatory for security reasons. When migrating with a standard authentication method to a NoSQL destination, creating appropriate credentials for the destination considered is mandatory too.

When performing an automatic migration to a Swift or PostgreSQL destination, it is mandatory to specify a standard authentication method. Specifying an authentication method in such a migration case is mandatory for security reasons. Creating appropriate credentials for the destination considered is mandatory too.

When generating an upload command for a NoSQL, Hadoop, Swift or PostgreSQL destination, if there is no reference to an authentication method, the upload command is generated without any preliminary step for authentication.

Constraints

For the NoSQL, Hadoop, Db2 Warehouse and PostgreSQL destinations, there are usage constraints that need to be mentioned:
  • UTF-8 encoding: Optim High Performance Unload must use a UTF-8 encoding when generating files intended to be uploaded to these destinations. For a NoSQL or a Hadoop destination, the reason is that they do not support any encoding other than UTF-8. For a Db2 Warehouse destination, the reason is that the underlying Db2 database is implicitly created with the UTF-8 encoding. Consequently, in order to avoid problems, if an encoding specification different from UTF-8 is made in a control file for a NoSQL, Hadoop or Db2 Warehouse destination, it is ignored and internally forced to UTF-8.
  • When considering a Hive destination, most of the date and timestamp formats are inconsistent with this destination's date and timestamp data types. Here is the list of formats for these data types existing in Optim High Performance Unload and supported by the Hive destination:
    • The only supported date data type format is DATE_C.
    • The supported timestamp data type formats are TIMESTAMP_A and TIMESTAMP_G. If the TIMESTAMP_A format is considered, its time and timestamp separators must be changed in order to be consistent with a Hive destination. In this case, the time separator must be specified with the TIMEDELIM clause set to the ':' value, and the timestamp separator must be specified with the TIMESTAMPDELIM clause set to the space character for its first value, its second value being ignored.

    In order to change such a format, the output format for the output file must be set to DELIMITED.

  • When considering a Db2 Warehouse destination with the usage of the CLPPlus tool, most of the date, time and timestamp formats are inconsistent with this tool. Here is the list of formats for these data types existing in Optim High Performance Unload and supported by this tool:
    • The only supported date data type format is DATE_C.
    • The supported time data type formats are TIME_A and TIME_B. The default separator for these formats is the dot character. The colon character is also supported as a separator for time values by Db2 Warehouse; its usage can be specified with the TIMEDELIM clause set to the ':' value.
    • The supported timestamp data type formats are TIMESTAMP_A, TIMESTAMP_B and TIMESTAMP_G.

    In order to change such a format, the output format for the output file must be set to DELIMITED.

  • When considering a PostgreSQL destination, most of the date, time and timestamp formats are inconsistent with this destination's date, time and timestamp data types. Here is the list of formats for these data types existing in Optim High Performance Unload and supported by the PostgreSQL destination:
    • The only supported date data type format is DATE_C.
    • The only supported time data type format is TIME_F.
    • The only supported timestamp data type format is TIMESTAMP_G.
    In order to change such a format, the output format for the output file must be set to DELIMITED.
  • Naming convention applied to files containing upload commands generated on Windows platforms: on these platforms, the name of a file to be executed must end with a “.bat” suffix for the execution to succeed. When generating an upload command file on a Windows platform, if the name specified for it through the LOADFILE clause does not end with such a “.bat” suffix, this suffix is automatically added to the name used when creating the corresponding upload command file.
  • Limitations relative to the usage of a user-defined MapReduce program: to use a MapReduce program of your own for the generation of an upload command, specify a LOADDEST clause with its MAPREDUCE keyword and set the associated 'command' parameter within the db2hpu.dest configuration file, in its section referring to a MapReduce destination. The string to be specified for this parameter must follow a specific pattern. It must:
    • start with the name of the MapReduce program considered, which must be packaged as a jar file
    • continue with an optional part containing potential options to be given to the MapReduce program
    • be followed by the specification of the data file to be uploaded, which is added automatically when the associated upload command is generated
Here is a concrete example illustrating this usage. Considering the following LOADDEST clause:
LOADDEST(HADOOP MAPREDUCE)
with an associated destination section into the db2hpu.dest configuration file like the following one:

 [MapReduce]
  command=/tmp/myMR.jar --input
  hdfspath=/tmp/
       
where:
  1. the MapReduce program is called myMR.jar
  2. the --input option is an option of the MapReduce program expecting a subsequent specification of the data file to be handled
with an OUTFILE clause like the following one:
OUTFILE("outfile")
The upload command generated for such a case would look like:
hadoop jar /tmp/myMR.jar --input /tmp/outfile