What's new in Optim High Performance Unload
This topic summarizes the technical changes for this edition.
Optim™ High Performance Unload V12.1 GA includes the following new features:
- Support for Db2® V12.1
- Optim High Performance Unload now supports Db2 V12.1 databases.
- Support for Db2 backups stored in Cloud Object Storage
- Optim High Performance Unload can unload data from Db2 backups that are stored in Cloud Object Storage.
- Support for IBM Cloud Object Storage as a destination
- By specifying the LOADDEST clause appropriately, Optim High Performance Unload can prepare data for a later upload to, or migrate data directly to, an IBM Cloud Object Storage destination.
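A control file can point the unloaded output at an object-storage destination through LOADDEST. The following is a hypothetical sketch only: the UNLOAD block follows the usual control-file shape, but the argument form that LOADDEST accepts for an object-storage target is an assumption; see the LOADDEST clause reference for the supported syntax.

```
-- Hypothetical sketch; the LOADDEST argument form is an assumption
GLOBAL CONNECT TO SAMPLE;
UNLOAD TABLESPACE
  SELECT * FROM APPUSER.ORDERS;
  OUTFILE("/tmp/orders.del")
  FORMAT DEL
  LOADDEST("s3://my-bucket/orders/")
;
```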
- Support for S3-compatible environments other than Amazon S3 as a destination
- By specifying the LOADDEST clause appropriately, Optim High Performance Unload can prepare data for a later upload to, or migrate data directly to, an S3-compatible destination other than Amazon S3.
- Support for all S3-compatible destinations for external tables that are based on a file in an S3 environment
- Optim High Performance Unload can work with Db2 external tables whose files are located in an S3-compatible environment other than Amazon S3.
- Support for Parquet as an output format
- Optim High Performance Unload can generate Parquet files by using the PARQUET output format.
- Support for ORC as an output format
- Optim High Performance Unload can generate ORC files by using the ORC output format.
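Selecting one of the new formats would presumably mirror the way existing formats such as DEL are requested. This sketch assumes that the PARQUET keyword takes the place of the format name in the FORMAT clause; verify the exact placement against the control-file reference.

```
-- Hypothetical sketch; FORMAT PARQUET placement is assumed to mirror FORMAT DEL
GLOBAL CONNECT TO SAMPLE;
UNLOAD TABLESPACE
  SELECT * FROM APPUSER.EVENTS;
  OUTFILE("/tmp/events.parquet")
  FORMAT PARQUET
;
```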
- Ability to monitor the progress of a running task
- Optim High Performance Unload can be started with the --monitor command-line option to monitor the progress of a separate, already running task.
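Assuming the command name db2hpu from earlier releases, a monitoring invocation might look like the following; whether the option takes an identifier for the task to observe is an assumption not stated here.

```
# Hypothetical invocation; any additional arguments are assumptions
db2hpu --monitor
```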
- Ability to compress data that is sent over the network to a remote file
- Setting the network_compression configuration parameter enables compression of the data that Optim High Performance Unload sends over the network to be written into a file on a different machine.
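In the configuration file (db2hpu.cfg in earlier releases), enabling the feature would presumably be a single keyword entry; the accepted value form (for example yes/no versus true/false) is an assumption.

```
# Hypothetical entry in the HPU configuration file; the value form is assumed
network_compression=yes
```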
- Improved security for the secret key of an S3 environment
- The secret key of an S3 environment can be stored as a credential in the Optim High Performance Unload credentials configuration file.
- Improved security for the account key of an Azure environment
- The account key of an Azure environment can be stored as a credential in the Optim High Performance Unload credentials configuration file.
- Use of pipes for data migration to an S3 environment through the AWS CLI tool
- Optim High Performance Unload can use pipes when it migrates data to an S3 environment through the AWS CLI tool.
- Ability to select the range partitions of a range-partitioned table by sequence number
- The range partitions that an Optim High Performance Unload task processes can be selected by their sequence number, either with the DATAPARTITION NUM clause in the control file or with the %{datapartition_num} template keyword in the OUTFILE clause.
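Combining the two mechanisms, a control file might select partitions by number and keep the partition number in each output file name. This is a hypothetical sketch: DATAPARTITION NUM and %{datapartition_num} are the documented keywords, but the clause placement and the argument form of the partition list are assumptions.

```
-- Hypothetical sketch; clause placement and argument form are assumptions
GLOBAL CONNECT TO SAMPLE;
UNLOAD TABLESPACE
  SELECT * FROM APPUSER.SALES;
  DATAPARTITION NUM(1, 3)
  OUTFILE("/tmp/sales_part%{datapartition_num}.del")
  FORMAT DEL
;
```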
- Support for Db2 environments whose authentication is configured with Kerberos
- Optim High Performance Unload can be configured to connect to a Db2 environment that uses Kerberos authentication by setting the appropriate parameters in its configuration file.