General

How does Lift CLI migrate my data to IBM Db2 Warehouse on Cloud?

If you're migrating data from your IBM PureData System for Analytics (Netezza) database, you first extract a database table locally to a CSV file using “lift extract.” Then, you transfer the CSV data file to the IBM Db2 Warehouse on Cloud landing zone using “lift put.” The landing zone is a pre-allocated volume used for data loading and scratch space. Next, you load the uploaded CSV data file into the engine using “lift load.” Finally, once the load is complete, you can delete the data file using “lift rm.”
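
For example, an end-to-end migration of one table might look like the following sketch. The table and file names, and the option names shown, are illustrative assumptions rather than exact syntax; run “lift help <command>” for the options that your version supports.

lift extract --source-schema ADMIN --source-table MYTABLE --file mytable.csv <connection options>
lift put --file mytable.csv <connection options>
lift load --filename mytable.csv --target-schema ADMIN --target-table MYTABLE <connection options>
lift rm --filename mytable.csv <connection options>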

If you're migrating a set of CSV files, you follow similar steps, minus the extraction. You start by transferring your CSV data files to the Db2 Warehouse on Cloud landing zone using “lift put.” Then you load each uploaded CSV data file into the engine using “lift load.” Once the load is complete, you can delete the data files using “lift rm.”

Is there a maximum database size that I can migrate?

No, you can migrate a database of any size. Keep in mind, however, that the duration of your database migration depends on your network connection speed, the volume of uncompressed data that you need to move, and the hardware profiles of your source and target machines. In other words, mileage may vary.

What database artifacts can I migrate?

The Lift CLI migrates your tables or CSV files to an IBM Cloud data target. If you need to migrate other database artifacts, such as views and stored procedures, use the IBM Database Conversion Workbench.

Where should I install my CLI package for best performance?

We recommend that you install and run the CLI on a machine that is network-close (minimal latency) to your database source. This ensures that your data is extracted and staged faster in your on-premises environment, thus improving your overall end-to-end data migration time.

What network ports must be open on my on-premises system?

The following ports must be opened on the machine running the Lift CLI:

Purpose                                   Protocol   Direction   Destination   Port
Aspera Transfer                           TCP        OUTBOUND    INTERNET      33001*
Aspera Transfer                           UDP        OUTBOUND    INTERNET      33001*
Db2 Warehouse on Cloud SSL-secured JDBC   TCP        OUTBOUND    INTERNET      50001
Db2 Warehouse on Cloud REST Load API      TCP        OUTBOUND    INTERNET      8443
Lift Core Services                        TCP        OUTBOUND    INTERNET      443

*There will be incoming return traffic on port 33001 when the Lift CLI initiates the OUTBOUND connection toward the Db2 Warehouse on Cloud cluster. The local port, randomly chosen by the operating system, will be in the ephemeral port range. Because all modern firewalls are stateful (connection-aware or state-aware), no INBOUND ports should need to be opened. For Aspera transfer firewall considerations, see: https://support.asperasoft.com/hc/en-us/articles/216127518-Firewall-Considerations

What are the on-premises storage requirements to extract my data?

For Linux and MacOS, the minimum free storage should be at least equal to the uncompressed on-disk size of your largest table.

For Windows, the minimum free storage should be at least twice the uncompressed on-disk size of your largest table.

What if there isn’t enough disk space on the Db2 Warehouse on Cloud for Analytics landing zone to fit my largest table?

You can run “lift df” to check the available disk space on the Db2 Warehouse on Cloud for Analytics landing zone, and you can free up space by running “lift rm.” If you still don't have enough space, you can split your table into multiple file chunks and upload those individually by using the “lift extract --size” option.
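
As a rough sketch, the space-management flow might look like the following; the file names and the size placeholder are hypothetical, and the exact option syntax may differ in your version.

lift df <connection options>
lift rm --filename previous_load.csv <connection options>
lift extract --size <max file size> --source-table MYTABLE --file mytable.csv <connection options>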

I love that Lift is so fast, but can I slow down my upload if I’m on a bandwidth-limited network?

Sure. You can use the “lift put --max-throughput” option to limit the throughput utilized by the data transfer.
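
For example (the file name and rate-limit value are illustrative; check “lift help put” for the exact form):

lift put --file mytable.csv --max-throughput <rate limit> <connection options>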

I don’t want to keep typing in my database credentials and other common options. Can I set these as environment variables or store them someplace to reference?

Yes. You can set connection credentials as environment variables. You can also create a properties file and place your database credentials and common options there. Take a look at “lift help <command>” to see the list of options that the Lift CLI supports.
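
For example, a properties file might look like the following sketch. The property names shown are illustrative assumptions; check “lift help <command>” for the names that your version accepts and for the option used to point the CLI at the file.

source-user=dbuser
source-password=mysecret
target-user=bluadmin
target-password=mysecret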

Can I use IBM Lift CLI for data regulated under HIPAA?

IBM Lift CLI may be used to process Protected Health Information regulated under HIPAA if Client, as the data controller, determines that the technical and organizational security measures are appropriate to the risks presented by the processing and the nature of the data to be protected. IBM Lift CLI is not designed to process data to which additional regulatory requirements apply.

How do I configure the Lift CLI to work with an HTTP proxy?

To add an HTTP proxy configuration, create an environment configuration file called lift.environment in the Lift CLI installation bin directory (<Lift CLI install dir>/bin) with the following contents:

For the proxy host, use proxy.host=<hostname>. For the proxy port, use proxy.port=<port number>. Both must be specified for the settings to take effect. If the proxy requires authentication, provide proxy.user=<user> and proxy.password=<password>; the Lift CLI then uses basic authentication in the CONNECT request. Both must be specified for the authentication settings to take effect.

Example of <Lift CLI install dir>/bin/lift.environment contents (each property is on a new line):

proxy.host=myproxy.host

proxy.port=3029

proxy.user=myproxyuser

proxy.password=proxypassword

How can I add a new X509 certificate to the list of trusted certificates for the Lift CLI?

To add an X509 certificate to be imported into the trust store, create an environment properties file called lift.environment in the Lift CLI installation bin directory (<Lift CLI install dir>/bin) with the following contents.

For the certificate path, use proxy.certificate.path=<fully qualified path to the X509 certificate file>.
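
For example (the path shown is hypothetical):

proxy.certificate.path=/opt/certs/myproxy-ca.pem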

The certificate is added with the alias ibm-lift-imported-proxy-cert. If the alias already exists, the file is not imported and must be manually removed before running the CLI again. You can remove the certificate by using the Java keytool on the Lift CLI Java trust store in <Lift CLI install dir>/jre/lib/security/cacerts (for example, keytool -delete -alias ibm-lift-imported-proxy-cert -keystore <Lift CLI install dir>/jre/lib/security/cacerts -storepass changeit).

PDA > Db2 Warehouse on Cloud

Can the Lift CLI be installed directly on PureData System for Analytics?

Yes, the Lift CLI can be installed on the PureData System for Analytics, but additional storage must be attached to provide sufficient staging disk space for the extracted data.

What are the best practices for attaching and configuring additional storage for PureData System for Analytics?

The following tech notes provide steps to mount SAN/NFS storage on PureData System for Analytics. They also include best practices for attaching and configuring additional storage.

1. Adding SAN Storage to PureData Systems for Analytics: http://www.ibm.com/support/docview.wss?uid=swg21700900

2. IBM PureData System for Analytics Mounting NFS on the appliance: http://www.ibm.com/support/docview.wss?qid=63d511603b6e6fb3a0216ea69fc0d500&uid=swg21568933

3. Mounting NFS filesystem on PureData for Analytics systems: http://www.ibm.com/support/docview.wss?qid=a9cf4df18f89517d8338b99a8743f522&uid=swg21971589

On what part of my on-premises system should I install the Lift CLI if I’m using IBM PureData System for Analytics sources?

When you are installing the Lift CLI for PureData System for Analytics sources, install it on your "injection" system (the system that you use to stage data for loading into the PureData System for Analytics database). That system will have good connectivity to the PureData System for Analytics and plenty of disk space for staging data. If your injection system is already fully loaded, install the Lift CLI on a system with similar connectivity and significant staging disk space.

Db2 > Db2 Warehouse on Cloud, Db2 on Cloud

Where should I install the Lift CLI for the best throughput if I’m using IBM PureData System for Analytics as my source?

We strongly recommend that you install the Lift CLI on a Linux machine. On Linux, data extraction from PureData System for Analytics sources uses high-speed unload facilities, which yields significantly better overall throughput.

Where should I install the Lift CLI for the best throughput if I’m using IBM Db2 for Linux, UNIX, and Windows as my source?

We strongly recommend that you install the Lift CLI on a Linux machine with a Db2 client installed. Installing the Lift CLI on a Linux machine with the Db2 client significantly improves overall throughput. If the Lift CLI fails to detect a Db2 client, it falls back to a different extract strategy, and you may notice reduced extract throughput. In that case, the following message is printed to the console: "Lift is extracting data at sub-light speeds. You can improve extraction time by installing and configuring the Db2 client. For more information, visit http://ibm.biz/BdZcFp”

What are the Db2 client requirements to extract my data from IBM Db2 for Linux, UNIX, and Windows with better performance?

The following are prerequisites for the Lift CLI to use the Db2 client:

1. The db2 (for Linux) or db2cmd (for Windows) command must be available in PATH.

  • Linux: <INSTANCE_OWNER_HOME>/sqllib/db2profile must be applied to the environment before executing the Lift CLI (see the example after this list)
  • Windows: The Db2 client must be set as the default instance

2. The OS user must be included in the SYSADM group of the Db2 client instance if the Lift CLI is run remotely from the IBM Db2 for Linux, UNIX, and Windows server.

3. The version of the Db2 client must be the same as or higher than the version of the IBM Db2 for Linux, UNIX, and Windows server.
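
For example, on Linux you might apply the instance environment and then run the CLI as follows; the instance owner home directory shown is a hypothetical example.

. /home/db2inst1/sqllib/db2profile
lift extract <options>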

Can I migrate tables that have hidden columns from a Db2 source database?

Yes. Extracting hidden columns is supported by the Lift CLI. By default, hidden columns are not included in the extracted CSV file. If you want to include hidden columns in the extracted data, explicitly specify all column names, including the hidden column names, by using the column selection option. Refer to “lift extract --help” for more information on the column selection option.
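
For example (the option name and column list below are hypothetical; see “lift extract --help” for the actual column selection option):

lift extract --columns "COL1,COL2,HIDDEN_COL1" --source-table MYTABLE --file mytable.csv <connection options>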

Oracle > Db2 Warehouse on Cloud, Db2 on Cloud

What are the Oracle client requirements to extract my data from an Oracle Server with better performance?

The following are prerequisites for the Lift CLI to use the Oracle client:

1. Install the basic and tools modules of the Oracle client.

2. Ensure the exp program path is added to the PATH environment variable.

3. Depending on your environment, you might need to add the Oracle client library path to the operating system library path (for example, LD_LIBRARY_PATH).

4. The Oracle client version must be the same or later than the Oracle server version.

Note: You do not need to pre-configure the source database connection from the Oracle client.
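
For example, on Linux you might prepare the environment as follows before running the Lift CLI; the Oracle client installation paths shown are hypothetical examples.

export PATH=/opt/oracle/client/bin:$PATH
export LD_LIBRARY_PATH=/opt/oracle/client/lib:$LD_LIBRARY_PATH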

Can I migrate a table that has international characters that include a multi-byte character set?

Yes. Lift CLI uses a UTF-8 code page and supports data conversion from commonly used Oracle character sets.

The following is a list of supported Oracle Database Character Sets (NLS_CHARACTERSET):

AL32UTF8, AR8ISO8859P6, AR8MSWIN1256, BLT8ISO8859P13, BLT8MSWIN1257, CL8ISO8859P5, CL8MSWIN1251, EE8ISO8859P2, EE8MSWIN1250, EL8ISO8859P7, EL8MSWIN1253, IW8ISO8859P8, IW8MSWIN1255, JA16EUC, JA16EUCTILDE, JA16SJIS, JA16SJISTILDE, KO16MSWIN949, NEE8ISO8859P4, TH8TISASCII, TR8MSWIN1254, VN8MSWIN1258, WE8ISO8859P15, WE8ISO8859P9, WE8MSWIN1252, ZHS16GBK, ZHT16HKSCS, ZHT16MSWIN950, ZHT32EUC, AL16UTF16

You can determine the character set of the source Oracle database by using the following SQL query:

SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'NLS_CHARACTERSET';

The Oracle client is configured on my Lift CLI machine, but Lift CLI displays a message like “Lift is extracting data at sub-light speeds. The source table contains data types that reduce the speed at which Lift can extract data.” Why can’t I extract data at faster speeds?

Lift CLI uses a degraded mode if the source table has any of the following data types:

BINARY_FLOAT, BINARY_DOUBLE, LOB, BLOB, CLOB, BFILE, XML, ROWID, UROWID

Each table is evaluated separately; the degraded mode applies only to the extraction of tables that contain the affected data types.
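
To check whether a table contains any of these data types before extracting it, you can query the Oracle catalog. For example (the table name is illustrative):

SELECT COLUMN_NAME, DATA_TYPE FROM ALL_TAB_COLUMNS WHERE TABLE_NAME = 'MYTABLE';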

MS SQL Server > Db2 Warehouse on Cloud, Db2 on Cloud

Do I need to set up Microsoft SQL Server Tools or client to use Lift CLI?

No. The Lift CLI doesn't need any additional setup of tools or clients for data migration.
