IBM Lift
Migrate your database from on-premises data centers to the IBM Cloud — quickly, securely and reliably
Download now
What is IBM Lift?

IBM Lift makes it easier to quickly, securely and reliably migrate your database from on-premises data centers to an IBM Cloud® data property, with zero downtime.

How customers use it

Database migration

Take your entire database to the IBM Cloud. It's a two-step process: convert your schema and migrate your data. To convert your schema, start by downloading the IBM Database Conversion Workbench. The workbench walks you through converting your source database DDL so that it is compatible with the target, and produces a report that tells you where action is required. Once your schema is in place, you'll use the Lift CLI to migrate your data.

Incremental data load

Start by generating a set of CSV files that represent your incremental changes, per database table. Use the Lift CLI to scoop up those delimited files, push them over the wire, and import the files into IBM® Db2® Warehouse on Cloud. Throw these steps in a script, set up a cron job, and you've got an ongoing incremental update of your data warehouse.

Database consolidation

You can use the Lift CLI to migrate data from multiple databases or data sources into a single IBM Db2 Warehouse on Cloud MPP cluster. Lift gives you the flexibility to take tables from multiple data sources and import them under a single schema in IBM Db2 Warehouse on Cloud so that you can decommission your existing database cluster.

Data warehousing

Your customers don't care that you need to run analytics on their buying behavior. They just want a snappy user experience. Spin up a cloud data warehouse, such as IBM Db2 Warehouse on Cloud, to run analytics on data from your transactional data store. Keep your reports and dashboards up to date by sending small amounts of data from the source, and always have an up-to-date view of your business.

IBM Lift features
High-speed data movement

Lift uses IBM Aspera® under the covers to move your data to the cloud at blazing fast speeds. Aspera's patented transport technology leverages existing WAN infrastructure and commodity hardware to achieve speeds that are hundreds of times faster than FTP and HTTP.

Quick recovery from common problems

Lift automatically recovers from common problems you may hit during a migration. For example, if your file upload is interrupted mid-transfer, Lift resumes where you left off. File uploads are stable and robust, even over the most bandwidth-constrained networks.

Encryption for data in motion

Nobody wants to end up on the front page of the news. Any data moved over the wire to the IBM Cloud is secured via a 256-bit encrypted connection.

Control of each migration step

Every data migration is split into three steps: extract from source, transport over the wire, and load into target. Our CLI gives you the flexibility to perform these three steps separately so that your data migration works around your schedule, not the other way around.

Built for the cloud

You'll install the Lift CLI only once on your on-premises machine. Under the covers, the CLI works with the Lift Core Services running in the IBM Cloud to help get your data to your Watson Data Platform persistent store. Like any other cloud app, Lift never requires an update. New features are instantly available to you without you having to lift a finger.



Related products
SQL databases IBM Db2 on Cloud

A fully managed SQL cloud database. Easily deploy and scale on demand.

IBM Integrated Analytics System

Do data science faster with IBM Integrated Analytics System, an optimized, cloud-ready data platform that connects data scientists with data.


Get answers to the most commonly asked questions about this product

How do I migrate data from IBM PureData System for Analytics (Netezza)?

If you're migrating data from your IBM PureData System for Analytics (Netezza) database, first extract a database table locally to a CSV file using “lift extract.” Then, transfer the CSV data file to the IBM Db2 Warehouse on Cloud landing zone using “lift put.” The landing zone is a pre-allocated volume used for data loading and scratch space. Finally, load the uploaded CSV data file into the engine using “lift load.” Once the load is complete, you can delete the data file using “lift rm.”
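Put together, the sequence for one table might look like this sketch. The schema, table and file names, and the argument form of each subcommand, are hypothetical; run "lift help &lt;command&gt;" for the real syntax:

```shell
#!/bin/sh
# Hypothetical end-to-end sequence for a single table. The schema/table and
# file names, and the argument forms of each subcommand, are illustrative;
# only the four subcommand names come from the steps described above.
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run lift extract ADMIN.CUSTOMERS customers.csv  # 1. extract the table to a local CSV
run lift put customers.csv                      # 2. transfer the CSV to the landing zone
run lift load customers.csv CUSTOMERS           # 3. load the CSV into the engine
run lift rm customers.csv                       # 4. delete the file from the landing zone
```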

How do I migrate a set of CSV files?

If you're migrating a set of CSV files, the steps are similar. Start by transferring your CSV data files to the Db2 Warehouse on Cloud landing zone using “lift put.” Then load each uploaded CSV data file into the engine using “lift load.” Once the load is complete, you can delete the data files using “lift rm.”

Is there a limit on the size of the database I can migrate?

No, you can migrate a database of any size. But keep in mind that the duration of your migration depends on your network connection speed, the volume of uncompressed data you need to move, and the hardware profiles of your source and target machines. In other words, mileage may vary.

The Lift CLI migrates your tables or CSV files to an IBM Cloud data target. If you need to migrate other database artifacts, such as views or stored procedures, use the IBM Database Conversion Workbench.

Where should I install the Lift CLI?

We recommend that you install and run the CLI on a machine that is network-close (minimal latency) to your database source. This ensures that your data is extracted and staged faster in your on-premises environment, thus improving your overall end-to-end migration time.

The following ports must be opened on the machine running the Lift CLI:

Purpose                                    Protocol   Direction   Destination   Port
Aspera Transfer                            TCP        OUTBOUND    INTERNET      33001*
Aspera Transfer                            UDP        OUTBOUND    INTERNET      33001*
Db2 Warehouse on Cloud SSL-secured JDBC    TCP        OUTBOUND    INTERNET      50001
Db2 Warehouse on Cloud REST Load API       TCP        OUTBOUND    INTERNET      8443
Lift Core Services                         TCP        OUTBOUND    INTERNET

*There will be incoming return traffic once the OUTBOUND connection has been initiated by the Lift CLI toward the Db2 Warehouse on Cloud cluster on port 33001. The local port, chosen at random by the operating system from the ephemeral port range, receives this traffic. Because all modern firewalls are stateful (connection-aware), no INBOUND port should need to be opened.



How much free disk space does the machine running the Lift CLI need?

For Linux and macOS, the minimum free storage is the on-disk size of your largest table (uncompressed).

For Windows, the minimum free storage is at least twice the on-disk size of your largest table (uncompressed).

What if I run out of space in the landing zone?

You can run “lift df” to check the available disk space on the Db2 Warehouse on Cloud landing zone, and free up space by running “lift rm.” If you still don't have enough space, you can split your table into multiple file chunks with the “lift extract --size” option and upload those chunks individually.
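A housekeeping sketch using those commands (file names and the chunk size are hypothetical; the subcommands and the --size option are the ones named above):

```shell
#!/bin/sh
# Hypothetical landing-zone housekeeping before a large upload. File names
# and the chunk size are illustrative; the subcommands and the --size option
# are the ones named above. DRY_RUN=1 prints commands instead of executing.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run lift df                    # check free space on the landing zone
run lift rm old_upload.csv     # remove files from already-completed loads
# If space is still tight, extract the table in smaller chunks and
# upload each chunk individually (chunk size is a hypothetical value):
run lift extract --size 5GB BIGTABLE bigtable_chunk
```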

Can I limit the network bandwidth that the data transfer uses?

Sure. You can use the “lift put --max-throughput” option to limit the throughput used by the data transfer.

Can I avoid entering my database credentials on every command?

Yes. You can set connection credentials as environment variables, or create a properties file that holds your database credentials and common options. Take a look at “lift help <command>” to see the list of options that the Lift CLI supports.

Can the Lift CLI be used with data regulated under HIPAA?

IBM Lift CLI may be used to process Protected Health Information regulated under HIPAA if Client, as the data controller, determines that the technical and organizational security measures are appropriate to the risks presented by the processing and the nature of the data to be protected. IBM Lift CLI is not designed to process data to which additional regulatory requirements apply.

How do I configure the Lift CLI to use an HTTP proxy?

To add an HTTP proxy configuration, create an environment configuration file called lift.environment in the Lift CLI installation bin directory (<Lift CLI install dir>/bin) with the following contents:

For the proxy host, use proxy.host=<hostname>. For the proxy port, use proxy.port=<port number>. Both must be specified for the settings to take effect. If the proxy requires authentication, the Lift CLI uses basic authentication in the CONNECT request by providing proxy.user=<user> and proxy.password=<password>. Both must be specified for the authentication settings to take effect.

Example of <Lift CLI install dir>/bin/lift.environment contents (each property is on a new line):
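For instance (the hostname, port, and credentials are placeholder values; include the user and password lines only when the proxy requires authentication):

```
proxy.host=proxy.example.com
proxy.port=8080
proxy.user=liftuser
proxy.password=changeMe
```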




How do I add a proxy certificate to the Lift CLI trust store?

To add an X509 certificate to be imported into the trust store, add the following to the lift.environment file in the Lift CLI installation bin directory (<Lift CLI install dir>/bin).

For the proxy certificate, use proxy.certificate.path=<fully qualified path to the X509 certificate file>.

The certificate is added with the alias ibm-lift-imported-proxy-cert. If the alias already exists, the file is not imported, and the existing certificate must be removed manually before running the CLI again. You can remove it using the Java keytool against the Lift CLI Java trust store in <Lift CLI install dir>/jre/lib/security/cacerts (for example: keytool -delete -alias ibm-lift-imported-proxy-cert -keystore <Lift CLI install dir>/jre/lib/security/cacerts -storepass changeit).

Can the Lift CLI be installed directly on the PureData System for Analytics appliance?

Yes, the Lift CLI can be installed on the PureData System for Analytics appliance, but additional storage must be attached to give the extracted data sufficient staging disk space.

The following tech notes provide steps to mount SAN/NFS systems on PureData Systems for Analytics. They also include best practices for attaching and configuring additional storage for PureData System for Analytics.

1. Adding SAN Storage to PureData Systems for Analytics:

2. IBM PureData System for Analytics Mounting NFS on the appliance:

3. Mounting NFS filesystem on PureData for Analytics systems:

When you are installing the Lift CLI for PureData System for Analytics sources, install it on your "injection" system (the system that you use to stage data to load into the PureData System for Analytics database). That system will have good connectivity to the appliance and plenty of disk space for staging data. If your injection system is already fully loaded, install the Lift CLI on a system with similar connectivity and ample staging disk space.

We strongly recommend that you install the Lift CLI on a Linux machine. When the Lift CLI runs on Linux, data extraction from PureData System for Analytics sources uses high-speed unload facilities, giving significantly better overall throughput.

We strongly recommend that you install the Lift CLI on a Linux machine that has the Db2 client installed. Installing the Lift CLI alongside the Db2 client significantly improves overall throughput. When the Lift CLI fails to detect the Db2 client, a different extract strategy is used and you may notice reduced extract throughput; in that case, the console prints the message "Lift is extracting data at sub-light speeds. You can improve extraction time by installing and configuring the Db2 client."

Prerequisites for the Lift CLI to use the Db2 client:

1. The db2 (Linux) or db2cmd (Windows) command must be available in PATH.

  • Linux: apply <INSTANCE_OWNER_HOME>/sqllib/db2profile to the environment before running the Lift CLI
  • Windows: the Db2 client must be set as the default instance

2. If the Lift CLI is run remotely from the IBM Db2 for Linux, UNIX and Windows server, the OS user must be included in the SYSADM group of the Db2 client instance.

3. The version of the Db2 client must be the same as or higher than the version of the IBM Db2 for Linux, UNIX and Windows server.
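A quick pre-flight check for prerequisite 1 might look like this sketch (the helper function is hypothetical; the db2profile path is the placeholder from the list above):

```shell
#!/bin/sh
# Check whether the db2 command is on PATH (prerequisite 1 above).
# check_in_path prints "found" or "missing" for a given command name.
check_in_path() {
  if command -v "$1" >/dev/null 2>&1; then echo "found"; else echo "missing"; fi
}

DB2_STATUS=$(check_in_path db2)
echo "db2 client: $DB2_STATUS"
if [ "$DB2_STATUS" = "missing" ]; then
  # On Linux, apply the instance profile to the environment first, e.g.:
  #   . <INSTANCE_OWNER_HOME>/sqllib/db2profile
  echo "apply the db2profile to your environment, then retry"
fi
```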

Does the Lift CLI support extracting hidden columns?

Yes. The Lift CLI supports extracting hidden columns. By default, hidden columns are not included in the extracted CSV file. To include them, explicitly specify all column names, including the hidden ones, using the column selection option. Refer to “lift extract --help” for more information.

The following are prerequisites for the Lift CLI to use the Oracle client:

1. Install the basic and tools modules of the Oracle client.

2. Ensure the exp program path is added to the PATH environment variable.

3. Depending on your environment, you might need to add the Oracle client library path to the operating system library path (for example, LD_LIBRARY_PATH).

4. The Oracle client version must be the same or later than the Oracle server version.

Note: You do not need to pre-configure the source database connection from the Oracle client.
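Prerequisites 2 and 3 amount to environment settings like the following sketch (the client install path is a hypothetical example; adjust it for your system):

```shell
#!/bin/sh
# Hypothetical environment setup for the Oracle client prerequisites above.
# /opt/oracle/client is an assumed install location, not a real default.
ORACLE_CLIENT_HOME="/opt/oracle/client"
# Put the exp program on PATH (prerequisite 2):
PATH="$ORACLE_CLIENT_HOME/bin:$PATH"
# Add the client libraries to the OS library path (prerequisite 3):
LD_LIBRARY_PATH="$ORACLE_CLIENT_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PATH LD_LIBRARY_PATH
echo "Oracle client paths configured"
```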

Does the Lift CLI convert data from Oracle character sets?

Yes. The Lift CLI uses a UTF-8 code page and supports data conversion from commonly used Oracle character sets.

The following is a list of supported Oracle Database Character Sets (NLS_CHARACTERSET):


You can determine the character set of the source Oracle database by using the following SQL query:
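A standard query that reads this setting from Oracle's nls_database_parameters view:

```sql
SELECT value
  FROM nls_database_parameters
 WHERE parameter = 'NLS_CHARACTERSET';
```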


The Lift CLI uses a degraded mode if the source table has any of the following data types:


Each table is evaluated separately; degraded mode affects only the extraction of tables that contain these data types.

Do I need to install any other tools for data migration?

No. The Lift CLI doesn't require any additional tools for data migration.

Get started with IBM Lift

Get started with data migration in minutes.

Download now