Remote storage requirements
Remote storage aliases are supported by a select number of Db2® commands for accessing data on IBM® Cloud Object Storage, Amazon Simple Storage Service (S3), or other object storage providers that use the S3 protocol.
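For example, a remote storage alias for an S3-compatible endpoint might be cataloged with the CATALOG STORAGE ACCESS ALIAS command, as in the following sketch. The endpoint, access key, secret key, and bucket name are placeholders, not working values:

   db2 "CATALOG STORAGE ACCESS ALIAS myalias
        VENDOR S3
        SERVER s3.us-east.cloud-object-storage.appdomain.cloud
        USER '<access-key-id>' PASSWORD '<secret-access-key>'
        CONTAINER 'mybucket'"

After the alias is cataloged, commands that are enabled for remote storage can reference it through a DB2REMOTE:// URL, for example DB2REMOTE://myalias//staging/data.del.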
Supported platforms and prerequisites
- SUSE Linux Enterprise Server
- Linux 64-bit, x86-64: 12.4 and later, 15.1 and later.
- Red Hat Enterprise Linux
- Linux 64-bit, x86-64: 7.9, 8.1 and later, 9.2 and later.
- Ubuntu
- Linux 64-bit, x86-64: 20.04, 22.04.
- libcURL, version 7.29.0 or later.
- libxml2, version 2.9.1 or later.
- Unzip version 6.0 or later. Required for decompression of .zip input files. For more information, see Compressed input data.
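One quick way to verify these prerequisites on an RPM-based distribution (Red Hat Enterprise Linux or SUSE Linux Enterprise Server) is to query the package manager; the package names shown are the common ones and can differ between distributions:

   # check installed versions of the prerequisite libraries
   rpm -q libcurl libxml2 unzip
   # print the unzip version banner
   unzip -v | head -1

On Ubuntu, the equivalent check is dpkg -l libcurl4 libxml2 unzip.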
Supported remote storage providers
The following remote storage providers are supported:
- The IBM Cloud® Object Storage and Amazon S3 providers are supported for all commands that are enabled for remote storage.
- Other object storage providers that can be accessed by using the S3 protocol are supported for all commands that are enabled for remote storage.
- The Microsoft Azure object storage provider is supported only for the CREATE EXTERNAL TABLE statement, including the ability to query the data.
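For example, an external table over a delimited file in a bucket might be defined as in the following sketch, assuming the S3 option takes the endpoint, access key, secret key, and bucket name in that order; all values and column definitions are placeholders:

   db2 "CREATE EXTERNAL TABLE sales (id INT, amount DECIMAL(10,2))
        USING (DATAOBJECT 'sales.csv'
               S3('s3.amazonaws.com', '<access-key-id>', '<secret-access-key>', 'mybucket')
               DELIMITER ',')"
   db2 "SELECT COUNT(*) FROM sales"

The second statement illustrates that the external data can be queried in place.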
Limitations
In all configurations, the following limitations exist for Amazon S3:
- AWS Key Management Services (KMS) are not supported.
- AWS role-based (IAM) or token-based (STS) credentials are not supported.
When the DB2_ENABLE_COS_SDK registry variable is set to OFF, remote storage access uses the legacy libcurl method. This method has the following additional limitations for Amazon S3:
- The Object Lock feature is not supported.
- Encrypted buckets are not supported.
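To avoid these legacy limitations, the registry variable can be set back to ON so that the SDK-based method is used. A minimal sketch, assuming the change takes effect only after an instance restart:

   db2set DB2_ENABLE_COS_SDK=ON
   db2stop
   db2start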
You are encouraged to use secure endpoints in all configurations, because data that is sent to insecure endpoints is not encrypted. Secure endpoints are protected with an SSL certificate chain that can be validated. If this is not feasible, the endpoint can be used in insecure mode.
Local staging path
A local staging path is used to temporarily stage data when:
- Downloading an object from a remote storage server
- Uploading an object from a local file system to a remote storage server
- For BACKUP operations, each backup session to the remote storage has a maximum size of 5 GB, which can produce a total database backup image of up to 5 TB.
- For LOAD COPY operations, each LOAD COPY is restricted to a maximum size of 5 GB.
- A local staging path is not needed for BACKUP and LOAD COPY operations. The maximum size of each session is determined by multiplying the value of the MULTIPARTSIZEMB database manager configuration parameter by the maximum number of parts that are allowed by the cloud object storage provider (see the sketch after this list).
- The local staging path is needed for RESTORE operations. This location is needed to temporarily store the downloaded backup image.
- The local staging path is needed for log archive and retrieve operations. This location is used to temporarily store the log files being uploaded or downloaded.
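As a worked example of the session-size rule: assuming a provider that allows at most 10,000 parts per multipart upload (the Amazon S3 limit), setting MULTIPARTSIZEMB to 100 allows sessions of up to 100 MB x 10,000 = 1,000,000 MB, or roughly 1 TB. The following sketch shows the related configuration; the staging path value is a placeholder, and DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH is assumed here to be the registry variable that overrides the default staging location:

   # raise the multipart part size to 100 MB
   db2 UPDATE DBM CFG USING MULTIPARTSIZEMB 100
   # point the local staging path at a file system with enough free space (assumed variable)
   db2set DB2_OBJECT_STORAGE_LOCAL_STAGING_PATH=/db2/staging
   # back up directly to the remote storage alias
   db2 "BACKUP DATABASE mydb TO DB2REMOTE://myalias//backups"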
Compressed input data
You can load data directly from compressed input files that are stored in the supported remote storage. The following compressed file types are supported:
- *.gz - created by gzip utility
- *.zip - created by zip utility
The compressed file must have the same name as the original file, with the additional .zip or .gz file extension. For example, if a file is named db2load.txt, the compressed file must be named db2load.txt.zip or db2load.txt.gz.
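For example, loading from a gzip-compressed file on remote storage might look like the following sketch; the alias, path, and table name are placeholders:

   db2 "LOAD FROM DB2REMOTE://myalias//data/db2load.txt.gz OF DEL INSERT INTO mytable"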