AWS S3, MinIO, and IBM Cloud Object Storage connectors
Starting from version 18.104.22.168, you can use a cloud connector (S3) to back up databases to, and restore them from, cloud storage.
Starting from version 22.214.171.124, S3 Glacier and S3 Glacier Deep Archive are supported.
Starting from version 126.96.36.199, MinIO Object Storage is supported.
Starting from version 188.8.131.52, S3 Compatible Object Storage is supported.
The following cloud object storage types are supported:
- AWS S3 (Amazon Web Service - Simple Storage Service)
- IBM Cloud Object Storage
- MinIO Object Storage
- S3 Compatible Object Storage
The S3 connector does not require any client software to be installed.
- For the AWS S3 cloud storage, use the value s3 or aws with this argument. For example, -connector s3.
- For IBM Cloud Object Storage, use the value cos or ibmcos with this argument. For example, -connector cos.
- For MinIO, use the value minio with this argument. For example, -connector minio.
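The accepted connector values above amount to a simple alias-to-provider lookup. A minimal sketch in Python (the mapping follows the list above; the helper function itself is illustrative and not part of any NPS tooling):

```python
# Map each accepted -connector value to the provider it selects.
# Aliases are taken from the documentation above; this helper is illustrative.
CONNECTOR_ALIASES = {
    "s3": "AWS S3",
    "aws": "AWS S3",
    "cos": "IBM Cloud Object Storage",
    "ibmcos": "IBM Cloud Object Storage",
    "minio": "MinIO Object Storage",
}

def resolve_connector(value: str) -> str:
    """Return the cloud storage provider for a -connector value."""
    try:
        return CONNECTOR_ALIASES[value.lower()]
    except KeyError:
        raise ValueError(f"unsupported connector: {value!r}")
```

For example, `resolve_connector("s3")` and `resolve_connector("aws")` both select AWS S3.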
- For connecting to cloud storage, provide the following parameters as <key>=<value> pairs:
| Parameter | Mandatory/Optional | Description |
| --- | --- | --- |
| UNIQUE_ID | Mandatory | Namespace that customers use to group data in the cloud bucket. |
| ACCESS_KEY_ID | Mandatory | Access key that is generated on AWS, MinIO, or IBM Cloud Object Storage. |
| SECRET_ACCESS_KEY | Mandatory | Secret access key that is generated on AWS, MinIO, or IBM Cloud Object Storage. |
| DEFAULT_REGION | Mandatory | Region of the bucket. |
| BUCKET_URL | Mandatory | Name of the bucket. |
|  | Optional for AWS; mandatory for MinIO and IBM Cloud Object Storage | Well-known region URL to access your bucket. |
| STORAGE_CLASS | Optional | Storage class for backup. Valid values: STANDARD, GLACIER, DEEP_ARCHIVE. Default: STANDARD. Applicable to backup. |
| TIER | Optional | Restoration speed from GLACIER/DEEP_ARCHIVE to S3. Valid values: Standard, Bulk, Expedited. Default: Standard. Applicable to restore. See https://docs.aws.amazon.com/AmazonS3/latest/dev/restoring-objects.html. |
|  | Optional | Retention period (in number of days) for data restored from Glacier to S3. Default set to |
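One way to catch a missing mandatory parameter before a backup starts is to validate the supplied key=value pairs up front. A sketch assuming the pairs have already been parsed into a dict (the function name is illustrative; the mandatory set is taken from the table above):

```python
# Mandatory connector parameters, per the table above.
MANDATORY = {
    "UNIQUE_ID",
    "ACCESS_KEY_ID",
    "SECRET_ACCESS_KEY",
    "DEFAULT_REGION",
    "BUCKET_URL",
}

def missing_params(params: dict) -> set:
    """Return the mandatory parameter names absent from params."""
    return MANDATORY - params.keys()
```

An empty result means all mandatory parameters are present; anything returned should be supplied before invoking the backup.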
AWS provides a facility to upload objects in multiple parts. The nzbackup command uses this function to split the data and upload it in parts of the size that is specified by the MULTIPART_SIZE_MB parameter. This parameter is configurable because the part size can affect cost and performance. For more information, see https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html#mpuploadpricing.
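The effect of MULTIPART_SIZE_MB on the split can be illustrated by computing how many parts a file of a given size produces. This is plain arithmetic, not the nzbackup implementation:

```python
import math

def part_count(file_size_bytes: int, multipart_size_mb: int) -> int:
    """Number of upload parts when a file is split into multipart_size_mb chunks."""
    part_size = multipart_size_mb * 1024 * 1024
    return math.ceil(file_size_bytes / part_size)

# A 1 TB file with the default 105 MB part size stays within the
# 10000-part limit that AWS imposes on multipart uploads.
one_tb = 1024 ** 4
assert part_count(one_tb, 105) <= 10000
```

A smaller part size means more parts (and more part-upload requests), which is why the parameter can affect cost.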
AWS mandates a limit of 10000 parts per object, which means a backup file can be uploaded in at most 10000 parts. The default registry setting (bnrFileSizeLimitGB) in NPS limits the backup file size to 1 TB. Hence, the default value of MULTIPART_SIZE_MB is set to 105 so that a 1 TB file can be uploaded in 10000 parts. If you changed the bnrFileSizeLimitGB setting in the registry, you must set MULTIPART_SIZE_MB to a higher value.
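Conversely, given a backup file size limit, the smallest MULTIPART_SIZE_MB that keeps the upload within 10000 parts can be derived directly. A sketch of that calculation (the setting names come from the text above; the function is illustrative):

```python
import math

def min_multipart_size_mb(file_size_limit_gb: int, max_parts: int = 10000) -> int:
    """Smallest whole-MB part size so a file at the size limit fits in max_parts."""
    limit_mb = file_size_limit_gb * 1024
    return math.ceil(limit_mb / max_parts)

# The default 1 TB (1024 GB) bnrFileSizeLimitGB yields the documented default of 105.
print(min_multipart_size_mb(1024))  # → 105
```

So if bnrFileSizeLimitGB were doubled to 2048, MULTIPART_SIZE_MB would need to be at least 210.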