Specifying resources for persistent storage for a Db2 Big SQL instance

After you specify the resources to allocate to the head and worker nodes, specify the resources to use for persistent storage.

About this task

The storage requirements are for storing Db2® Big SQL metadata, such as the Db2 Big SQL catalog.

NFS is the recommended storage. On most storage environments, Db2 Big SQL uses block storage; the supported storage classes for each environment are listed in the following table.

Storage                                   Notes                                         Storage classes
OpenShift® Data Foundation                                                              Block storage: ocs-storagecluster-ceph-rbd
IBM® Storage Fusion Data Foundation                                                     Block storage: ocs-storagecluster-ceph-rbd
IBM Storage Fusion Global Data Platform                                                 Block storage, either of the following
                                                                                        storage classes:
                                                                                        • ibm-spectrum-scale-sc
                                                                                        • ibm-storage-fusion-cp-sc
IBM Storage Scale Container Native                                                      Block storage: ibm-spectrum-scale-sc
Portworx                                                                                Block storage: portworx-db2-rwx-sc
NFS                                                                                     Block storage: managed-nfs-storage
Amazon Elastic storage                    File storage is provided by Amazon Elastic    File storage: efs-nfs-client
                                          File System. Block storage, provided by
                                          Amazon Elastic Block Store, is not
                                          supported.
NetApp Trident                                                                          Block storage: ontap-nas
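
If you provision instances with scripts, it can help to check a chosen storage class against the supported classes before you create the instance. The following Python sketch simply restates the preceding table; the SUPPORTED_STORAGE_CLASSES mapping and the validate_storage_class function are illustrative helpers, not part of Db2 Big SQL.

```python
# Supported storage classes per storage environment, restated from the table above.
SUPPORTED_STORAGE_CLASSES = {
    "OpenShift Data Foundation": ["ocs-storagecluster-ceph-rbd"],
    "IBM Storage Fusion Data Foundation": ["ocs-storagecluster-ceph-rbd"],
    "IBM Storage Fusion Global Data Platform": ["ibm-spectrum-scale-sc",
                                                "ibm-storage-fusion-cp-sc"],
    "IBM Storage Scale Container Native": ["ibm-spectrum-scale-sc"],
    "Portworx": ["portworx-db2-rwx-sc"],
    "NFS": ["managed-nfs-storage"],
    "Amazon Elastic storage": ["efs-nfs-client"],  # file storage; EBS block storage is not supported
    "NetApp Trident": ["ontap-nas"],
}

def validate_storage_class(environment: str, storage_class: str) -> None:
    """Raise ValueError if the storage class is not supported for the environment."""
    supported = SUPPORTED_STORAGE_CLASSES.get(environment)
    if supported is None:
        raise ValueError(f"Unknown storage environment: {environment}")
    if storage_class not in supported:
        raise ValueError(f"{storage_class} is not supported on {environment}; "
                         f"use one of: {', '.join(supported)}")

validate_storage_class("Portworx", "portworx-db2-rwx-sc")  # passes silently
```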

The amount of storage that you specify for the head node is also applied to the worker nodes. In addition, Db2 Big SQL allocates 10 GB of storage, using the same storage class, for its components, and 30 GB per pod for audit log storage.
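
For capacity planning, you can estimate the total persistent storage that an instance requests from these rules. The following is a minimal sketch of the arithmetic; the pod count used for the 30 GB audit log allocation is an assumption that you should verify against your own deployment.

```python
def estimate_total_storage_gb(node_storage_gb: int, num_workers: int, num_pods: int) -> int:
    """Estimate the persistent storage requested by a Db2 Big SQL instance.

    node_storage_gb: the amount specified for the head node; the same amount
                     is also applied to each worker node.
    num_pods:        pods that each receive 30 GB of audit log storage
                     (assumed count; check your deployment).
    """
    head_and_workers = node_storage_gb * (1 + num_workers)
    components = 10              # fixed allocation for Db2 Big SQL components
    audit_logs = 30 * num_pods   # 30 GB per pod for audit logs
    return head_and_workers + components + audit_logs

# Example: 100 GB per node, 2 worker nodes, 3 pods:
# 100 * 3 + 10 + 30 * 3 = 400 GB
print(estimate_total_storage_gb(100, 2, 3))
```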

Procedure

Under Node storage, specify the storage class that you want to use for the head and worker nodes, and the amount of storage to allocate from that class.
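
To see which storage classes your cluster offers before you fill in the form, you can query the Kubernetes API. A minimal sketch using the kubernetes Python client; it assumes a kubeconfig with access to the cluster and only lists the classes, equivalent to running oc get storageclass.

```python
from kubernetes import client, config

config.load_kube_config()          # uses your current kubeconfig context
storage_api = client.StorageV1Api()

# List every StorageClass in the cluster with its provisioner, so that you can
# match the names against the supported classes in the table above.
for sc in storage_api.list_storage_class().items:
    print(f"{sc.metadata.name}: provisioner={sc.provisioner}")
```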

What to do next

Setting up a connection from Db2 Big SQL to a remote data source