Specifying resources for persistent storage for a Db2 Big SQL instance

After you specify the resources that you want to allocate to the head and worker nodes, specify the resources that you want to use for persistent storage.

About this task

Persistent storage is used to store Db2® Big SQL metadata, such as the Db2 Big SQL catalog.

Db2 Big SQL supports different storage types. For details, see Storage requirements. NFS is the recommended storage type.

Db2 Big SQL uses block storage in most storage environments.

The following storage classes are recommended for Db2 Big SQL. When you provision the Db2 Big SQL instance, you must specify the storage class that you want to use and the amount of storage to allocate.

Storage                               Storage classes
OpenShift® Data Foundation            Block storage: ocs-storagecluster-ceph-rbd
IBM® Storage Fusion                   Block storage: ibm-spectrum-scale-sc
IBM Storage Scale Container Native    Block storage: ibm-spectrum-scale-sc
Portworx                              Block storage: portworx-db2-rwx-sc
NFS                                   Block storage: managed-nfs-storage
Amazon Elastic storage                File storage: efs-nfs-client
                                      Note: File storage is provided by Amazon Elastic File System.
IBM Cloud storage                     File storage: ibmc-file-gold-gid or ibm-file-custom-gold-gid
                                      Note: File storage is provided by IBM Cloud File Storage. Block storage, provided by IBM Cloud Block Storage, is not supported.
NetApp Trident                        Block storage: ontap-nas
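
If you want to confirm which of these storage classes exist on your cluster before you provision the instance, you can run the oc get storageclass command, or use a short script like the following. This is a minimal sketch, not part of the provisioning procedure: it assumes that the official Kubernetes Python client is installed and that your kubeconfig has access to the cluster.

```python
# Minimal sketch (not part of the product documentation): list the storage
# classes on the cluster and check which of the recommended classes from the
# table above are available. Assumes the official kubernetes Python client
# and a kubeconfig with access to the cluster.
from kubernetes import client, config

RECOMMENDED_CLASSES = {
    "ocs-storagecluster-ceph-rbd",   # OpenShift Data Foundation
    "ibm-spectrum-scale-sc",         # IBM Storage Fusion / Storage Scale Container Native
    "portworx-db2-rwx-sc",           # Portworx
    "managed-nfs-storage",           # NFS
    "efs-nfs-client",                # Amazon Elastic File System
    "ibmc-file-gold-gid",            # IBM Cloud File Storage
    "ibm-file-custom-gold-gid",      # IBM Cloud File Storage (custom)
    "ontap-nas",                     # NetApp Trident
}

def available_recommended_classes():
    """Return the recommended storage classes that exist on the cluster."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    names = {sc.metadata.name for sc in client.StorageV1Api().list_storage_class().items}
    return names & RECOMMENDED_CLASSES

if __name__ == "__main__":
    found = available_recommended_classes()
    print("Recommended storage classes found:", sorted(found) or "none")
```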

The amount of storage that you specify for the head node is also applied to the worker nodes. In addition, Db2 Big SQL allocates 10 GB of storage, using the same storage class, for its components.
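
For example (illustrative figures only), if you specify 100 GB for the head node and the instance has two worker nodes, the instance requests roughly 100 GB + (2 x 100 GB) + 10 GB = 310 GB in total. The following sketch captures that arithmetic; the sizes and worker count are hypothetical, and it assumes that each worker node receives the same allocation as the head node, as described above.

```python
# Illustrative estimate only: assumes each worker node gets the same persistent
# storage allocation as the head node, plus a fixed 10 GB for Db2 Big SQL
# components (all from the same storage class).
def estimated_total_storage_gb(head_storage_gb: int, worker_count: int) -> int:
    component_storage_gb = 10
    return head_storage_gb * (1 + worker_count) + component_storage_gb

# Hypothetical example: 100 GB specified for the head node, 2 worker nodes.
print(estimated_total_storage_gb(100, 2))  # 310 GB
```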

Procedure

Under Node storage, specify the storage class that you want to use for the head and worker nodes, and the amount of storage to allocate from the storage class.
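
After the instance is provisioned, you can confirm the storage that was allocated by listing the persistent volume claims in the project where Db2 Big SQL is deployed, for example with oc get pvc, or with a short script such as the following. This is a minimal sketch that assumes the Kubernetes Python client; the namespace name is a hypothetical placeholder.

```python
# Minimal sketch: list persistent volume claims in the instance's namespace to
# confirm the storage class and requested size. The namespace below is a
# hypothetical placeholder; use the project where Db2 Big SQL was provisioned.
from kubernetes import client, config

def list_pvc_allocations(namespace: str) -> None:
    config.load_kube_config()
    pvcs = client.CoreV1Api().list_namespaced_persistent_volume_claim(namespace)
    for pvc in pvcs.items:
        size = pvc.spec.resources.requests.get("storage", "unknown")
        print(f"{pvc.metadata.name}: class={pvc.spec.storage_class_name}, requested={size}")

list_pvc_allocations("db2-bigsql-project")  # hypothetical namespace name
```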

What to do next

Setting up a connection from Db2 Big SQL to a remote data source