Configuring local storage class for the caching tier

Worker nodes on OpenShift Container Platform should have local block devices, such as SSDs or NVMe drives, attached before you begin the setup.

Before you begin

The multi-tier storage architecture introduced for Native Cloud Object Storage (NCOS) support requires the configuration of a caching tier. The caching tier must be provisioned on a locally attached drive, ideally an NVMe drive, to overcome the different I/O characteristics of cloud object storage.

About this task

The local storage for the caching tier is usually configured with a set of fast NVMe drives locally attached to each of the nodes that are used during the deployment of Db2® Warehouse 11.5.9.

These locally attached drives should be associated only with the compute node they are attached to, so that they deliver the highest performance for fast persistence and data caching.
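Before configuring the storage class, you can confirm that each worker node actually exposes a local non-rotational device. The following sketch assumes a node named worker-0; substitute your own node names. ROTA=0 in the lsblk output indicates an SSD or NVMe device:

    oc debug node/worker-0 -- chroot /host lsblk -d -o NAME,TYPE,ROTA,SIZE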

Procedure

  1. Add the privileged SCC of the OpenShift cluster to the OpenEBS service account:
    oc adm policy add-scc-to-user privileged system:serviceaccount:openebs:openebs-maya-operator
  2. Deploy the OpenEBS lite operator using the following commands:
    oc apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml 
    oc apply -f https://openebs.github.io/charts/openebs-lite-sc.yaml
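    Before continuing, you can verify that the operator pods are running and that the OpenEBS storage classes were created:

    oc get pods -n openebs
    oc get storageclass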
  3. Deploy the storage class using the following spec, which needs to use the NVMe node group:
    cat << EOF | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-device
      annotations:
        openebs.io/cas-type: local
        cas.openebs.io/config: |
          - name: StorageType
            value: device
          - name: FSType
            value: ext4
          - name: BlockDeviceSelectors
            data:
               ndm.io/blockdevice-type: "blockdevice"
    provisioner: openebs.io/local
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    EOF
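To verify the new storage class, you can create a test PersistentVolumeClaim against it. The claim name and requested size below are illustrative only. Because the storage class uses volumeBindingMode: WaitForFirstConsumer, the claim remains in Pending status until a pod that mounts it is scheduled; this is expected behavior, not an error:

    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: local-device-test
    spec:
      storageClassName: local-device
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    EOF
    oc get pvc local-device-test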