Sample configuration for running a stateful container

You can use IBM Storage Enabler for Containers for running stateful containers with a storage volume provisioned from an external IBM block storage system.

About this task

This example illustrates the basic configuration required for running a stateful container with a volume provisioned on a Spectrum Connect storage service. The example includes:
  • Creating a storage class gold that is linked to Spectrum Connect storage service gold with XFS file system.
  • Creating a PersistentVolumeClaim (PVC) pvc1 that uses the storage class gold.
  • Creating a pod pod1 with container container1 that uses PVC pvc1.
  • Writing data to the persistent volume (file /data/FILE) from container1 of pod1.
  • Deleting pod1 and then creating a new pod1 with the same PVC, and verifying that the file /data/FILE still exists.
  • Deleting all storage elements (pod, PVC, persistent volume, and storage class).

Procedure

  1. Open a command-line terminal.
  2. Create a storage class, as shown below. The storage class gold is linked to a Spectrum Connect storage service on a pool from IBM FlashSystem A9000R with QoS capability and XFS file system. As a result, any volume with this storage class will be provisioned on the gold service and initialized with XFS file system.
    $> cat storage_class_gold.yml
    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: "gold"                 # Storage Class name
      annotations:
       storageclass.beta.kubernetes.io/is-default-class: "true" 
    provisioner: "ubiquity/flex"   
    parameters:
      profile: "gold"              
      fstype: "xfs"                
      backend: "scbe"              
    
    $> kubectl create -f storage_class_gold.yml
    storageclass "gold" created
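For comparison, a storage class for a different Spectrum Connect service can be defined in the same way. The snippet below is a hypothetical variant (the service name silver and the EXT4 file system are assumptions for illustration, not part of this example):

```yaml
# Hypothetical variant: a non-default storage class bound to an assumed
# Spectrum Connect storage service named "silver", using an EXT4 file system.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: "silver"
provisioner: "ubiquity/flex"
parameters:
  profile: "silver"   # Spectrum Connect storage service name (assumed)
  fstype: "ext4"      # alternative to xfs
  backend: "scbe"
```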
  3. Display the newly created storage class to verify its successful creation.
    $> kubectl get storageclass gold
    NAME             TYPE
    gold (default)   ubiquity/flex
  4. Create a PVC pvc1 with a size of 1 GiB that uses the storage class gold.
    $> cat pvc1.yml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: "pvc1"    
    spec:
      storageClassName: gold
      accessModes:
        - ReadWriteOnce 
      resources:
        requests:
          storage: 1Gi  
    
    $> kubectl create -f pvc1.yml
    persistentvolumeclaim "pvc1" created
    The IBM Storage Enabler for Containers creates a persistent volume (PV) and binds it to the PVC. The PV name is the PVC ID. The volume name on the storage system is u_[ubiquity-instance]_[PVC-ID], where [ubiquity-instance] is set in the IBM Storage Enabler for Containers configuration file.
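As a sketch of this naming scheme, the storage-side volume name can be composed from the two values. The instance name PROD below is taken from the example output in step 6 of this procedure:

```shell
# Compose the expected storage-side volume name from the ubiquity-instance
# value and the PVC ID, following the u_[ubiquity-instance]_[PVC-ID] scheme.
UBIQUITY_INSTANCE="PROD"
PVC_ID="pvc-254e4b5e-805d-11e7-a42b-005056a46c49"
VOLUME_NAME="u_${UBIQUITY_INSTANCE}_${PVC_ID}"
echo "${VOLUME_NAME}"
# Prints: u_PROD_pvc-254e4b5e-805d-11e7-a42b-005056a46c49
```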
  5. Display the existing PVC and persistent volume.
    $> kubectl get pvc
    NAME   STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    pvc1   Bound     pvc-254e4b5e-805d-11e7-a42b-005056a46c49   1Gi        RWO           1m
    
    $> kubectl get pv
    NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM        REASON   AGE 
    pvc-254e4b5e-805d-11e7-a42b-005056a46c49   1Gi        RWO           Delete          Bound     default/pvc1
  6. Display additional persistent volume information, such as its WWN and its location on the storage system.
    $> kubectl get -o json pv pvc-254e4b5e-805d-11e7-a42b-005056a46c49 | grep -A15 flexVolume
            "flexVolume": {
                "driver": "ibm/ubiquity",
                "options": {
                    "LogicalCapacity": "1000000000",
                    "Name": "u_PROD_pvc-254e4b5e-805d-11e7-a42b-005056a46c49",
                    "PhysicalCapacity": "1023410176",
                    "PoolName": "gold-pool",
                    "Profile": "gold",
                    "StorageName": "A9000 system1",
                    "StorageType": "2810XIV",
                    "UsedCapacity": "0",
                    "Wwn": "36001738CFC9035EB0CCCCC5",
                    "fstype": "xfs",
                    "volumeName": "pvc-254e4b5e-805d-11e7-a42b-005056a46c49"
                }
            },
  7. Create a pod pod1 with a container container1 that uses PVC pvc1, mounted as volume vol1.
    $> cat pod1.yml
    kind: Pod
    apiVersion: v1
    metadata:
      name: pod1          
    spec:
      containers:
      - name: container1 
        image: alpine:latest
        command: [ "/bin/sh", "-c", "--" ]  
        args: [ "while true; do sleep 30; done;" ]
        volumeMounts:
          - name: vol1
            mountPath: "/data" 
      restartPolicy: "Never"
      volumes:
        - name: vol1
          persistentVolumeClaim:
            claimName: pvc1
    
    $> kubectl create -f pod1.yml
    pod "pod1" created
    As a result, the IBM Storage Kubernetes FlexVolume performs the following:
    • Attaches the volume to the host.
    • Rescans and discovers the multipath device of the new volume.
    • Creates an XFS or EXT4 file system on the device (if a file system does not already exist on the volume).
    • Mounts the new multipath device on /ubiquity/[WWN of the volume].
    • Creates a symbolic link from /var/lib/kubelet/pods/[pod ID]/volumes/ibm~ubiquity-k8s-flex/[PVC ID] to /ubiquity/[WWN of the volume].
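For illustration only, the driver's actions correspond roughly to the following manual commands (this is an assumption-based sketch; device names and the WWN are taken from this example, and the driver's actual implementation may differ):

```shell
# Rough manual equivalent of the FlexVolume attach/mount sequence (sketch).
$> rescan-scsi-bus.sh                            # discover the newly mapped volume
$> multipath -ll                                 # identify the new multipath device
$> mkfs.xfs /dev/mapper/mpathi                   # only if no file system exists yet
$> mkdir -p /ubiquity/6001738CFC9035EB0CCCCC5
$> mount /dev/mapper/mpathi /ubiquity/6001738CFC9035EB0CCCCC5
$> ln -s /ubiquity/6001738CFC9035EB0CCCCC5 \
     /var/lib/kubelet/pods/[pod ID]/volumes/ibm~ubiquity-k8s-flex/pvc-254e4b5e-805d-11e7-a42b-005056a46c49
```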
  8. Display the newly created pod1 and write data to its persistent volume. Make sure that the pod status is Running.
    $> kubectl get pod pod1
    NAME      READY     STATUS    RESTARTS   AGE
    pod1      1/1       Running   0          16m
    
    
    $> kubectl exec pod1 -c container1 -- sh -c "df -h /data"
    Filesystem          Size  Used Avail Use% Mounted on
    /dev/mapper/mpathi  951M   33M  919M   4% /data
    
    $> kubectl exec pod1 -c container1 -- sh -c "mount | grep /data"
    /dev/mapper/mpathi on /data type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    
    $> kubectl exec pod1 -- touch /data/FILE
    $> kubectl exec pod1 -- ls /data/FILE
    /data/FILE
    
    $> kubectl describe pod pod1| grep "^Node:" 
    Node:		k8s-node1/hostname
  9. Log in to the worker node that has the running pod and display the newly attached volume on the node.
    $> multipath -ll
    mpathi (36001738cfc9035eb0ccccc5) dm-12 IBM     ,2810XIV
    size=954M features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=1 status=active
      |- 3:0:0:1 sdb 8:16 active ready running
      `- 4:0:0:1 sdc 8:32 active ready running
    
    $> df | egrep "ubiquity|^Filesystem"
    Filesystem                       1K-blocks    Used Available Use% Mounted on
    /dev/mapper/mpathi                  973148   32928    940220   4% /ubiquity/6001738CFC9035EB0CCCCC5
    
    $> mount |grep ubiquity
    /dev/mapper/mpathi on /ubiquity/6001738CFC9035EB0CCCCC5 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    
    $> ls -l /var/lib/kubelet/pods/*/volumes/ibm~ubiquity-k8s-flex/*
    lrwxrwxrwx. 1 root root 42 Aug 13 22:41 pvc-254e4b5e-805d-11e7-a42b-005056a46c49 -> /ubiquity/6001738CFC9035EB0CCCCC5
  10. Delete the pod.
    $> kubectl delete pod pod1
    pod "pod1" deleted
    As a result, the IBM Storage Kubernetes FlexVolume performs the following:
    • Removes the symbolic link from /var/lib/kubelet/pods/[pod ID]/volumes/ibm~ubiquity-k8s-flex/[PVC ID] to /ubiquity/[WWN of the volume].
    • Unmounts the multipath device from /ubiquity/[WWN of the volume].
    • Removes the multipath device of the volume.
    • Detaches (unmaps) the volume from the host.
    • Rescans in cleanup mode to remove the physical device files of the detached volume.
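The persistence check described in About this task can be sketched as follows; run it before removing the PVC in the next step. Because the data resides on the external storage volume, the file written in step 8 survives the pod deletion:

```shell
# Recreate pod1 with the same PVC and verify that /data/FILE (written in
# step 8) is still present, then delete the pod again before continuing.
$> kubectl create -f pod1.yml
$> kubectl exec pod1 -c container1 -- ls /data/FILE
/data/FILE
$> kubectl delete pod pod1
```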
  11. Remove the PVC and its PV (volume on the storage system).
    $> kubectl delete -f pvc1.yml
    persistentvolumeclaim "pvc1" deleted
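As an optional check (assuming no other PVCs or PVs exist in this namespace), neither pvc1 nor its bound PV should appear in the output of the following commands after the deletion:

```shell
# Confirm that the PVC and its persistent volume have been removed.
$> kubectl get pvc
$> kubectl get pv
```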
  12. Remove the storage class. This command removes the Kubernetes storage class only; the Spectrum Connect storage service remains intact.
    $> kubectl delete -f storage_class_gold.yml
    storageclass "gold" deleted