Verifying the IBM Storage Scale container native cluster

Verify that the IBM Storage Scale container native cluster is deployed correctly.

If any of the following checks fail, see Debugging IBM Storage Scale deployment.

Complete the following steps:

  1. Verify that the Operator has created the cluster by checking the pods.

    kubectl get pods -n ibm-spectrum-scale
    

    A sample output is shown:

    # kubectl get pods -n ibm-spectrum-scale
    NAME                               READY   STATUS    RESTARTS   AGE
    ibm-spectrum-scale-gui-0           4/4     Running   0          5m45s
    ibm-spectrum-scale-gui-1           4/4     Running   0          2m9s
    ibm-spectrum-scale-pmcollector-0   2/2     Running   0          5m15s
    ibm-spectrum-scale-pmcollector-1   2/2     Running   0          4m11s
    worker0                            2/2     Running   0          5m43s
    worker1                            2/2     Running   0          5m43s
    worker3                            2/2     Running   0          5m45s
    

    The following list includes considerations about the IBM Storage Scale cluster creation and its pods:

    • The cluster takes some time to create.
    • One core pod is created on each node that matches the nodeSelector.
    • Core pods can take several minutes to reach Running status (see the wait sketch after this list).
    • GUI pods do not reach Running status until all core pods are Running.
    • Two GUI pods are created; the second is created after the first reaches Running status.
    • Two pmcollector pods are created; the second is created after the first reaches Running status.
    • The resulting cluster should have one core pod per node that matches the nodeSelector, two GUI pods, and two pmcollector pods.
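
    Because the core pods can take several minutes to become ready, you can optionally wait for them from the command line before continuing. The following sketch assumes that the core pods carry the label app.kubernetes.io/name=core (the same label used by the kubectl exec examples later in this procedure); adjust the timeout for your environment.

    # Wait up to 15 minutes for all core pods to report the Ready condition
    kubectl wait pod -lapp.kubernetes.io/name=core --for=condition=Ready \
    --timeout=15m -n ibm-spectrum-scale
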
  2. Verify that the IBM Storage Scale cluster is created correctly:

    a. Enter the mmlscluster command:

       kubectl exec $(kubectl get pods -lapp.kubernetes.io/name=core \
       -ojsonpath="{.items[0].metadata.name}" -n ibm-spectrum-scale)  \
       -c gpfs -n ibm-spectrum-scale -- mmlscluster
    

    The output from the command should show that an IBM Storage Scale cluster has been created and that all nodes specified by the nodeSelector are present.

       GPFS cluster information
       ========================
       GPFS cluster name:         ibm-spectrum-scale.mycluster.example.com
       GPFS cluster id:           835278197609441888
       GPFS UID domain:           ibm-spectrum-scale.mycluster.example.com
       Remote shell command:      /usr/bin/ssh
       Remote file copy command:  /usr/bin/scp
       Repository type:           CCR
    
       Node   Daemon node name                                              IP address    Admin node name                                              Designation
       ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
          1   worker2.daemon.ibm-spectrum-scale.stg.mycluster.example.com.  172.29.0.145  worker2.admin.ibm-spectrum-scale.stg.mycluster.example.com.  quorum-manager-perfmon
          2   worker1.daemon.ibm-spectrum-scale.stg.mycluster.example.com.  172.29.0.146  worker1.admin.ibm-spectrum-scale.stg.mycluster.example.com.  quorum-manager-perfmon
          3   worker3.daemon.ibm-spectrum-scale.stg.mycluster.example.com.  172.29.0.148  worker3.admin.ibm-spectrum-scale.stg.mycluster.example.com.  quorum-manager-perfmon
    

    b. Enter the mmgetstate command:

       kubectl exec $(kubectl get pods -lapp.kubernetes.io/name=core \
       -ojsonpath="{.items[0].metadata.name}" -n ibm-spectrum-scale)  \
       -c gpfs -n ibm-spectrum-scale -- mmgetstate -a
    

    The output from the command should show that the GPFS state of all nodes is listed as active.

       Node number  Node name        GPFS state
       -------------------------------------------
             1      worker0          active
             2      worker1          active
             3      worker3          active
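
    Both commands in this step resolve the name of the first core pod inline. As a convenience, you can capture the pod name in a shell variable once and reuse it for both checks; this sketch assumes a Bash-compatible shell and uses only the commands shown above.

       # Capture the name of the first core pod, then reuse it for both checks
       CORE_POD=$(kubectl get pods -lapp.kubernetes.io/name=core \
       -ojsonpath="{.items[0].metadata.name}" -n ibm-spectrum-scale)
       kubectl exec $CORE_POD -c gpfs -n ibm-spectrum-scale -- mmlscluster
       kubectl exec $CORE_POD -c gpfs -n ibm-spectrum-scale -- mmgetstate -a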
    
  3. Verify that the Remote Cluster authentication is successfully created.

    a. Get a list of the remote clusters.

       kubectl get remotecluster.scale -n ibm-spectrum-scale
    

    b. Inspect the remote clusters and ensure that the value for READY is True.

    Example:

       # kubectl get remotecluster.scale -n ibm-spectrum-scale
       NAME                   READY   AGE
       remotecluster-sample   True    30h
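
     If READY is not True, inspect the resource for details. This sketch uses standard kubectl options and the sample resource name remotecluster-sample from the output above.

       # Show the status conditions and recent events for the remote cluster resource
       kubectl describe remotecluster.scale remotecluster-sample -n ibm-spectrum-scale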
    
  4. Verify that the storage cluster file system is configured:

    a. Get a list of the file systems:

       kubectl get filesystem.scale -n ibm-spectrum-scale
    

    b. Inspect the file systems and ensure that the value for ESTABLISHED is True.

       $ kubectl get filesystem.scale -n ibm-spectrum-scale
       NAME            ESTABLISHED   AGE
       remote-sample   True          30h
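
     If ESTABLISHED is not True, the same approach applies. This sketch reuses the sample file system name remote-sample from the output above.

       # Review the status conditions and recent events for the file system resource
       kubectl describe filesystem.scale remote-sample -n ibm-spectrum-scale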
    
  5. Manually verify that the file system is mounted by using the mmlsmount command.

    kubectl exec $(kubectl get pods -lapp.kubernetes.io/name=core \
    -ojsonpath="{.items[0].metadata.name}" -n ibm-spectrum-scale)  \
    -c gpfs -n ibm-spectrum-scale -- mmlsmount remote-sample -L
    

    Example output:

    File system remote-sample (gpfs1.local:fs1) is mounted on ...
    ...
    172.29.0.148    worker3.daemon.ibm-spectrum-scale.stg.mycluster.example.com. ibm-spectrum-scale.mycluster.example.com
    172.29.0.146    worker1.daemon.ibm-spectrum-scale.stg.mycluster.example.com. ibm-spectrum-scale.mycluster.example.com
    172.29.0.145    worker2.daemon.ibm-spectrum-scale.stg.mycluster.example.com. ibm-spectrum-scale.mycluster.example.com
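
    To check the mount status of every file system known to the cluster rather than a single one, you can pass the all keyword, which mmlsmount accepts in place of a file system name.

    kubectl exec $(kubectl get pods -lapp.kubernetes.io/name=core \
    -ojsonpath="{.items[0].metadata.name}" -n ibm-spectrum-scale)  \
    -c gpfs -n ibm-spectrum-scale -- mmlsmount all -L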
    
  6. Verify that there are no problems reported in the operator status and events. For more information, see Status and events.
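
    A quick way to scan for recent problems is to list the events in the ibm-spectrum-scale namespace, sorted by time. This generic kubectl sketch complements, but does not replace, the operator status checks described in Status and events.

    # List recent events in the namespace, newest last
    kubectl get events -n ibm-spectrum-scale --sort-by=.lastTimestamp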

  7. Verify that the CSI pods are up and running.

    kubectl get pods -n ibm-spectrum-scale-csi
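
    As with the core pods, you can optionally block until the CSI pods are ready. This sketch assumes that every pod in the ibm-spectrum-scale-csi namespace belongs to the CSI driver and uses a generous timeout.

    # Wait up to 10 minutes for every pod in the CSI namespace to become Ready
    kubectl wait pod --all --for=condition=Ready --timeout=10m -n ibm-spectrum-scale-csi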
    
  8. Verify that the Core DNS pods are up and running. There will be at least one Core DNS pod per core pod.

    kubectl get pods -n ibm-spectrum-scale-dns
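
    Because there should be at least one Core DNS pod per core pod, a quick sanity check is to compare the pod counts in the two namespaces. This is a plain shell sketch that uses only the labels and namespaces shown earlier in this procedure.

    # Count the Core DNS pods, then count the core pods; the first number
    # should be at least as large as the second
    kubectl get pods -n ibm-spectrum-scale-dns --no-headers | wc -l
    kubectl get pods -n ibm-spectrum-scale -lapp.kubernetes.io/name=core --no-headers | wc -l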