Configuring the Ceph® storage class

For Telco Network Cloud Manager - Performance, you can use a preexisting storage class that is installed along with OpenShift® Container Platform or create your own. During the installation of Telco Network Cloud Manager - Performance, you must specify the storage classes for components that require persistence.

About this task

Telco Network Cloud Manager - Performance is tested with the Ceph storage class. Use this information to set up the Ceph storage class by using an Ansible playbook. For more information, see https://github.com/IBM/community-automation/tree/master/ansible/csi-cephfs-fyre-play.

Procedure

  1. Run the following command to see the available storage classes:
    oc get sc

    Alternatively, you can click Storage > StorageClasses in the left navigation of the Red Hat OpenShift web console to see which storage classes are available in your cluster.

  2. Generate an SSH key for IBM® GitHub so that you can clone the repositories.
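If you do not already have a key, the following is a minimal sketch of generating one; the key file path and comment are illustrative. Add the printed public key to your GitHub account settings (SSH and GPG keys).

```shell
# Generate an ed25519 key pair without a passphrase (sketch; adjust
# the output path and comment for your environment)
ssh-keygen -t ed25519 -C "you@example.com" -f ./id_ed25519_github -N ""

# Print the public key so that you can copy it into your GitHub account
cat ./id_ed25519_github.pub
```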
  3. Install Git.
    yum install git
    
  4. Install epel-release and ansible.
    dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
    dnf install ansible -y
    
    If you are using RHEL 9 or later, install the epel-release package for version 9 by using the following commands:
    dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm -y
    dnf install ansible -y
    
  5. Clone the repositories into a folder, for example, a folder that is named ceph.
    git clone https://github.com/IBM/community-automation.git
    git clone https://github.com/rook/rook.git
    
    The community-automation/ansible/csi-cephfs-fyre-play directory contains the following content:
    ansible.cfg  csi-cephfs.yml  examples  Jenkinsfile  README.md  roles

Set up Ceph.

  1. Copy the examples/inventory file to the csi-cephfs-fyre-play directory.
    cd community-automation/ansible/csi-cephfs-fyre-play/
    cp examples/inventory .
    The directory now contains the following content:
    ansible.cfg  csi-cephfs.yml  examples  inventory  Jenkinsfile  README.md  roles
  2. Update the inventory file with the hostname or IP address and the root password of the infra node.
    cat inventory
    <myserver.ibm.com>  ansible_connection=ssh  ansible_ssh_user=root  ansible_ssh_pass=<password>  ansible_ssh_common_args='-o StrictHostKeyChecking=no'
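For example, a completed single-line inventory entry might look like the following; the hostname and password are illustrative:

```
infra.example-cluster.fyre.ibm.com  ansible_connection=ssh  ansible_ssh_user=root  ansible_ssh_pass=MyPassw0rd  ansible_ssh_common_args='-o StrictHostKeyChecking=no'
```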
  3. Install Ceph by running the Ansible playbook.
    ansible-playbook -i inventory csi-cephfs.yml
  4. Verify that the Ceph storage classes are installed.
    oc get sc
    NAME                   PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    csi-cephfs (default)   rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   6m10s
    rook-ceph-block        rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   6m10s
    rook-cephfs            rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   6m11s
    
    Note: The csi-cephfs storage class is used by Telco Network Cloud Manager - Performance, and the rook-cephfs storage class is used by Watson AIOps.
    Check that the pods in the rook-ceph namespace are running:
    oc get pods -n rook-ceph
    NAME                                                              READY   STATUS      RESTARTS   AGE
    csi-cephfsplugin-5fdxp                                            3/3     Running     0          5m33s
    csi-cephfsplugin-bk7x5                                            3/3     Running     0          5m33s
    csi-cephfsplugin-provisioner-5c65b94c8d-p7hgf                     6/6     Running     0          5m32s
    csi-cephfsplugin-provisioner-5c65b94c8d-zrs7r                     6/6     Running     0          5m32s
    csi-cephfsplugin-qsttw                                            3/3     Running     0          5m33s
    csi-rbdplugin-97ftx                                               3/3     Running     0          5m34s
    csi-rbdplugin-fmhqg                                               3/3     Running     0          5m34s
    csi-rbdplugin-provisioner-569c75558-594jd                         6/6     Running     0          5m33s
    csi-rbdplugin-provisioner-569c75558-bcbrb                         6/6     Running     0          5m33s
    csi-rbdplugin-v7tng                                               3/3     Running     0          5m34s
    rook-ceph-crashcollector-worker0.tncpqacluster2.cp.fyre.ibm75jv   1/1     Running     0          4m12s
    rook-ceph-crashcollector-worker1.tncpqacluster2.cp.fyre.ib2tgrv   1/1     Running     0          4m44s
    rook-ceph-crashcollector-worker2.tncpqacluster2.cp.fyre.ibwzmqv   1/1     Running     0          3m17s
    rook-ceph-mds-myfs-a-6d68d4b46c-sm44x                             1/1     Running     0          3m18s
    rook-ceph-mds-myfs-b-7485957c69-8nzlv                             1/1     Running     0          3m17s
    rook-ceph-mgr-a-7d94f86f47-dxpsc                                  1/1     Running     0          3m42s
    rook-ceph-mon-a-d995b4677-htmsr                                   1/1     Running     0          4m54s
    rook-ceph-mon-b-fb7d5c6f4-c2qbh                                   1/1     Running     0          4m45s
    rook-ceph-mon-c-646b8b4d79-8n9vc                                  1/1     Running     0          4m12s
    rook-ceph-operator-59cbfb7c7c-qg6t2                               1/1     Running     0          6m35s
    rook-ceph-osd-0-7547b5ddd6-56wf9                                  1/1     Running     0          3m32s
    rook-ceph-osd-1-56546d7db7-hlvt9                                  1/1     Running     0          3m31s
    rook-ceph-osd-2-6ccc64d59b-hnbzr                                  1/1     Running     0          3m30s
    rook-ceph-osd-prepare-worker0.tncpqacluster2.cp.fyre.ibm.c5f898   0/1     Completed   0          3m41s
    rook-ceph-osd-prepare-worker1.tncpqacluster2.cp.fyre.ibm.csxzlg   0/1     Completed   0          3m41s
    rook-ceph-osd-prepare-worker2.tncpqacluster2.cp.fyre.ibm.cvsvc2   0/1     Completed   0          3m40s
    rook-discover-2ntqb                                               1/1     Running     0          6m12s
    rook-discover-9v4jk                                               1/1     Running     0          6m12s
    rook-discover-k4vdw                                               1/1     Running     0          6m12s
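To confirm that the new storage class can provision volumes, you can create a test PersistentVolumeClaim against it. The following claim is a minimal sketch; the claim name and requested size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc     # illustrative name
spec:
  accessModes:
    - ReadWriteMany         # CephFS supports shared (RWX) access
  resources:
    requests:
      storage: 1Gi          # illustrative size
  storageClassName: csi-cephfs
```

Apply the claim with oc apply -f pvc.yaml (pvc.yaml is an illustrative file name), confirm that its status becomes Bound with oc get pvc, and then delete the test claim.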