Setting up Red Hat OpenShift Container Storage

You can set up Red Hat® OpenShift® Container Storage (OCS) on Red Hat OpenShift Container Platform for use with IBM® Maximo® Application Suite.

Tip: This task maps to the following Ansible role: ocs. For more information, see IBM Maximo Application Suite installation with Ansible collection.

Procedure

  1. Verify that Red Hat OpenShift is installed correctly.
    1. Log in to the Red Hat OpenShift web console.
    2. Go to Compute > Nodes.
    3. Review the existing labels for the three master nodes and three worker nodes.
  2. Change the labels of the nodes according to your storage requirements.
    1. Run the following command for each of the three worker nodes:
      
      oc label nodes <NodeNames> cluster.ocs.openshift.io/openshift-storage=''
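
      For example, with three hypothetical worker node names (replace them with the names from the oc get nodes output in your cluster):
      
      oc label nodes worker-0 cluster.ocs.openshift.io/openshift-storage=''
      oc label nodes worker-1 cluster.ocs.openshift.io/openshift-storage=''
      oc label nodes worker-2 cluster.ocs.openshift.io/openshift-storage=''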
      
  3. Optional: Enable file system access for containers on Red Hat Enterprise Linux-based nodes.
    Note: This step does not apply to hosts that are based on Red Hat Enterprise Linux CoreOS (RHCOS). Skip this step in that case.
    1. Run each of the following commands:
      
      # Confirm which RHEL 7 repositories are enabled on the node
      subscription-manager repos --list-enabled | grep rhel-7-server
      
      # Enable the base and extras repositories that provide the required packages
      subscription-manager repos --enable=rhel-7-server-rpms
      
      subscription-manager repos --enable=rhel-7-server-extras-rpms
      
      # Install the SELinux policy utilities and container SELinux policies
      yum install -y policycoreutils container-selinux
      
      # Allow containers to use the CephFS file system; -P makes the setting persistent
      setsebool -P container_use_cephfs on
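      
      To confirm that the SELinux boolean took effect, you can run the standard getsebool command (an optional check; the output must show container_use_cephfs --> on):
      
      getsebool container_use_cephfs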
      
  4. Install the Red Hat OpenShift Container Storage operator.
    Create the namespace by running the following command:
    
    oc create ns openshift-storage
    
    Specify a blank node selector for the openshift-storage namespace by running the following command:
    
    oc annotate namespace openshift-storage openshift.io/node-selector=
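
     To confirm that the annotation was applied, you can inspect the namespace (an optional check):
     
     oc get namespace openshift-storage -o yaml | grep node-selector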
    

    Then, to install the operator from the Red Hat OpenShift console:

    1. Open the Red Hat OpenShift web console.
    2. Go to Operators > OperatorHub.
    3. Search for and select the Red Hat OpenShift Container Storage operator and review the operator settings.
    4. Click Install. Wait for the installation to complete with a status of Succeeded.
  5. Install the local storage operator.
    Create the namespace:
    1. Open the Red Hat OpenShift web console.
    2. Go to Administration > Namespaces.
    3. Click Create Namespace.
    4. Under Name, enter local-storage.
    5. Under Default Network Policy, select No restrictions.
    6. Click Create.
    Install the operator:
    1. Go to Operators > OperatorHub.
    2. Search for and select the Local Storage operator and review the operator settings.
    3. Under Installed Namespace, select the local-storage namespace.
    4. Click Install. Wait for the installation to complete with a status of Succeeded.
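    You can also confirm from the command line that the Local Storage operator pods are running before you continue (pod names vary by operator version):
    
    oc get pods -n local-storage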
  6. Find available storage devices.
    1. List and verify the name of the nodes with the Red Hat OpenShift Container Storage label. Run the following command:
      
      oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
      
    2. Open a debug shell on each labeled node (for example, by running oc debug node/<NodeName>), and then use the lsblk command to view and note the paths of the attached devices (the devicePath values):
      
      sh-4.4# chroot /host
      sh-4.4# lsblk
      
    3. Create a local-storage-block.yaml file and update the storage class device paths with the device path values that you identified in the preceding step. A complete example resource is sketched at the end of this step.
      
      ...
      
        storageClassDevices:
          - storageClassName: localblock
            volumeMode: Block
            devicePaths: 
              - <updated_devicePath_01>
              - <updated_devicePath_02>
              - <updated_devicePath_03_etc>
      
      For example, a device path might be /dev/sdb.
    4. Create a local volume CR for block PVs by running the following command:
      
      oc create -f local-storage-block.yaml
      
    5. Verify that the new local storage class called localblock is created.
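    The following is a minimal complete sketch of the local-storage-block.yaml file. The resource name local-block is an illustrative assumption; the node selector reuses the cluster.ocs.openshift.io/openshift-storage label that you applied earlier, and the device paths are placeholders:
    
    apiVersion: local.storage.openshift.io/v1
    kind: LocalVolume
    metadata:
      name: local-block
      namespace: local-storage
    spec:
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cluster.ocs.openshift.io/openshift-storage
                operator: In
                values:
                  - ""
      storageClassDevices:
        - storageClassName: localblock
          volumeMode: Block
          devicePaths:
            - <updated_devicePath_01>
            - <updated_devicePath_02>
            - <updated_devicePath_03_etc>
    
    After the CR is created, you can confirm the storage class by running oc get sc localblock.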
  7. Create the storage cluster service that uses the local block storage class.
    1. Open the Red Hat OpenShift web console.
    2. Go to Operators > Installed Operators.
    3. Ensure that the selected project is openshift-storage.
    4. Select the Red Hat OpenShift Container Storage installed operator.
    5. From the operator details page, create a storage cluster service:
      1. In the Details tab, go to Provided APIs > OCS Storage Cluster.
      2. Click Create Instance and review the existing settings.
      3. On the Create Storage Cluster page, ensure that the appropriate nodes are selected.
      4. Select a minimum of 3, or a multiple of 3, worker nodes from the available list for use by the Red Hat OpenShift Container Storage service.
      5. Ensure that the Select Mode value is Internal-attached Device.
      6. Click Create.
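    You can also watch the storage cluster from the command line while it initializes (an optional check; it can take several minutes to reach a ready phase):
    
    oc get storagecluster -n openshift-storage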
  8. Verify the OCS deployment.
    Ensure that the Red Hat OpenShift Container Storage cluster is working correctly.
    1. Open the Red Hat OpenShift web console.
    2. Go to Overview > Persistent Storage.
    3. Ensure that the OCS Cluster status contains a green checkmark to indicate that it is working correctly.

    Ensure that you can reach the NooBaa Management Console. To do so, in the openshift-storage project, go to Networking > Routes. Then, click the URL that corresponds to noobaa-mgmt. Use a supported web browser, such as Google Chrome.

  9. Verify that the Red Hat OpenShift Container Storage-specific storage classes and PVs exist.
    To verify the storage classes:
    1. Open the Red Hat OpenShift web console.
    2. Go to Storage > Storage Classes.
    3. Ensure that the following storage classes exist:
      • ocs-storagecluster-ceph-rbd
      • ocs-storagecluster-cephfs
      • openshift-storage.noobaa.io
      • ocs-storagecluster-ceph-rgw
    To verify the PVs, run the following command:
    
    oc get pv
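
    You can also list the storage classes from the command line instead of checking the web console:
    
    oc get storageclass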
    

What to do next

You can create S3-compatible object storage for use by applications that require it, such as Maximo Assist.
  1. Create an objectstore.yaml file that defines the Ceph object store:
    
    ---
    apiVersion: ceph.rook.io/v1
    kind: CephObjectStore
    metadata:
      name: object
      namespace: openshift-storage
    spec:
      dataPool:
        compressionMode: ""
        crushRoot: ""
        deviceClass: ""
        erasureCoded:
          algorithm: ""
          codingChunks: 0
          dataChunks: 0
        failureDomain: host
        replicated:
          requireSafeReplicaSize: false
          size: 2
          targetSizeRatio: 0
      gateway:
        allNodes: false
        instances: 1
        placement: {}
        port: 8081
        resources: {}
        securePort: 0
        sslCertificateRef: ""
      metadataPool:
        compressionMode: ""
        crushRoot: ""
        deviceClass: ""
        erasureCoded:
          algorithm: ""
          codingChunks: 0
          dataChunks: 0
        failureDomain: host
        replicated:
          requireSafeReplicaSize: false
          size: 2
          targetSizeRatio: 0
      preservePoolsOnDelete: false
    
  2. Apply the objectstore.yaml file to the cluster to create the Ceph object store:
    
    oc apply -f objectstore.yaml
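
    You can optionally confirm that the object store resource was created and watch its status (it can take a few minutes to become ready):
    
    oc get cephobjectstore -n openshift-storage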
    
  3. Create the object store user by using the following YAML file, objectuser.yaml:
    
    ---
    apiVersion: ceph.rook.io/v1
    kind: CephObjectStoreUser
    metadata:
      name: object
      namespace: openshift-storage
    spec:
      displayName: s3-user3
      store: object
    
  4. Apply the YAML file to the cluster to create the Ceph object store user:
    
    oc apply -f objectuser.yaml
    
  5. Verify the service (svc) and secret information:
    
    oc get svc | grep -i rook-ceph-rgw-object
    
    Sample output
    rook-ceph-rgw-object                               ClusterIP      172.30.160.7     <none>        8081/TCP
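
     To confirm that the object store user secret was also created:
     
     oc get secret rook-ceph-object-user-object-object -n openshift-storage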
    
  6. Get the S3 object store access information:
    
     oc extract secret/rook-ceph-object-user-object-object -n openshift-storage --keys=AccessKey --to=-
    
    Sample output
    # AccessKey
    UBQZ3DMD7RZ2EXK38RHL
    
    
     oc extract secret/rook-ceph-object-user-object-object -n openshift-storage --keys=SecretKey --to=-
    
    Sample output
    # SecretKey
    9gWm811InDtx2WwUdpjDbkjxCNNlpwbN5KAGajIU
    
    
    oc extract secret/rook-ceph-object-user-object-object -n openshift-storage --keys=Endpoint --to=-
    
    Sample output
    # Endpoint
    http://rook-ceph-rgw-object.openshift-storage.svc:8081
    
  7. Create a route for external access with edge TLS termination by using the following YAML file, rgw.yaml:
    
    ---
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      labels:
        app: rook-ceph-rgw
        ceph_daemon_id: object
        rgw: object
        rook_cluster: openshift-storage
        rook_object_store: object
      name: rgw
      namespace: openshift-storage
    spec:
      host: rgw-openshift-storage.apps.cluster1.ibmmasdocs.com
      port:
        targetPort: http
      tls:
        termination: edge
      to:
        kind: Service
        name: rook-ceph-rgw-object
        weight: 100
      wildcardPolicy: None
    
  8. Apply the rgw.yaml file to the cluster:
    
    oc apply -f rgw.yaml
    
  9. Validate the route information:
    
    oc get route rgw
    
     Sample output (the values reflect the route that is defined in the preceding step)
     NAME   HOST/PORT                                            PATH   SERVICES               PORT   TERMINATION   WILDCARD
     rgw    rgw-openshift-storage.apps.cluster1.ibmmasdocs.com          rook-ceph-rgw-object   http   edge          None
As a result, the Ceph object storage access information is as follows:

URL
    https://rgw-openshift-storage.apps.cluster1.ibmmasdocs.com
Username (AccessKey)
    UBQZ3DMD7RZ2EXK38RHL
Password (SecretKey)
    9gWm811InDtx2WwUdpjDbkjxCNNlpwbN5KAGajIU
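
To confirm end-to-end access, you can point any S3-compatible client at the route. For example, with the AWS CLI installed on your workstation (a hypothetical smoke test that reuses the sample credentials and route host from this procedure; add --no-verify-ssl if the route uses a self-signed certificate):

    export AWS_ACCESS_KEY_ID=UBQZ3DMD7RZ2EXK38RHL
    export AWS_SECRET_ACCESS_KEY=9gWm811InDtx2WwUdpjDbkjxCNNlpwbN5KAGajIU
    aws s3 ls --endpoint-url https://rgw-openshift-storage.apps.cluster1.ibmmasdocs.com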