Adding file and object storage to an existing external Fusion Data Foundation cluster

Add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external Fusion Data Foundation cluster that was initially deployed to provide only block storage.

About this task

When Fusion Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims.
  • Persistent volume claims for block storage are provided directly from the external IBM Storage Ceph cluster.
  • Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external IBM Storage Ceph cluster (see the example claim after this list).
  • Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external IBM Storage Ceph cluster.
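For example, after file storage is enabled by this procedure, an application can request it with an ordinary persistent volume claim that references the CephFS storage class created for external mode. The following is a minimal, hypothetical sketch (the claim name, namespace, and size are placeholders; the storage class name is the one verified at the end of this task):
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cephfs-example-pvc      # hypothetical claim name
      namespace: my-app             # hypothetical application namespace
    spec:
      accessModes:
        - ReadWriteMany             # CephFS-backed volumes support shared (RWX) access
      resources:
        requests:
          storage: 10Gi             # hypothetical size
      storageClassName: ocs-external-storagecluster-cephfs
Save the claim as a YAML file and create it with oc apply -f <file>, then confirm that it reaches the Bound state with oc get pvc -n my-app.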

Before you begin

Ensure you have the following:
  • Fusion Data Foundation is installed and running on the corresponding OpenShift Container Platform version, and the Fusion Data Foundation cluster in external mode is in the Ready state.
  • Your external IBM Storage Ceph cluster is configured with one or both of the following (optional Ceph-side checks are sketched after this list):
    • A Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage.
    • A Metadata Server (MDS) pool for file storage.
  • You know the parameters that were used with the ceph-external-cluster-details-exporter.py script during the external Fusion Data Foundation cluster deployment.
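To optionally confirm these prerequisites from the Ceph side before you begin, the IBM Storage Ceph administrator can run standard status commands on any node that has the admin keyring; this is a sketch of a sanity check, not part of the documented procedure.
    ceph -s                         # overall health; the services section lists mgr, mds, and rgw daemons
    ceph fs status                  # CephFS file systems and their active and standby MDS daemons
    ceph osd pool ls | grep rgw     # Ceph Object Gateway pools (for example, .rgw.root and default.rgw.*)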

Procedure

  1. Download the Fusion Data Foundation version of the ceph-external-cluster-details-exporter.py Python script by using CSV or ConfigMap.
    Important: Starting with OpenShift Data Foundation version 4.19, downloading the ceph-external-cluster-details-exporter.py Python script using a CSV is no longer supported. The only supported method is using a ConfigMap.
    Command examples are as follows:
    • CSV
      oc get csv $(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print $1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode > ceph-external-cluster-details-exporter.py
    • ConfigMap
      oc get cm rook-ceph-external-cluster-script-config -n openshift-storage -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py
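    Optionally, confirm that the script downloaded and decoded correctly before you copy it to the external cluster; the --help option is an assumption based on the script using a standard command-line parser.
      head -n 5 ceph-external-cluster-details-exporter.py        # should look like Python source, not base64 or an error message
      python3 ceph-external-cluster-details-exporter.py --help   # lists the supported parameters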
  2. Update permission caps on the external IBM Storage Ceph cluster by running ceph-external-cluster-details-exporter.py on any client node in the external IBM Storage Ceph cluster.
    Your IBM Storage Ceph administrator needs to run this command.
    python3 ceph-external-cluster-details-exporter.py --upgrade \
    --run-as-user=ocs-client-name \
    --rgw-pool-prefix rgw-pool-prefix
    --run-as-user
    The client name used during Fusion Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.
    --rgw-pool-prefix
    The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used.
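    To confirm that the caps were applied, the IBM Storage Ceph administrator can optionally inspect the client entity afterwards; this check is a sketch, not part of the documented procedure.
      ceph auth get client.healthchecker     # use the custom client name instead if one was set during deployment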
  3. Generate and save configuration details from the external IBM Storage Ceph cluster.
    1. Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external IBM Storage Ceph cluster.
      python3 ceph-external-cluster-details-exporter.py \
      --rbd-data-pool-name rbd-block-pool-name \
      --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint \
      --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port \
      --run-as-user ocs-client-name \
      --rgw-endpoint rgw-endpoint \
      --rgw-pool-prefix rgw-pool-prefix
      --monitoring-endpoint
      (Optional) Accepts a comma-separated list of IP addresses of the active and standby mgr daemons that are reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
      --monitoring-endpoint-port
      (Optional) The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
      --run-as-user
      The client name used during Fusion Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.
      --rgw-endpoint
      (Optional) Provide this parameter to provision object storage through Ceph Object Gateway for Fusion Data Foundation.
      --rgw-pool-prefix
      The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used.
      User permissions are updated as shown:
      caps: [mgr] allow command config
      caps: [mon] allow r, allow command quorum_status, allow command version
      caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index
      Note: Ensure that all the parameters (including the optional arguments), except the Ceph Object Gateway details (if provided), are the same as those used during the deployment of Fusion Data Foundation in external mode.
    2. Save the output of the script in an external-cluster-config.json file.
      The following example output includes the additional configuration details that are generated for file and object storage, such as the cephfs and ceph-rgw storage classes and their related secrets.
      [{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
  4. Upload the generated JSON file.
    1. Log in to the OpenShift web console.
    2. Click Workloads > Secrets.
    3. Set Project to openshift-storage.
    4. Click on rook-ceph-external-cluster-details.
    5. Go to Actions > Edit Secret.
    6. Click Browse and upload the external-cluster-config.json file.
    7. Click Save.
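    Alternatively, the secret can be updated from the command line. The external_cluster_details key name is an assumption based on how the Rook external-cluster secret is typically structured; inspect the existing secret first and reuse whatever key it actually contains.
      oc get secret rook-ceph-external-cluster-details -n openshift-storage -o yaml    # note the key name under .data
      # Assuming the JSON payload is stored under the external_cluster_details key:
      oc create secret generic rook-ceph-external-cluster-details \
        --from-file=external_cluster_details=external-cluster-config.json \
        -n openshift-storage --dry-run=client -o yaml | oc apply -f -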

What to do next

  • To verify that the Fusion Data Foundation cluster is healthy and data is resilient, go to Storage > Data Foundation > Storage Systems tab and then click on the storage system name.
    • From the Overview > Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy.
  • If you added a Metadata Server for file storage:
    1. Go to Workloads > Pods and verify that the csi-cephfsplugin-* pods are newly created and are in the Running state.
    2. Go to Storage > Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created.
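    The same checks can optionally be made from the command line:
      oc get pods -n openshift-storage | grep csi-cephfsplugin    # all matching pods should be in the Running state
      oc get storageclass ocs-external-storagecluster-cephfs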
  • If you added the Ceph Object Gateway for object storage:

    1. Go to Storage > Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created.
    2. To verify that the Fusion Data Foundation cluster is healthy and data is resilient, go to Storage > Data Foundation > Storage Systems tab and then click on the storage system name.
    3. From the Object tab, confirm that Object Service and Data resiliency have a green tick indicating they are healthy.
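    These checks can also be made from the command line, and an optional, hypothetical smoke test can confirm that object bucket claims bind against the new storage class (the claim name and generated bucket name are placeholders). First verify the storage class:
      oc get storageclass ocs-external-storagecluster-ceph-rgw
    Then save the following as rgw-smoke-test-obc.yaml:
      apiVersion: objectbucket.io/v1alpha1
      kind: ObjectBucketClaim
      metadata:
        name: rgw-smoke-test                 # hypothetical claim name
        namespace: openshift-storage
      spec:
        generateBucketName: rgw-smoke-test   # hypothetical bucket name prefix
        storageClassName: ocs-external-storagecluster-ceph-rgw
    Apply it, confirm that it binds, and clean it up:
      oc apply -f rgw-smoke-test-obc.yaml
      oc get obc rgw-smoke-test -n openshift-storage     # the claim should reach the Bound phase
      oc delete -f rgw-smoke-test-obc.yaml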