Configuring Data Foundation in external mode

From the Data Foundation user interface page, you are redirected to the OpenShift® console to create an external storage system.

Before you begin

  • Install the Fusion Data Foundation service with the device type set to external.
  • IBM Storage Ceph must have the Ceph Dashboard installed and configured. For more information, see Dashboard > Ceph Dashboard installation and access in the IBM Storage Ceph documentation.
  • It is recommended that the PG Autoscaler is enabled on the external IBM Storage Ceph cluster (see the example commands after this list).
  • The external Ceph cluster must have an existing RBD pool pre-configured for use. If one does not exist, contact your IBM Storage Ceph administrator to create one, as shown in the example after this list, before you proceed with the Fusion Data Foundation deployment. IBM recommends using a separate pool for each Fusion Data Foundation cluster.
  • Optional: If a zonegroup is created apart from the default zonegroup, you must add the hostname rook-ceph-rgw-ocs-external-storagecluster-cephobjectstore.openshift-storage.svc to that zonegroup, because Fusion Data Foundation sends S3 requests to the RADOS Object Gateways (RGWs) with this hostname.
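
  The following example, which assumes a hypothetical pool name fdf-rbd-pool, shows how an IBM Storage Ceph administrator might create and initialize a dedicated RBD pool and confirm that the PG Autoscaler is enabled for it:

    # Create and initialize a dedicated RBD pool for this Fusion Data Foundation cluster
    ceph osd pool create fdf-rbd-pool
    ceph osd pool application enable fdf-rbd-pool rbd
    rbd pool init fdf-rbd-pool
    # Enable and verify the PG Autoscaler on the pool
    ceph osd pool set fdf-rbd-pool pg_autoscale_mode on
    ceph osd pool autoscale-status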

Procedure

  1. Create a StorageSystem.
    1. Go to the Data Foundation page.
    2. Click Configure storage.
    3. Click Create StorageSystem in the newly opened OpenShift Container Platform console page.
    4. In the Backing storage page, select the following options:
      • Select Full deployment for the Deployment type option.
      • Select Connect an external storage platform from the available options.
      • Select IBM Storage Ceph for Storage platform.
    5. Click Next.
    6. In the Connection details page, provide the necessary information:
      1. Click the Download Script link to download the Python script for extracting the Ceph cluster details.
      2. To extract the IBM Storage Ceph cluster details, contact the IBM Storage Ceph administrator to run the downloaded Python script on an IBM Storage Ceph node that has the admin key.
        1. Run the following command on the IBM Storage Ceph node to view the list of available arguments:
          python3 ceph-external-cluster-details-exporter.py --help
          Important: Use python instead of python3 if the Ceph Storage cluster is deployed on a Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster.

          You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment).

          Note: Use the yum install cephadm command and then the cephadm command to deploy your IBM Storage Ceph cluster using containers. You must pull the IBM Storage Ceph container images by using the cephadm command, rather than using yum to install the Ceph packages onto the nodes.
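
          For illustration only, assuming a host with access to the IBM Storage Ceph repositories, a containerized deployment typically starts with commands similar to the following:

            # Install the cephadm utility, then bootstrap a containerized Ceph cluster
            yum install -y cephadm
            cephadm bootstrap --mon-ip <node-ip>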

          For more information, see IBM Storage Ceph documentation.

        2. To retrieve the external cluster details from the IBM Storage Ceph cluster, run the following command:
          python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name>  [optional arguments]

          For example:

          python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs
          Example with restricted auth permission:
          python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true
          Example of JSON output generated using the python script:
          [{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
        3. Save the JSON output to a file with a .json extension, for example by redirecting the script output to a file as shown after the following note.
          Note: For Fusion Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) that you upload using the JSON file remain unchanged on the IBM Storage Ceph external cluster after the storage cluster creation.
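          For example, assuming the sample arguments shown earlier, you can redirect the script output directly to a file; the file name ocs-external-cluster-details.json is only an illustrative choice:
            python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --rgw-endpoint xxx.xxx.xxx.xxx:xxxx > ocs-external-cluster-details.json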
        4. Run the following command only when there is a multi-tenant deployment in which the IBM Storage Ceph cluster is already connected to a Fusion Data Foundation deployment with a lower version:
          python3 ceph-external-cluster-details-exporter.py --upgrade
      3. Click Browse to select and upload the JSON file.

        The content of the JSON file is populated and displayed in the text box.

      4. Click Next.

        The Next button is enabled only after you upload the JSON file.

    7. On the Review and create page, review that all the details are correct.
      To modify any configuration settings, click Back to go back to the previous configuration page.
    8. Click Create StorageSystem.
  2. Verify the StorageSystem creation.
    1. From the OpenShift web console, go to Installed Operators > IBM Storage Fusion Data Foundation > Storage System > ocs-external-storagecluster-storagesystem > Resources.
    2. Verify that StorageCluster is in a Ready state and has a green tick.
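    Optionally, you can also check the storage cluster from the command line. The following sketch assumes that the OpenShift CLI (oc) is logged in to the cluster and that the default openshift-storage namespace and external StorageCluster name are in use:
      # Confirm that the external StorageCluster reports the Ready phase
      oc get storagecluster ocs-external-storagecluster -n openshift-storage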
  3. Verify the Data Foundation page.
    From the IBM Storage Fusion user interface, go to the Data Foundation page. The page shows that Data Foundation is deployed in external mode, and displays the Usable capacity and Health information.