Deploying Data Foundation external storage cluster

Use this procedure to deploy an external storage cluster to add storage or to expand your current internal storage cluster.

Before you begin

  • A Fusion Data Foundation cluster deployed in internal mode.
  • Ensure that both the OpenShift Container Platform and Fusion Data Foundation are upgraded to version 4.15.
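
To confirm the installed versions from the command line, you can use the following checks. This is a minimal sketch; it assumes cluster-admin access and that the operators run in the default openshift-storage namespace.

  oc get clusterversion              # reports the OpenShift Container Platform version
  oc get csv -n openshift-storage    # lists installed operator versions, including Data Foundation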

Procedure

  1. In the OpenShift web console, navigate to Storage > Data Foundation > Storage Systems tab.
  2. Click Create StorageSystem.
  3. In the Backing storage page, Connect an external storage platform is selected by default.
    1. Choose Red Hat Ceph Storage as the Storage platform from available options.
    2. Click Next.
  4. In the Security and Network page:
    1. Optional: To enable encryption, select the Enable encryption checkbox.
    2. Click the Download Script link to download the Python script for extracting Ceph cluster details.
    3. To extract the IBM Storage Ceph cluster details, contact the IBM Storage Ceph administrator to run the downloaded Python script on an IBM Storage Ceph node that has the admin key.
      1. Run the following command on the IBM Storage Ceph node to view the list of available arguments:
        python3 ceph-external-cluster-details-exporter.py --help
        Important: Use python instead of python3 if the Ceph Storage cluster is deployed on a Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster.

        You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment).

        Note: Use the yum install cephadm command and then the cephadm command to deploy your IBM Storage Ceph cluster using containers. You must pull the IBM Storage Ceph cluster container images using the cephadm command, rather than using yum to install the Ceph packages onto nodes.

        For more information, see the IBM Storage Ceph documentation.
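
        As an illustration, a containerized deployment can be bootstrapped as follows. This is a minimal sketch; the monitor IP address is a placeholder, and the registry login and container image options for IBM Storage Ceph are described in the IBM Storage Ceph documentation.

        yum install -y cephadm
        cephadm bootstrap --mon-ip <mon-ip>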

      2. To retrieve the external cluster details from the IBM Storage Ceph cluster, use one of the following options:
        • Use the config-file flag. This stores the parameters used during deployment.

          In new deployments, you can save the parameters used during deployment in a configuration file. This file can be used during the upgrade to preserve the parameters as well as add any additional parameters. Use the config-file flag to set the path to the configuration file.

          An example of a configuration saved in the /config.ini file is as follows:

          [Configurations]
          format = bash
          cephfs-filesystem-name = <filesystem-name>
          rbd-data-pool-name = <pool_name>
          ...

          Run the following command to set the path to the /config.ini file using the config-file flag:

          python3 ceph-external-cluster-details-exporter.py --config-file /config.ini
        • Retrieve the external cluster details from the IBM Storage Ceph cluster and pass the parameters for your deployment.

          python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name>  [optional arguments]

          For example:

          python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs

          Example with restricted auth permission:

          python3 ceph-external-cluster-details-exporter.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true

          Example of JSON output generated using the Python script:

          [
            {"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}},
            {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}},
            {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}},
            {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}},
            {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}},
            {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}},
            {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}},
            {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}},
            {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}},
            {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}},
            {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}},
            {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}},
            {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}
          ]
      3. Save the JSON output to a file with a .json extension, for example by redirecting the script output as shown below.
        Note: For Fusion Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) that you upload using the JSON file remain unchanged on the IBM Storage Ceph external cluster after the storage cluster creation.
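
        For example, you can redirect the script output directly to a file and confirm that it parses as valid JSON. This is a minimal sketch; the output file name and the pool name are placeholders.

        python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd_block_pool_name> > external-cluster-details.json
        python3 -m json.tool external-cluster-details.json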
      4. If this is a multi-tenant deployment in which the IBM Storage Ceph cluster is already connected to a Fusion Data Foundation deployment with a lower version, run the following command:
        python3 ceph-external-cluster-details-exporter.py --upgrade
      5. Click Browse to select and upload the JSON file.

        The content of the JSON file is populated and displayed in the text box.

      6. Click Next. The Next button is enabled only after you upload the JSON file.
  5. In the Review and create page, review the configuration details.
    To modify any configuration settings, click Back to return to the previous configuration page.
  6. Click Create StorageSystem.
  7. Verify the StorageSystem creation:
    1. Navigate to Storage > Data Foundation > Storage Systems tab and verify that you can view all storage clusters.
    2. Verify that all components for the external Fusion Data Foundation cluster are successfully installed. For instructions, see the verification documentation for external mode deployments. You can also spot-check the deployment from the command line, as shown below.
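
      As a quick command-line spot check, you can list the relevant resources. This is a minimal sketch; it assumes the default openshift-storage namespace and the StorageSystem and StorageCluster resource kinds that Data Foundation creates.

      oc get storagesystem -n openshift-storage    # the external storage system is listed
      oc get storagecluster -n openshift-storage   # the PHASE column reports Ready
      oc get pods -n openshift-storage             # pods are Running or Completed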

What to do next