Deploying a Fusion Data Foundation external storage cluster
Use this procedure to deploy an external storage cluster to add storage or to expand your current internal storage cluster.
Before you begin
- A Fusion Data Foundation cluster deployed in internal mode.
- Ensure that both the OpenShift Container Platform and Fusion Data Foundation are upgraded to version 4.15.
Procedure
- In the OpenShift web console, navigate to Storage > Data Foundation > Storage Systems tab.
- Click Create StorageSystem.
- In the Backing storage page, Connect an external storage platform is selected by default.
- Choose Red Hat Ceph Storage as the Storage platform from the available options.
- Click Next.
- In the Security and Network page:
- Optional: To enable encryption, select the Enable encryption checkbox.
- Click the Download Script link to download the Python script for extracting Ceph cluster details.
- To extract the IBM Storage Ceph cluster details, contact the IBM Storage Ceph administrator to run the downloaded Python script on an IBM Storage Ceph node with the admin key.
- Run the following command on the IBM Storage Ceph node to view the list of available arguments:
python3 ceph-external-cluster-details-exporter.py --help
Important: Use python instead of python3 if the Ceph Storage cluster is deployed on a Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster. You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment).
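For example, on a RHEL 7.x node the same help command becomes:
python ceph-external-cluster-details-exporter.py --help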
Note: Use the yum install cephadm command and then the cephadm command to deploy your IBM Storage Ceph cluster using containers. You must pull the IBM Storage Ceph cluster container images using the cephadm command, rather than using yum to install the Ceph packages onto nodes. For more information, see the IBM Storage Ceph documentation.
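As a minimal sketch of that flow, assuming a fresh node and using <node-ip> as a placeholder for its IP address (see the IBM Storage Ceph documentation for the authoritative bootstrap options):
# install the cephadm tool from the configured repositories
yum install -y cephadm
# bootstrap the cluster; cephadm pulls the container images itself
cephadm bootstrap --mon-ip <node-ip>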
- To retrieve the external cluster details from the IBM Storage Ceph cluster, use one of the following options:
- Use the config-file flag, which stores the parameters used during deployment. In new deployments, you can save the parameters used during deployment in a configuration file; during an upgrade, this file preserves those parameters and lets you add any additional parameters. Use the config-file flag to set the path to the configuration file.
An example of a configuration saved in /config.ini is as follows:
[Configurations]
format = bash
cephfs-filesystem-name = <filesystem-name>
rbd-data-pool-name = <pool_name>
...
Run the following command to set the path to /config.ini using the config-file flag:
python3 ceph-external-cluster-details-exporter.py --config-file /config.ini
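For illustration, a fuller configuration file might mirror the command-line flags used elsewhere in this procedure as keys; every key below other than the two shown above is an assumption about the script's naming, so verify the supported keys against the --help output:
[Configurations]
format = bash
cephfs-filesystem-name = myfs
rbd-data-pool-name = ceph-rbd
# the keys below are assumed to mirror --monitoring-endpoint, --rgw-endpoint, and --run-as-user
monitoring-endpoint = xxx.xxx.xxx.xxx
rgw-endpoint = xxx.xxx.xxx.xxx:xxxx
run-as-user = client.ocs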
- Alternatively, retrieve the external cluster details from the IBM Storage Ceph cluster by passing the parameters for your deployment directly on the command line:
python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> [optional arguments]
For example:
Example with restricted auth permission:
python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs
Example of JSON output generated using the Python script:
python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true
[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "xxx.xxx.xxx.xxx:xxxx", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "<fs-id>", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "<user-id>", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "<user-key>"}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "<pool>"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "xxx.xxx.xxx.xxx", "MonitoringPort": "xxxx"}}, {"name": "rook-ceph-dashboard-link", "kind": "Secret", "data": {"userID": "ceph-dashboard-link", "userKey": "<user-key>"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "<user-key>"}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "<admin-key>"}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "<admin-key>"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "cephfs", "pool": "cephfs_data"}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "xxx.xxx.xxx.xxx:xxxx", "poolPrefix": "default"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "<access-key>", "secretKey": "<secret-key>"}}]
- Save the JSON output to a file with a .json extension.
Note: For Fusion Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) that are uploaded using the JSON file remain unchanged on the IBM Storage Ceph external cluster after the storage cluster creation.
- Run the following command when there is a multi-tenant deployment in which the IBM Storage Ceph cluster is already connected to a Fusion Data Foundation deployment with a lower version:
python3 ceph-external-cluster-details-exporter.py --upgrade
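If the original connection was made with a non-default user, such as client.ocs in the restricted-auth example above, the same identity likely needs to be passed during the upgrade run as well; the flag combination below is an assumption, so confirm it against the --help output:
# assumed combination: re-export for upgrade while keeping the non-default user
python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user client.ocs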
- Click Browse to select and upload the JSON file.
The content of the JSON file is populated and displayed in the text box.
- Click Next, which is enabled only after you upload the JSON file.
- In the Review and create page, review the configuration details. To modify any configuration settings, click Back to return to the previous configuration page.
- Click Create StorageSystem.
- Verify the StorageSystem creation
- Navigate to Storage > Data Foundation > Storage Systems tab and verify that you can view all storage clusters.
- Verify that all components for the external Fusion Data Foundation are successfully installed. See for instructions.
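As an additional spot check from the command line, assuming the default openshift-storage namespace used by Data Foundation (resource names can differ in your deployment):
# list the storage systems and the external storage cluster (namespace assumed)
oc get storagesystems -n openshift-storage
oc get storagecluster -n openshift-storage
# all pods in the namespace should be Running or Completed
oc get pods -n openshift-storage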