Installing Fusion Data Foundation on managed clusters
To configure storage between the two OpenShift Container Platform clusters, the Fusion Data Foundation operator must first be installed on each managed cluster.
Before you begin
Ensure that you have met the hardware requirements for Fusion Data Foundation external deployments. For a detailed description of the hardware requirements, see External mode requirements.
Procedure
- Install and configure the latest version of Fusion Data Foundation on each of the managed clusters.
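As an optional quick check that the operator installation succeeded, and assuming the operator is installed in the default openshift-storage namespace, you can list the ClusterServiceVersions and confirm that the Fusion Data Foundation entry reports the Succeeded phase:
# List operator CSVs and their phase; the Data Foundation entry should report Succeeded
# (assumes the default openshift-storage namespace)
oc get csv -n openshift-storage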
- After installing the operator, create a StorageSystem using the Full deployment type and Connect with external storage platform, where your Backing storage type is IBM Storage Ceph. For more information, see Deploying Data Foundation in external mode.
- Use the following flags with the ceph-external-cluster-details-exporter.py script. A minimal example invocation follows this list.
- At a minimum, you must use the following three flags:
--rbd-data-pool-name
- With the name of the RBD pool that was created during the IBM Storage Ceph deployment for OpenShift Container Platform. For example, the pool can be called rbdpool.
--rgw-endpoint
- Provide the endpoint in the format <ip_address>:<port>. It is the RGW IP of the RGW daemon running on the same site as the OpenShift Container Platform cluster that you are configuring.
--run-as-user
- With a different client name for each site.
- The following flags are optional if default values were used during the IBM Storage Ceph deployment:
--cephfs-filesystem-name
- With the name of the CephFS file system created during the IBM Storage Ceph deployment for OpenShift Container Platform. The default file system name is cephfs.
--cephfs-data-pool-name
- With the name of the CephFS data pool created during the IBM Storage Ceph deployment for OpenShift Container Platform. The default pool is called cephfs.data.
--cephfs-metadata-pool-name
- With the name of the CephFS metadata pool created during the IBM Storage Ceph deployment for OpenShift Container Platform. The default pool is called cephfs.meta.
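For illustration only, a minimal invocation using just the three required flags might look like the following; the pool name, endpoint address, and client name are placeholder values that must match your own deployment:
# Minimal sketch with only the three required flags (placeholder values)
# Without output redirection, the script prints the cluster details JSON to stdout
python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --rgw-endpoint 10.0.40.24:8080 --run-as-user client.odf.cluster1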
- Run the following command on the bootstrap node, ceph1, to get the IP for the RGW endpoints in datacenter1 and datacenter2.
ceph orch ps | grep rgw.objectgw
Example output:
rgw.objectgw.ceph3.mecpzm  ceph3  *:8080  running (5d)  31s ago  7w  204M  -  16.2.7-112.el8cp
rgw.objectgw.ceph6.mecpzm  ceph6  *:8080  running (5d)  31s ago  7w  204M  -  16.2.7-112.el8cp
host ceph3.example.com
host ceph6.example.com
Example output:
ceph3.example.com has address 10.0.40.24
ceph6.example.com has address 10.0.40.66
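If the host utility is not installed on the bootstrap node, an equivalent DNS lookup (not part of the original procedure) can be done with dig:
# Alternative lookup when the host utility is unavailable
dig +short ceph3.example.com
dig +short ceph6.example.com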
- Run the ceph-external-cluster-details-exporter.py script with the parameters that are configured for the first OpenShift Container Platform managed cluster, cluster1, on the bootstrap node ceph1.
python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --rgw-endpoint XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster1 > ocp-cluster1.json
Note: Modify the --rgw-endpoint value XXX.XXX.XXX.XXX:8080 according to your environment.
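Optionally, before copying the file anywhere, you can confirm that it contains well-formed JSON; python3 -m json.tool exits with a non-zero status on malformed input:
# Optional sanity check that the exported file is valid JSON
python3 -m json.tool ocp-cluster1.json > /dev/null && echo "ocp-cluster1.json is valid JSON"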
- Run the ceph-external-cluster-details-exporter.py script with the parameters that are configured for the second OpenShift Container Platform managed cluster, cluster2, on the bootstrap node ceph1.
python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --rgw-endpoint XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster2 > ocp-cluster2.json
Note: Modify the --rgw-endpoint value XXX.XXX.XXX.XXX:8080 according to your environment.
- Save the two files generated on the bootstrap node (ceph1), ocp-cluster1.json and ocp-cluster2.json, to your local machine.
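One way to copy the files, assuming SSH access to ceph1 and that both files were written to the remote home directory, is scp:
# Copy both generated files from the bootstrap node to the local machine
# (assumes SSH access to ceph1 and files in the remote home directory)
scp ceph1:ocp-cluster1.json ceph1:ocp-cluster2.json .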
- Use the contents of the ocp-cluster1.json file in the OpenShift Container Platform console on cluster1, where external Fusion Data Foundation is being deployed.
- Use the contents of the ocp-cluster2.json file in the OpenShift Container Platform console on cluster2, where external Fusion Data Foundation is being deployed.
- Review the settings and then select Create StorageSystem.
- Validate the successful deployment of Fusion Data Foundation on each managed cluster with the following command:
oc get storagecluster -n openshift-storage ocs-external-storagecluster -o jsonpath='{.status.phase}{"\n"}'
For the Multicloud Object Gateway (MCG):
oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{"\n"}'
Wait for the status result to be Ready for both queries on the Primary managed cluster and the Secondary managed cluster.
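If you prefer to poll rather than rerun the query manually, a small shell loop such as the following illustrative sketch waits until the phase reports Ready:
# Poll the external StorageCluster until its phase is Ready
until [ "$(oc get storagecluster -n openshift-storage ocs-external-storagecluster -o jsonpath='{.status.phase}')" = "Ready" ]; do
  echo "Waiting for StorageCluster to become Ready..."
  sleep 10
done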
What to do next
From the Red Hat OpenShift Web Console, navigate to Installed Operators > Fusion Data Foundation > Storage System > ocs-storagecluster-storagesystem > Resources and verify that the Status of the StorageCluster is Ready and has a green tick mark next to it.
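The console check can be approximated from the CLI as well; the resource names below follow the console path above and may differ in your deployment:
# CLI approximation of the console verification (resource names may vary)
oc get storagesystem -n openshift-storage
oc get storagecluster -n openshift-storage -o jsonpath='{.items[0].status.phase}{"\n"}'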