Installing Fusion Data Foundation
Install Fusion Data Foundation in local, dynamic, external, consumer, or provider modes.
Before you begin
- If you already have an installation of a Fusion Data Foundation cluster or a Red Hat® OpenShift® Data Foundation cluster, the IBM Fusion UI discovers it automatically. However, on the Data Foundation page, you can view only the Usable capacity, Health, and information about the storage nodes. If it is Red Hat OpenShift Data Foundation, migrate it to Fusion Data Foundation by following the instructions in Upgrading Red Hat OpenShift Data Foundation to IBM Storage Fusion Data Foundation.
- If you have a Rook-Ceph operator, then contact IBM Support for the expected behavior.
- You can use a maximum of nine storage devices per node. A higher number of storage devices leads to a longer recovery time after the loss of a node. This recommendation ensures that nodes stay below the cloud provider dynamic storage device attachment limits, and limits the recovery time after node failure with local storage devices.
- It is recommended to add nodes in multiples of three, each of them in a different failure domain. For deployments with three failure domains, you can scale up the storage by adding disks in multiples of three, with the same number of disks coming from nodes in each failure domain.
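The multiples-of-three rule above can be expressed as a small validation helper. This is a minimal sketch, not part of the product; the function name `valid_scaleup` and the fixed count of three failure domains are illustrative assumptions:

```shell
# Validate a scale-up plan: disks must be added in multiples of three,
# with an equal share coming from each failure domain (three assumed here).
# Illustrative sketch only; not an IBM Fusion command.
valid_scaleup() {
  local disks="$1" domains=3
  [ "$disks" -gt 0 ] && [ $(( disks % domains )) -eq 0 ]
}
```

For example, `valid_scaleup 6` succeeds (two disks per failure domain), while `valid_scaleup 4` fails because the disks cannot be spread evenly across three domains.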
- Run the following steps to ensure that there is no previously installed Fusion Data Foundation, Red Hat OpenShift Data Foundation, OpenShift Container Storage, or Rook-Ceph in this OpenShift cluster before you enable the Fusion Data Foundation service:
  - Verify that the namespace openshift-storage does not exist.
  - Run the following command to confirm that no resources exist:
    oc get storagecluster -A; oc get cephcluster -A
  - Run the following command to confirm that none of the nodes have a Fusion Data Foundation storage label or Red Hat OpenShift Data Foundation storage label:
    oc get node -l cluster.ocs.openshift.io/openshift-storage=
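The three verification steps above can be bundled into one script. This is a minimal sketch, assuming `oc` is logged in to the target cluster with sufficient privileges; the function name `check_clean` is illustrative and not part of the product:

```shell
# Pre-install sanity check: succeeds only if no previous storage
# installation (ODF, OCS, Rook-Ceph) is detectable on the cluster.
# Illustrative sketch; assumes an authenticated `oc` session.
check_clean() {
  # 1. The openshift-storage namespace must not exist.
  if oc get namespace openshift-storage >/dev/null 2>&1; then
    echo "FAIL: namespace openshift-storage already exists"; return 1
  fi
  # 2. No StorageCluster or CephCluster resources in any namespace.
  if [ -n "$(oc get storagecluster -A -o name 2>/dev/null)" ] || \
     [ -n "$(oc get cephcluster -A -o name 2>/dev/null)" ]; then
    echo "FAIL: existing StorageCluster or CephCluster resources found"; return 1
  fi
  # 3. No nodes carry the openshift-storage label.
  if [ -n "$(oc get node -l cluster.ocs.openshift.io/openshift-storage= -o name 2>/dev/null)" ]; then
    echo "FAIL: nodes with the openshift-storage label found"; return 1
  fi
  echo "OK: no previous installation detected"
}
```

Run `check_clean || exit 1` before enabling the service; any FAIL line indicates a leftover installation that must be removed first.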
- Infrastructure nodes
- Ensure that the following prerequisites are met:
- For infrastructure requirements, see Infrastructure requirements of Fusion Data Foundation.
- For Fusion Data Foundation resource requirements, see the ODF sizer tool. For the CPU and memory requirements per instance of the common Fusion Data Foundation components, see Resource requirements.
- If you want to deploy Fusion Data Foundation on an infra node, make sure that the worker node label exists:
  oc get node -l "node-role.kubernetes.io/worker="
- Ensure that there are no taints on the infra or compute nodes that are used as storage nodes.
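Both conditions above can be checked per node before assigning it to storage. This is a sketch under the same assumptions as the documented commands (an authenticated `oc` session); the function name `check_storage_node` is illustrative:

```shell
# Check that a node carries the worker label and has no taints before
# using it as a storage node. "$1" is the node name.
# Illustrative sketch; assumes an authenticated `oc` session.
check_storage_node() {
  local node="$1"
  # The worker label must be present on the node.
  if ! oc get node -l "node-role.kubernetes.io/worker=" -o name | grep -qx "node/$node"; then
    echo "FAIL: $node has no worker label"; return 1
  fi
  # The node must be untainted.
  if [ -n "$(oc get node "$node" -o jsonpath='{.spec.taints}' 2>/dev/null)" ]; then
    echo "FAIL: $node is tainted"; return 1
  fi
  echo "OK: $node is usable as a storage node"
}
```

Any taint reported here must be removed (or the node excluded) before the Fusion Data Foundation deployment.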
For more information about the limitations and considerations of using OVE as an alternative to OpenShift Container Platform, see OVE considerations.
About this task
- Both the Data Foundation service and the Global Data Platform service can coexist in the following scenarios:
  - Deploy the Data Foundation service without dedicated mode.
  - Deploy Data Foundation in dedicated mode first, and then deploy the Global Data Platform service.
- For Red Hat OpenShift Data Foundation deployed outside of IBM Fusion, you cannot add nodes or capacity from IBM Fusion. See Red Hat OpenShift Data Foundation Scaling storage document.
- For consumer mode cluster with Fusion Data Foundation, you do not install IBM Fusion spoke cluster or Fusion Data Foundation from the user interface. These installations are automatically run from the IBM Fusion HCI hub cluster.
- The Data Foundation service supports five deployment modes: local, dynamic, external, consumer, and provider. Only one mode can be used for the Data Foundation service in an OpenShift Container Platform cluster. For more information about the different deployment modes, see Storage cluster deployment approaches. The following table shows the supported platforms and the device types that each platform supports.
| Platform | Support |
| --- | --- |
| VMware | Local, dynamic, and external device |
| Bare Metal | Local, external device, consumer, and provider mode |
| Linux on IBM Z | Local, dynamic, and external device |
| IBM Power Systems | Local and external device |
| ROKS on IBM Cloud (ROKS on VPC and Classic Bare Metal) | Cloud-managed Fusion Data Foundation service. For the procedure to install, see Understanding Fusion Data Foundation. |
| Self-managed OpenShift Container Platform on Microsoft Azure | Dynamic |
| Self-managed OpenShift Container Platform on Amazon Web Services | Dynamic |
| Amazon Web Services ROSA and Amazon Web Services ROSA HCP | Dynamic |
| Self-managed OpenShift Container Platform on Google Cloud | Dynamic |

- For the IBM Fusion services and platform support matrix, see IBM Fusion Services platform support matrix.
Procedure
What to do next
Configure the Fusion Data Foundation storage service by following the instructions mentioned in Configuring Fusion Data Foundation.
Optionally, you can also set up encryption to use an external Key Management System. For the procedure to set up encryption, see Preparing to connect to an external KMS server in Fusion Data Foundation.