Data Foundation

Install Fusion Data Foundation in local, dynamic, external, consumer, or provider modes.

Before you begin

  • If you already have an existing Fusion Data Foundation cluster or Red Hat® OpenShift® Data Foundation cluster, the IBM Fusion UI automatically discovers it. However, in the Data Foundation page, you can view only the Usable capacity, Health, and information about the storage nodes.
  • If you have a Rook-Ceph operator, then contact IBM Support for the expected behavior.
  • You can use a maximum of nine storage devices per node. A higher number of storage devices leads to a longer recovery time when a node is lost. This recommendation keeps nodes below the cloud provider dynamic storage device attachment limits, and limits the recovery time after node failure with local storage devices.

    It is recommended to add nodes in multiples of three, each of them in a different failure domain.

    For deployments with three failure domains, you can scale up the storage by adding disks in multiples of three, with the same number of disks coming from nodes in each failure domain.

  • Run the following steps to ensure that there is no previously installed Fusion Data Foundation, Red Hat OpenShift Data Foundation, OpenShift Container Storage, or Rook-Ceph in this OpenShift cluster before you enable the Fusion Data Foundation service:

    1. Verify that the namespace openshift-storage does not exist.
    2. Run the following command to confirm that no resources exist:
      oc get storagecluster -A; oc get cephcluster -A
    3. Run the following command to confirm that none of the nodes have a Fusion Data Foundation storage label or Red Hat OpenShift Data Foundation storage label:
      oc get node -l cluster.ocs.openshift.io/openshift-storage= 
  • Infrastructure nodes
    Ensure that the following prerequisites are met:
    • For infrastructure requirements, see Infrastructure requirements of Fusion Data Foundation.
    • For Fusion Data Foundation resource requirements, see the ODF sizer tool. For the per-instance CPU and memory requirements of common Fusion Data Foundation components, see Resource requirements.
    • If you want to deploy Fusion Data Foundation on an infra node, make sure that the worker node label exists.
      oc get node -l "node-role.kubernetes.io/worker="
    • Ensure that there are no taints on the infra or compute nodes that are used as storage nodes.
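
For the checks in this list that do not include a command (the openshift-storage namespace check and the taint check), the following oc commands are one possible way to run them. This is a minimal sketch, assuming that the oc CLI is logged in to the target OpenShift cluster:
      # Expect a "NotFound" error here if no previous installation exists
      oc get namespace openshift-storage
      # Nodes that are used as storage nodes are expected to show <none> in the TAINTS column
      oc get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints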

About this task

For an overview of Fusion Data Foundation, see Fusion Data Foundation. For more information about the different deployment modes, see Storage cluster deployment approaches.

  • Both Data Foundation service and Global Data Platform service can coexist in the following scenarios:
    • Deploy Data Foundation service without dedicated mode.
    • Deploy Data Foundation in dedicated mode first, and then deploy Global Data Platform service.
  • For Red Hat OpenShift Data Foundation deployed outside of IBM Fusion, you cannot add nodes or capacity from IBM Fusion. See the Red Hat OpenShift Data Foundation Scaling storage documentation.
  • For a consumer mode cluster with Fusion Data Foundation, you do not install the IBM Fusion spoke cluster or Fusion Data Foundation from the user interface. These installations are run automatically from the IBM Fusion HCI System hub cluster.
  • Data Foundation service supports five deployment modes: Local, Dynamic, External, Consumer, and Provider. Only one device type can be used for Data Foundation service in an OpenShift Container Platform cluster.

    The following table shows supported platforms and which device types are supported in each platform.
    Platform: Supported device types
    VMware: Local, dynamic, and external device
    Bare Metal: Local device, external device, consumer mode, and provider mode
    Linux on IBM Z: Local, dynamic, and external device
    IBM Power Systems: Local and external device
    ROKS on IBM Cloud (ROKS on VPC and Classic Bare Metal): Cloud-managed Fusion Data Foundation service. For the procedure to install, see Understanding Fusion Data Foundation.
    Self-managed OpenShift Container Platform on Microsoft Azure: Dynamic storage class
    Self-managed OpenShift Container Platform on Amazon Web Services: Dynamic storage class
    Amazon Web Services ROSA and Amazon Web Services ROSA HCP: Dynamic storage class
    Self-managed OpenShift Container Platform on Google Cloud: Dynamic storage class
  • For IBM Fusion services and platform support matrix, see IBM Fusion Services platform support matrix.

Procedure

  1. Go to the Services page in the IBM Fusion user interface.
  2. In the Available section, click the Data Foundation tile.
  3. In the Data Foundation page, go through the details about the service and click Install.
  4. In the Install service window, select the Device type. The available options are Local, Dynamic, or External, depending on which options your platform supports.

    If you want to use storage through a provider and work with Data Foundation service storage as a consumer, no Install button is available. Instead, a notification is displayed that asks you to connect to the provider cluster.

  5. Choose whether to enable Automatic updates and click Install.
    Note: If you select Automatic updates, then IBM Fusion automatically upgrades the Fusion Data Foundation version.

    For example, from 4.14.x to 4.14.y and 4.16.x to 4.16.y, and across subscription channels, such as 4.14.x to 4.15.y and 4.15.x to 4.16.x. For the storage support matrix, see Data services version support matrix.

    However, it does not automatically convert Red Hat OpenShift Data Foundation to Fusion Data Foundation.

    The Fusion Data Foundation operator starts to deploy, and you can see Data Foundation in the Installed section of the Services page. Initially, the status shows Installing and the progress of the installation is shown as a percentage. After the installation completes successfully, the status changes to Running, and a Data Foundation page is added to the menu.
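
    If Automatic updates is enabled and you later want to confirm which subscription channel the operator follows, one possible check from the command line is shown below. The openshift-storage namespace is an assumption; verify the namespace that is used in your environment:
      # Shows the channel that the operator subscription tracks
      oc get subscription -n openshift-storage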
  6. To validate the installation, do the following steps in the OpenShift Container Platform web console:
    • Go to Installed Operators to check whether the Fusion Data Foundation operator is listed and its Status shows as Succeeded.
    • Additionally, if the installed platform is Bare Metal, Linux on IBM Z, or VMware with local devices, check whether the Local Storage operator is installed and its status shows as Succeeded.
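    As an alternative to the web console, you can also list the operator ClusterServiceVersions from the command line. This is a sketch, assuming that the operators run in the openshift-storage namespace:
      # Each operator is expected to report PHASE as Succeeded
      oc get csv -n openshift-storage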
  7. Verify the Fusion Data Foundation service installation.
    1. To verify the installation of Fusion Data Foundation on IBM Fusion, perform the checks specified in Verify the Fusion Data Foundation.
    2. Additionally, run the following command to verify that the Fusion Data Foundation service is installed successfully:
      oc describe odfmanager/odfmanager -n <Fusionns>

      Update <Fusionns> with your IBM Fusion namespace value.
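
      You can also reuse the resource checks from the pre-installation steps to confirm that the storage cluster resources now exist, for example:
      # Both resources are created by the Fusion Data Foundation deployment
      oc get storagecluster -A; oc get cephcluster -A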

    Table 1. Install states of the Fusion Data Foundation service
    Install state: Description
    Installing: Fusion Data Foundation installation is ongoing, and there are no errors yet.
    Failing: There is an installation error, but IBM Fusion retries the Fusion Data Foundation installation. If the problem is solved, this status changes to Installing or Completed.
    Completed: Fusion Data Foundation installation completed successfully.
    Table 2. Upgrade states of the Fusion Data Foundation service
    Upgrade state: Description
    Not started: Fusion Data Foundation upgrade has not started yet.
    Upgrading: Fusion Data Foundation upgrade is ongoing, and there are no errors yet.
    Failing: There is an upgrade error, but IBM Fusion retries the Fusion Data Foundation upgrade. If the problem is solved, this status changes to Upgrading or Completed.
    Completed: Fusion Data Foundation upgrade completed successfully.
    Table 3. Health states of the Fusion Data Foundation service
    State: Description
    Installing: Fusion Data Foundation installation is ongoing. For more information about the installation status, see Table 1.
    Upgrading: Fusion Data Foundation upgrade is ongoing. For more information about the upgrade status, see Table 2.
    Healthy: Fusion Data Foundation installation completed successfully, and the service is in a normal state.
    Degraded: Fusion Data Foundation installation completed successfully, but the service is not in a normal state.

What to do next

To install and work with other dependent services, configure storage in your environment. Provision the amount of usable capacity that your environment needs and select the nodes to run the Data Foundation workloads. For more information about the configuration, see Configuring Data Foundation storage.
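
After you configure storage, one way to confirm the result from the command line is to check which nodes carry the storage label and which storage classes were created. This is a sketch; the label is the same one that the pre-installation check in this topic looks for:
    # Nodes that run Data Foundation workloads carry the storage label
    oc get node -l cluster.ocs.openshift.io/openshift-storage=
    # Storage classes created by the Data Foundation service become available for provisioning
    oc get storageclass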

Optionally, you can also set up encryption to use an external Key Management System (KMS). For the procedure to set up encryption, see Preparing to connect to an external KMS server in Fusion Data Foundation.

Note: You can view the Usable capacity and Health for an externally discovered Fusion Data Foundation but cannot configure storage.