Configuring Fusion Data Foundation in provider mode

Configure Fusion Data Foundation in provider mode so that it acts as the provider and base storage for the host cluster.

Before you begin

  • The Fusion Data Foundation service is installed with the device type set to Local, as instructed in Installing Fusion Data Foundation.
  • Do not use GPU nodes to configure Fusion Data Foundation, because these nodes are dedicated to AI workloads.

Procedure

  1. On the Local storage page of the IBM Fusion web console, click Get started to initiate configuration of the Fusion Data Foundation service.
    It redirects you to the Create StorageSystem page in the Red Hat® OpenShift® Container Platform web console.
    Note: Ensure that you are logged in to the Red Hat OpenShift Container Platform web console as an administrator.
  2. On the Backing storage tab of the Create StorageSystem page, do as follows:
    1. Select Full Deployment for the Deployment type field.
    2. Select Create a new StorageClass using the local storage devices.
    3. Optional: Select Use Ceph RBD as the default StorageClass to avoid having to manually annotate a StorageClass.
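      If you do not select this option, you can annotate a storage class as the cluster default manually after deployment. For example, the following command sets the standard Kubernetes default-class annotation on the Ceph RBD storage class (the class name reflects the predefined name listed later in this procedure):
      oc patch storageclass ocs-storagecluster-ceph-rbd -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'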
    4. Optional: Select Use external PostgreSQL to use an external PostgreSQL instance [Technology preview].
      This provides a high availability solution for Multicloud Object Gateway (MCG), where the PostgreSQL pod is otherwise a single point of failure.
      Important:

      Fusion Data Foundation ships PostgreSQL images that are maintained by Red Hat, which are used to store metadata for the MCG. This PostgreSQL usage is at the application level.

      As a result, Fusion Data Foundation does not perform database-level optimizations or provide in-depth insights.

      You can use your own PostgreSQL that is well-maintained and optimized. Fusion Data Foundation supports external PostgreSQL instances.

      Any PostgreSQL-related issues that require code changes or deep technical analysis need to be addressed upstream. This might result in longer resolution times.

      1. Provide the following connection details:
        • Username
        • Password
        • Server name and Port
        • Database name
      2. Select Enable TLS/SSL to enable encryption for the PostgreSQL server.
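        For reference, these connection details correspond to a standard PostgreSQL connection URI of the following form, where all values are placeholders and the sslmode option applies when TLS/SSL is enabled:
        postgresql://<username>:<password>@<server-name>:<port>/<database-name>?sslmode=require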
    5. Click Next.
  3. On the Create local volume set tab, do as follows:
    1. Enter a name for the LocalVolumeSet and the StorageClass.
      The local volume set name appears as the default value for the storage class name. You can change the name.
      Remember: The storage class name must not exceed 40 characters and must not match any of the following predefined storage classes:
      • ocs-storagecluster-cephfs
      • ocs-storagecluster-ceph-rbd
      • ocs-storagecluster-ceph-rgw
      • openshift-storage.noobaa.io
      • ocs-storagecluster-ceph-rbd-virtualization
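      To list the storage class names that already exist on the cluster before you choose a name, you can run:
      oc get storageclass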
    2. Select one of the following options:
      • Disks on all nodes

        Uses the available disks that match the selected filters on all the nodes.

      • Disks on selected nodes

        Uses the available disks that match the selected filters only on the selected nodes.

      Important:
      • The flexible scaling feature is enabled only when the storage cluster that you create with three or more nodes is spread across fewer than the minimum requirement of three availability zones.

        For information about flexible scaling, see the knowledge base article Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled.

      • The flexible scaling feature is enabled at the time of deployment and cannot be enabled or disabled later.
      • If the nodes selected do not match the Fusion Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed.

        For minimum starting node requirements, see Resource requirements.
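        To check the allocatable CPU and memory on candidate nodes before you select them, you can run, for example:
        oc get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory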

    3. From the Disk Type list, select SSD/NVMe.
      Important: Ensure that the disk is cleaned before you select it. For instructions, see the Red Hat knowledge base article Uninstalling OpenShift Data Foundation in Internal mode.
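      A common way to clean a disk for local storage reuse is to wipe its partition table on the node; for example, where the device path is illustrative and must match your disk:
      sgdisk --zap-all /dev/sdb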
    4. Expand the Advanced section and set the following options:
      • Volume Mode: Block is selected as the default value.
      • Device Type: Select one or more device types from the drop-down list.
      • Disk Size: Set the minimum size of 100 GB and the maximum available size for the devices to be included.
      • Maximum Disks Limit: This indicates the maximum number of Persistent Volumes (PVs) that can be created on a node. If this field is left empty, PVs are created for all the available disks on the matching nodes.
    5. Click Next.
      A pop-up appears to confirm the creation of the LocalVolumeSet.
    6. Click Yes to continue.
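      The wizard generates a LocalVolumeSet resource in the openshift-local-storage namespace from these selections. The following is a minimal sketch of what it can look like; all names and values are illustrative:
      apiVersion: local.storage.openshift.io/v1alpha1
      kind: LocalVolumeSet
      metadata:
        name: localvolumeset
        namespace: openshift-local-storage
      spec:
        storageClassName: localvolumeset
        volumeMode: Block
        deviceInclusionSpec:
          deviceTypes:
            - disk
          deviceMechanicalProperties:
            - NonRotational
          minSize: 100G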
  4. On the Capacity and nodes tab, do as follows:
    1. Check the Available raw capacity.
      Available raw capacity is populated with the capacity value based on all the attached disks that are associated with the storage class. This value takes some time to appear. The Selected nodes list shows the nodes based on the storage class.
    2. In the Configure performance tab, select one of the following resource profiles:
      Important: Before selecting a resource profile, ensure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures.
      • Lean

        Use this in a resource-constrained environment with minimum resources that are lower than recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory.

      • Balanced (default)

        Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads.

      • Performance

        Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads.

      For more information about resource requirements, see Resource requirements for performance profiles.

      Note: You can configure the resource profile even after the deployment by selecting Configure performance from the options menu in the StorageSystems tab.
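      From the command line, the profile is recorded in the StorageCluster resource. A hedged example of changing it afterward, assuming the default resource name and the spec.resourceProfile field used by recent OpenShift Data Foundation releases:
      oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge -p '{"spec": {"resourceProfile": "lean"}}'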
    3. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for Fusion Data Foundation.
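      Tainting ensures that only Fusion Data Foundation workloads are scheduled on those nodes. The equivalent taint, applied manually, looks like the following sketch, where the node name is a placeholder:
      oc adm taint nodes <node-name> node.ocs.openshift.io/storage=true:NoSchedule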
    4. Click Next.
  5. On the Security and network tab, configure the following based on your requirement:
    1. Select Enable data encryption for block and file storage to enable encryption.
    2. Select one or both of the following Encryption levels:
      • Cluster-wide encryption

        Encrypts the entire cluster (block and file).

      • StorageClass encryption

        Creates encrypted persistent volumes (block only) by using an encryption-enabled storage class.
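        As a rough sketch, an encryption-enabled RBD storage class carries ceph-csi encryption parameters similar to the following; the class name and KMS connection name are illustrative, and the usual ceph-csi parameters (clusterID, pool, and secrets) are omitted for brevity:
        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: encrypted-rbd-example
        provisioner: openshift-storage.rbd.csi.ceph.com
        parameters:
          encrypted: "true"
          encryptionKMSID: <kms-connection-name>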

    3. Optional: Select the Connect to an external key management service checkbox.

      This is optional for cluster-wide encryption.

      1. From the Key Management Service Provider drop-down list, select either Vault or Thales CipherTrust Manager (using KMIP).

        If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step 5.c.iii.

      2. Select one of the following Authentication Methods:

        • Using Token authentication method
          1. Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number, and Token.
          2. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

            • Enter the Key Value secret path in Backend Path that is dedicated and unique to Fusion Data Foundation.
            • Optional: Enter TLS Server Name and Vault Enterprise Namespace.
            • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
          3. Click Save and skip to step 5.d.
        • Using Kubernetes authentication method
          1. Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number, and Role name.
          2. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:

            • Enter the Key Value secret path in Backend Path that is dedicated and unique to Fusion Data Foundation.
            • Optional: Enter TLS Server Name and Vault Enterprise Namespace.
            • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
          3. Click Save and skip to step 5.d.
      3. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, do the following steps:

        1. Enter a unique Connection Name for the Key Management service within the project.
        2. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:
          • Address: 123.34.3.2
          • Port: 5696
        3. Upload the Client Certificate, CA certificate, and Client Private Key.
        4. If StorageClass encryption is enabled, enter the Unique Identifier that is generated above, to be used for encryption and decryption.
        5. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
    4. Select Host as the Network option.
    5. Click Next.
  6. On the Review and create tab, review the configuration details.
    To modify any configuration settings, click Back to go back to the previous configuration page.
  7. Click Create StorageSystem.
  8. Create a load balancer service in the openshift-storage namespace as follows:
    To create a load balancer service in the openshift-storage namespace, you need the IP address pool name from the MetalLB operator. If the MetalLB operator is already installed and configured in your cluster, go to step 8.d. If not, complete steps 8.a, 8.b, and 8.c first.
    Note: Steps 8.a, 8.b, and 8.c are applicable starting from FDF 4.19 onward.
    1. From the Red Hat Operator Catalog, install MetalLB 4.16 or higher.
      Add the MetalLB operator to your cluster so that when a Service of type LoadBalancer is added to the cluster, MetalLB can add a fault-tolerant external IP address to the Service. For the procedure to install and validate MetalLB, see Red Hat Documentation. Go to your specific version of OpenShift Container Platform and check the MetalLB details.
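      If you prefer a declarative installation over the web console, the operator can be installed with an OLM Subscription similar to the following sketch; the channel name is an assumption (check your catalog), and a metallb-system Namespace and an OperatorGroup in that namespace are also required:
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: metallb-operator
        namespace: metallb-system
      spec:
        channel: stable
        name: metallb-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace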
    2. Create a MetalLB CR.
      For example:
      apiVersion: metallb.io/v1beta1
      kind: MetalLB
      metadata:
        name: metallb
        namespace: metallb-system
    3. Create an IP address pool.
      Note: Reserve a set of unused IPs from the same CIDR as the Bare Metal network for MetalLB. MetalLB uses these IPs for any load balancer service created on the cluster, and not only for the Hosted Control Plane.

      For example:

      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: metallb
        namespace: metallb-system
      spec:
        addresses:
        - 9.9.0.51-9.9.0.70
      Note: When you configure the load balancer, other applications can also use the advertised addresses. Ensure that you have enough addresses for all workloads on this cluster and for any other OpenShift Container Platform clusters that you create.
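      Depending on your network setup, MetalLB must also advertise the pool. For layer 2 mode, a minimal advertisement that references the pool from the previous example looks like this sketch:
      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: metallb
        namespace: metallb-system
      spec:
        ipAddressPools:
        - metallb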
    4. Define a load balancer service YAML.
      For example:
      kind: Service
      apiVersion: v1
      metadata:
        name: ocs-provider-server-load-balancer
        namespace: openshift-storage
        annotations:
          metallb.universe.tf/ip-allocated-from-pool: <address-pool-name-from-metallb>
      spec:
        ports:
          - name: provider
            protocol: TCP
            port: 50051
            targetPort: ocs-provider
            nodePort: 30756
        type: LoadBalancer
        selector:
          app: ocsProviderApiServer
      Remember: Replace <address-pool-name-from-metallb> with the IP address pool name configured in the MetalLB operator. For example, metallb is the IP address pool name used in the example in step 8.c.
    5. Apply the YAML.
      Command example:
      oc apply -f ocs-provider-server-load-balancer.yaml
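      To verify that MetalLB assigned an external IP from the pool, check the service; the EXTERNAL-IP column shows the allocated address:
      oc get service ocs-provider-server-load-balancer -n openshift-storage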
  9. Validate whether Fusion Data Foundation is successfully configured in provider mode by following the instructions that are mentioned in Verifying Fusion Data Foundation provider mode configuration.

What to do next