Creating a Fusion Data Foundation cluster

Before you begin

Ensure that all the requirements that are mentioned in the Requirements for enabling stretch cluster section are met.

About this task

Create a Fusion Data Foundation cluster after you install the Fusion Data Foundation operator.

Procedure

  1. In the OpenShift Web Console, click Operators > Installed Operators to view all the installed operators.

    Ensure that the Project selected is openshift-storage.

  2. Click the Fusion Data Foundation operator, and then click Create StorageSystem.
  3. In the Backing storage page, select the Create a new StorageClass using the local storage devices option.
  4. Click Next.
    Important: You are prompted to install the Local Storage Operator if it is not already installed. Click Install, and follow the procedure as described in Installing Local Storage Operator.
  5. In the Create local volume set page, provide the following information:
    1. Enter a name for the LocalVolumeSet and the StorageClass.

      By default, the local volume set name is also used as the storage class name. You can change the storage class name.

    2. Choose one of the following:
      • Disks on all nodes

        Uses the available disks that match the selected filters on all the nodes.

      • Disks on selected nodes
        Uses the available disks that match the selected filters only on selected nodes.
        Important:

        If the nodes selected do not match the Fusion Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed.

        For minimum starting node requirements, see the Resource requirements section in the Planning guide.

    3. Select SSD or NVMe to build a supported configuration. You can select HDDs for unsupported test installations.
    4. Expand the Advanced section and set the following options:
      Volume Mode: Block is selected by default.
      Device Type: Select one or more device types from the dropdown list.
      Disk Size: Set the minimum size of 100 GB for the devices and the maximum available size of the devices that need to be included.
      Maximum Disks Limit: This indicates the maximum number of PVs that can be created on a node. If this field is left empty, PVs are created for all the available disks on the matching nodes.
    5. Click Next.
      A message window is displayed, prompting you to confirm the creation of the LocalVolumeSet.
    6. Click Yes to continue.
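    The wizard turns these selections into a LocalVolumeSet custom resource. The following is a minimal sketch of what that resource can look like, based on the Local Storage Operator's LocalVolumeSet CRD; the name local-block, the node name, and the filter values are illustrative assumptions, not output captured from a real deployment:
      apiVersion: local.storage.openshift.io/v1alpha1
      kind: LocalVolumeSet
      metadata:
        name: local-block                  # LocalVolumeSet name from step 5.a (illustrative)
        namespace: openshift-local-storage
      spec:
        storageClassName: local-block      # StorageClass name from step 5.a
        volumeMode: Block                  # default Volume Mode
        deviceInclusionSpec:
          deviceTypes:                     # Device Type selection
          - disk
          minSize: 100G                    # Disk Size minimum
        maxDeviceCount: 10                 # Maximum Disks Limit; omit to use all matching disks
        nodeSelector:                      # present only for Disks on selected nodes
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - worker-0                   # illustrative node name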
  6. In the Capacity and nodes page, configure the following:
    1. Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This value can take a few minutes to appear.
      The Selected nodes list shows the nodes based on the storage class.
    2. Select the Enable arbiter checkbox if you want to use stretch clusters. This option is available only when all the prerequisites for arbiter are fulfilled and the selected nodes are populated. For more information, see Arbiter stretch cluster requirements in Requirements for enabling stretch cluster.
    3. Select the arbiter zone from the dropdown list.
    4. Choose a performance profile for Configure performance.
      You can also configure the performance profile after the deployment using the Configure performance option from the options menu of the StorageSystems tab.
      Note: Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles.
    5. Click Next.
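    In the upstream OpenShift Data Foundation CRD that Fusion Data Foundation builds on, the selected performance profile is recorded in the StorageCluster resource. A minimal sketch, assuming the resourceProfile field and using balanced as an illustrative value:
      apiVersion: ocs.openshift.io/v1
      kind: StorageCluster
      metadata:
        name: ocs-storagecluster
        namespace: openshift-storage
      spec:
        resourceProfile: balanced   # assumed values: lean, balanced, performance
        [..]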
  7. Optional: In the Security and network page, configure the following based on your requirement:
    1. To enable encryption, select Enable data encryption for block and file storage.
    2. Select one of the following encryption levels:
      • Cluster-wide encryption to encrypt the entire cluster (block and file).
      • StorageClass encryption to create encrypted persistent volumes (block only) using an encryption-enabled storage class.
    3. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
      1. From the Key Management Service Provider drop-down list, select either Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step d.
      2. Select an Authentication Method.
        • Using Token authentication method
        • Using Kubernetes authentication method
      3. Enter a unique Connection Name, the host Address of the Vault server ('https://<hostname or ip>'), the Port number, and the Token.
      4. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration.
        • Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation.
        • Optional: Enter TLS Server Name and Vault Enterprise Namespace.
        • Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
        • Click Save and skip to step e.
    4. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below:
      1. Enter a unique Connection Name for the Key Management service within the project.
      2. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:

        • Address: 123.34.3.2
        • Port: 5696
      3. Upload the CA Certificate, Client Certificate and Client Private Key.

      4. If StorageClass encryption is enabled, enter the Unique Identifier, generated above, to be used for encryption and decryption.
      5. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
    5. Network is set to Default (OVN) if you are using a single network. You can switch to Custom (Multus) if you are using multiple network interfaces, and then configure the following:
      1. Select a Public Network Interface from the dropdown.
      2. Select a Cluster Network Interface from the dropdown.
      Note: If you are using only one additional network interface, select the single NetworkAttachmentDefinition, that is, ocs-public-cluster, for the Public Network Interface, and leave the Cluster Network Interface blank.
    6. Click Next.
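    The encryption and network choices from this page also surface in the StorageCluster resource. The following sketch assumes cluster-wide encryption with an external KMS and a Custom (Multus) network that reuses the ocs-public-cluster NetworkAttachmentDefinition from the note above; the field names follow the upstream OpenShift Data Foundation CRD and the selector value is illustrative:
      spec:
        encryption:
          clusterWide: true    # Enable data encryption for block and file storage
          kms:
            enable: true       # Connect to an external key management service
        network:
          provider: multus     # Custom (Multus)
          selectors:
            public: openshift-storage/ocs-public-cluster
            # cluster is left out when only one additional network interface is used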
  8. In the Data Protection page, click Next.
  9. In the Review and create page, review the configuration details.
    To modify any configuration settings, click Back to go back to the previous configuration page.
  10. Click Create StorageSystem.
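    Clicking Create StorageSystem creates a StorageSystem resource that points at the backing StorageCluster. A minimal sketch, assuming the default resource names that the verification steps below also use:
      apiVersion: odf.openshift.io/v1alpha1
      kind: StorageSystem
      metadata:
        name: ocs-storagecluster-storagesystem
        namespace: openshift-storage
      spec:
        kind: storagecluster.ocs.openshift.io/v1
        name: ocs-storagecluster
        namespace: openshift-storage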

Verification steps

    • Verify the final Status of the installed storage cluster:
      1. In the OpenShift Web Console, navigate to Installed Operators > Fusion Data Foundation > Storage System > ocs-storagecluster-storagesystem > Resources.
      2. Verify that the Status of StorageCluster is Ready and has a green tick mark next to it.
    • For arbiter mode of deployment:

      1. In the OpenShift Web Console, navigate to Installed Operators > Fusion Data Foundation > Storage System > ocs-storagecluster-storagesystem > Resources > ocs-storagecluster.
      2. In the YAML tab, search for the arbiter key in the spec section and ensure enable is set to true.
        spec:
          arbiter:
            enable: true
          [..]
          nodeTopologies:
            arbiterLocation: arbiter # arbiter zone
          storageDeviceSets:
          - config: {}
            count: 1
            [..]
            replica: 4
        status:
          conditions:
          [..]
          failureDomain: zone
    • To verify that all the components for Fusion Data Foundation are successfully installed, see Verifying OpenShift Data Foundation deployment.

What to do next

Installing Zone Aware Sample Application