Deployment configurations
IBM Fusion HCI is available in stand-alone rack, multi-rack, and disaster recovery deployment configurations. The multi-rack topology offers these variants: high-availability multi-rack, 3 zone high-availability multi-rack, and expansion rack.
Stand-alone rack
The stand-alone rack of IBM Fusion HCI consists of management switches, high-speed switches, PDUs, storage nodes, compute nodes, and GPU nodes that form a single OpenShift® cluster. You can scale up the capacity of a stand-alone rack by adding nodes. A stand-alone rack does not provide fault tolerance against a complete rack failure.
For more information about the components of a stand-alone rack, see Hardware overview.
High-availability multi-rack and 3 zone HA rack topology
IBM Fusion HCI comes with bootstrapping software installed at the factory for installing Fusion HCI in the data center. For a high-availability multi-rack deployment, the IBM support representative completes the initial verification and physically connects a minimum of three racks to the network and power. They then perform the network setup of the first two racks, which are called the auxiliary racks. The network setup of the third rack, also known as the last rack, is done after the first two. This network setup validates the hardware and wiring of each rack, connects the rack to the data center network, and configures all the default nodes. During the network setup of the last rack, the software automatically collects the network configuration details from the first two racks and shows a consolidated appliance view. After the network setup is complete for all three racks, the control plane is created across the three racks. In this setup, the cluster remains operational even when a rack fails, which ensures resilience and continuity. The IBM SSR provides the stage 2 URL to the customer to continue with the next phase of the installation.
- Single Zone HA: The OpenShift control plane is distributed across three racks. The Fusion Data Foundation replicas are distributed across the three racks that contain storage nodes.
The following diagram shows a single physical availability zone. In this setup, IBM Fusion HCI uses each rack as a failure domain.
Figure 1. Three failure domains within a single physical availability zone
- 3-Zone HA: IBM Fusion HCI supports 3-zone HA, where the racks are distributed across distinct locations within a region that are designed to be isolated from failures in the other zones. For 3-zone HA, each rack must be connected to the customer switches in such a way that all racks are in the same broadcast domain.
In a high-availability multi-rack or three Availability Zone (AZ) deployment, each rack is automatically assigned a distinct zone label, and all nodes within a rack share that label. This effectively creates three distinct zones for load distribution; a sketch for verifying the labeling from the cluster follows at the end of this section.
Deploy racks in each availability zone to achieve a three-AZ deployment of IBM Fusion HCI. The OpenShift control plane is distributed across racks in each of the three AZs, and Fusion Data Foundation replicas are distributed across storage nodes in each AZ. Storage nodes are placed in the same rack as the control nodes. Additional racks of compute-only or GPU-only servers provide additional worker nodes and are placed in each AZ.
The following diagram shows how the OpenShift control plane and storage replicas are stretched across racks in separate availability zones.
Figure 2. OpenShift control plane and storage replicas are stretched across racks in separate availability zones
For both high-availability multi-rack and 3-Zone HA, Fusion Data Foundation is the only supported storage type. Fusion Data Foundation spreads its resources across the three racks so that you do not lose data when a rack goes down, whether because of maintenance or a power failure. The three side-by-side (adjacent) appliances act as a single unit and host a single Fusion Data Foundation storage instance and OpenShift Container Platform cluster. To achieve high availability, one control node (Rack Unit 2) is available in each of the three racks, so the cluster stays available even when one rack goes down. The components in each of the three racks are the same as in a single rack.
The service node can be made available in any one of the racks. The rack to which the service node is connected is designated and used as the base rack.
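To confirm the zone labeling described for high-availability multi-rack and 3-Zone HA deployments, you can group the cluster nodes by their zone label. The following is a minimal sketch using the Kubernetes Python client; it assumes kubeconfig access to the cluster and the standard topology.kubernetes.io/zone label key, which is an assumption about the label that IBM Fusion HCI applies rather than something stated in this document.

```python
# Sketch: group cluster nodes by zone label to inspect the three failure domains.
# Requires the `kubernetes` Python package and a kubeconfig for the cluster.
from collections import defaultdict
from kubernetes import client, config

ZONE_LABEL = "topology.kubernetes.io/zone"  # assumed label key

def nodes_by_zone() -> dict:
    """Return a mapping of zone label value to the node names that carry it."""
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    zones = defaultdict(list)
    for node in v1.list_node().items:
        zone = (node.metadata.labels or {}).get(ZONE_LABEL, "<unlabeled>")
        zones[zone].append(node.metadata.name)
    return zones

if __name__ == "__main__":
    for zone, names in sorted(nodes_by_zone().items()):
        print(f"{zone}: {len(names)} nodes -> {', '.join(sorted(names))}")
```

In a healthy deployment, the output shows three zones, with all nodes of a rack (or of an availability zone) grouped under the same label.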
Expansion racks
The expansion rack extends the capacity of an existing running cluster, which can be a stand-alone system, a high-availability multi-rack setup, or a previously expanded configuration.
- For a Global Data Platform based topology, spine switches are required to connect the provisioning and storage networks between the racks. The OpenShift network is connected to the customer switch.
- For a Fusion Data Foundation based topology, all networks are connected to the customer switch.
Disaster recovery topology
IBM Fusion HCI supports disaster recovery (DR) through two primary configurations: Metro-DR and Regional-DR. The Metro-DR topology provides synchronous data replication between two IBM Fusion HCI clusters located within metropolitan distances (with latency under 40 milliseconds). The Regional-DR topology uses asynchronous data replication between two geographically separated clusters over longer distances. While Regional-DR introduces a slight delay in data synchronization, it is ideal for scenarios where geographical separation is necessary for disaster recovery.
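The 40 millisecond figure above gives a quick way to sanity-check whether two sites are plausible candidates for Metro-DR before planning a deployment. The following is a minimal sketch that approximates round-trip time with a few TCP connections; the peer hostname, port, and measurement method are illustrative assumptions and are not part of IBM Fusion HCI tooling or its qualification procedure.

```python
# Rough RTT check between two DR sites (illustrative only).
# Assumes a reachable TCP endpoint at the peer site; a TCP connect is a
# coarse proxy for network round-trip time, not a qualified measurement.
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the average TCP connect time in milliseconds over a few samples."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.mean(timings)

if __name__ == "__main__":
    peer = "peer-cluster.example.com"  # hypothetical peer-site endpoint
    limit_ms = 40                      # Metro-DR latency bound noted above
    rtt = tcp_rtt_ms(peer)
    verdict = "within" if rtt < limit_ms else "above"
    print(f"RTT to {peer}: {rtt:.1f} ms ({verdict} the {limit_ms} ms Metro-DR bound)")
```

If the measured latency is consistently above the bound, Regional-DR with asynchronous replication is the more suitable topology.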