Complete the following prerequisites before you create virtualized or Bare Metal Hosted Control Plane clusters.
Before you begin
- A single-rack or multi-rack installation of IBM Fusion HCI with a minimum of three compute nodes.
- The version of OpenShift® Container Platform must be 4.16 or higher.
- Install and configure the Fusion Data Foundation provider mode storage type on the hub cluster.
- If you plan to install the Backup & Restore service, ensure that 12 GiB of memory is available after the installation.
- Verify that you have all the necessary LVM drives before you initiate LVM
configuration on the hub cluster. For more information about LVM drives, see Dedicated drives for LVM filesystems.
Procedure
- From the Red Hat® Operator Catalog, install MetalLB 4.16 or higher.
Add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add a fault-tolerant external IP address for the service. For the procedure to install and validate MetalLB, see https://docs.openshift.com/ and check the MetalLB details for your version of OpenShift Container Platform. A CLI-based alternative is sketched after this step.
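If you prefer to install the Operator from the CLI instead of the console, the following is a minimal sketch that follows the common MetalLB Operator conventions. Verify the channel and catalog source against your cluster before you apply it:
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator-sub
  namespace: metallb-system
spec:
  channel: stable                  # assumption: confirm the channel in your catalog
  name: metallb-operator
  source: redhat-operators         # assumption: the catalog source on your hub
  sourceNamespace: openshift-marketplace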
- Create the MetalLB CR based on the following example:
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
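After you apply the CR, confirm that the MetalLB controller and speaker pods are running. The file name here is a placeholder:
$ oc apply -f metallb.yaml
$ oc get pods -n metallb-system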
- Create an IPAddressPool CR.
Note: Reserve a set of unused IPs on the same CIDR as the Bare Metal network of the IBM Fusion HCI cluster for MetalLB. MetalLB serves these IPs to any load balancer service that is installed on the cluster, not just the Hosted Control Plane.
Example:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  addresses:
  - 9.9.0.51-9.9.0.70
Note: When you set up the load balancer, other applications can also get the advertised addresses. You must have enough addresses for any workloads on this cluster and for any OpenShift Container Platform clusters that you create.
- Create an L2Advertisement CR based on the following example:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb
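To verify the MetalLB configuration end to end, you can create a throwaway service of type LoadBalancer and check that it receives an address from the pool. The deployment name and image are illustrative only:
$ oc get ipaddresspool,l2advertisement -n metallb-system
$ oc create deployment lb-test --image=registry.access.redhat.com/ubi9/httpd-24
$ oc expose deployment lb-test --type=LoadBalancer --port=8080
$ oc get svc lb-test    # the EXTERNAL-IP must come from the reserved range
Delete the test deployment and service when the check passes so that the address returns to the pool.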
- Install the Multi Cluster Engine for Kubernetes operator.
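If you install the operator from the CLI rather than from OperatorHub, the following is a minimal sketch. The channel name is an assumption; pick the one that matches the MCE version in your catalog:
apiVersion: v1
kind: Namespace
metadata:
  name: multicluster-engine
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: multicluster-engine
  namespace: multicluster-engine
spec:
  targetNamespaces:
  - multicluster-engine
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: multicluster-engine
  namespace: multicluster-engine
spec:
  channel: stable-2.6              # assumption: confirm the channel for your MCE version
  name: multicluster-engine
  source: redhat-operators
  sourceNamespace: openshift-marketplace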
- Create a Multi Cluster Engine CR instance based on the following example:
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  targetNamespace: multicluster-engine
Change the parameter values according to your environment.
- Wait until the Multi Cluster Engine instance is created and reaches the Available status. Run the following command to check whether the instance is available:
$ oc get multiclusterengine
Example output:
NAME                 STATUS      AGE
multiclusterengine   Available   170m
- Download the Hosted Control Plane CLI from the OpenShift Container Platform console:
- Go to the Command Line Tools page.
- From the Hosted Control Plane - Hosted Control Plane Command Line Interface (CLI) section, download the CLI archive for your platform and extract it.
Note: In MCE, you can use the user interface instead of the CLI. However, the CLI gives more options than the user interface.
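The exact archive name depends on your platform. As a sketch, assuming a Linux download named hcp.tar.gz:
$ tar xvzf hcp.tar.gz
$ chmod +x hcp
$ sudo mv hcp /usr/local/bin/hcp
$ hcp --help    # confirms that the binary is on your PATH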
- On the hub cluster, configure LVM by using the local drives for the etcd pods of hosted clusters. To ensure optimal etcd performance, set up LVM with three dedicated LVM drives across three compute or compute-storage nodes.
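If you use the LVM Storage operator for this step, an LVMCluster CR along the following lines selects the dedicated drives. This is a sketch, not the Fusion-specific procedure; the device class name and device path are placeholders that must match your dedicated LVM drives:
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
    - name: etcd-vg                # placeholder device class name
      deviceSelector:
        paths:
        - /dev/sdb                 # placeholder: the dedicated drive on each node
      thinPoolConfig:
        name: thin-pool-1
        sizePercent: 90
        overprovisionRatio: 10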
- Increase the maximum number of pods per node from the default value of 250 to 500. For the procedure to update the value, see Managing the maximum number of pods per node.
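On OpenShift Container Platform, this value is typically raised with a KubeletConfig CR that targets a labeled machine config pool. A minimal sketch, assuming you first label the worker machine config pool:
$ oc label machineconfigpool worker custom-kubelet=set-max-pods
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods   # the machine config pool must carry this label
  kubeletConfig:
    maxPods: 500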
What to do next
- Create the hosted control plane clusters. For the procedure, see Creating a virtualized Hosted Control Plane cluster.
- Deploy virtualized or bare metal clusters. For the procedure to install the hosted cluster, see Prerequisites for Hosted Control Plane clusters or Installing Backup & Restore on the hosted cluster.