Prerequisites for Hosted Control Plane clusters

Complete these prerequisites before you create virtualized or bare metal Hosted Control Plane clusters.

Before you begin

  • You must have a single-rack or multi-rack installation of IBM Fusion HCI with a minimum of three compute nodes.
  • The version of OpenShift® Container Platform must be 4.16 or higher.
  • Install and configure Fusion Data Foundation provider mode storage type on the hub cluster.
  • If you plan to install the Backup & Restore service, ensure that 12 GiB of memory is available after the installation.
  • Verify that you have all the necessary LVM drives before you initiate LVM configuration on the hub cluster. For more information about LVM drives, see Dedicated drives for LVM filesystems.

Procedure

  1. From the Red Hat® Operator Catalog, install the MetalLB Operator version 4.16 or higher.
    Add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add a fault-tolerant external IP address for the service. For the procedure to install and validate MetalLB, see https://docs.openshift.com/. Go to your specific version of OpenShift Container Platform and check MetalLB details.
    Note: When you set up the load balancer, other applications can also get the advertised addresses. You must have enough addresses for any workloads on this cluster and created OpenShift Container Platform clusters.
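    If you prefer to install the Operator from the CLI instead of the console, the installation typically reduces to a Namespace, an OperatorGroup, and a Subscription similar to the following sketch. The channel and catalog source names are assumptions; verify them against the Operator Catalog for your OpenShift Container Platform version before you apply the resources.

```yaml
# Illustrative sketch only: CLI-based install of the MetalLB Operator.
# The channel and source values are assumptions; confirm them in your
# cluster's Operator Catalog.
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator
  namespace: metallb-system
spec:
  channel: stable          # assumption; check available channels
  name: metallb-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

    Apply the resources with `oc apply -f <file>`, then confirm that the Operator pods are running in the metallb-system namespace before you continue.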
  2. Create MetalLB CR based on the following example:
    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
  3. Create an IPAddressPool CR.
    Note: Reserve a set of unused IP addresses on the same CIDR as the bare metal network of the IBM Fusion HCI cluster for MetalLB. MetalLB serves these IPs to any load balancer service that is installed on the cluster, not just the Hosted Control Plane.

    Example:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      addresses:
      - 9.9.0.51-9.9.0.70
  4. Create an L2Advertisement CR based on the following example:
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: l2advertisement
      namespace: metallb-system
    spec:
      ipAddressPools:
      - metallb
  5. Install the Multi Cluster Engine for Kubernetes operator.
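    As with MetalLB, a CLI-based install of the Multi Cluster Engine operator typically reduces to a Namespace, an OperatorGroup, and a Subscription similar to the following sketch. The channel name is an assumption; verify it against the Operator Catalog for your cluster.

```yaml
# Illustrative sketch only: CLI-based install of the Multi Cluster
# Engine operator. The channel value is an assumption; confirm it in
# your cluster's Operator Catalog.
apiVersion: v1
kind: Namespace
metadata:
  name: multicluster-engine
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: multicluster-engine
  namespace: multicluster-engine
spec:
  targetNamespaces:
  - multicluster-engine
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: multicluster-engine
  namespace: multicluster-engine
spec:
  channel: stable-2.7      # assumption; check available channels
  name: multicluster-engine
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```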
  6. Create a Multi Cluster Engine CR instance based on the following example:
    apiVersion: multicluster.openshift.io/v1
    kind: MultiClusterEngine
    metadata:
      name: multiclusterengine
    spec:
      targetNamespace: multicluster-engine
    Change the parameter values according to your environment.
  7. Wait until the Multi Cluster Engine instance is created, available, and in Running state.
    Run the following command to check whether the Multi Cluster Engine instance is available:
    $ oc get multiclusterengine
    Example output:
    $ oc get multiclusterengine
    NAME                 STATUS      AGE
    multiclusterengine   Available   170m
  8. Download hosted control plane CLI from OpenShift Container Platform console:
    1. Go to the Command Line Tools page.
    2. From the Hosted Control Plane - Hosted Control Plane Command Line Interface (CLI) section, download the CLI tar archive for your platform and extract it.
      Note: In MCE, you can use the user interface instead of the CLI. However, the CLI offers more options than the user interface.
  9. On the hub cluster, configure LVM by using the local drives for the etcd pods of hosted clusters. To ensure optimal etcd performance, LVM must be set up with three dedicated LVM drives across three compute or compute-storage nodes.
    Note: Use the lvms-hcp-etcd StorageClass.

    For more information about dedicated drives for LVM filesystems, see Dedicated drives for LVM filesystems.
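    As a minimal sketch of this step, an LVMCluster CR that yields the lvms-hcp-etcd StorageClass could look like the following. The namespace, device paths, and thin-pool settings are assumptions for illustration only; the device class name hcp-etcd follows the lvms-<deviceClassName> StorageClass naming convention.

```yaml
# Illustrative sketch only: LVMCluster CR for the hosted cluster etcd
# drives. Replace the device paths with your dedicated LVM drives and
# verify the namespace and thin-pool settings for your environment.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage   # assumption; use your LVM Storage namespace
spec:
  storage:
    deviceClasses:
    # Device class "hcp-etcd" produces the StorageClass "lvms-hcp-etcd".
    - name: hcp-etcd
      deviceSelector:
        paths:
        # Hypothetical dedicated drive; list your actual devices here.
        - /dev/sdb
      thinPoolConfig:
        name: thin-pool-1
        overprovisionRatio: 10
        sizePercent: 90
```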

  10. Increase the maximum number of pods per node from the default value of 250 to 500. For the procedure to update the value, see Managing the maximum number of pods per node.
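    The pod limit is typically raised with a KubeletConfig CR similar to the following sketch. The pool selector shown targets the default worker MachineConfigPool and is an assumption; adjust it if your nodes belong to a custom pool.

```yaml
# Illustrative sketch only: raise the per-node pod limit to 500.
# The machineConfigPoolSelector label is an assumption; adapt it to
# the MachineConfigPool that holds your compute nodes.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      # Built-in label on the default worker MachineConfigPool.
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    maxPods: 500
```

    Applying this CR triggers a rolling update of the affected nodes, so plan for node reboots.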

What to do next

  1. Create the hosted control plane clusters. For the procedure to create, see Creating a virtualized Hosted Control Plane cluster.
  2. Deploy virtualized or bare metal clusters. For the procedure to install the hosted cluster, see Prerequisites for Hosted Control Plane clusters or Installing Backup & Restore on the hosted cluster.