Prerequisites for Hosted Control Plane clusters

Complete the following prerequisites before you create Hosted Control Plane clusters.

Before you begin

  • A single-rack installation of IBM Storage Fusion HCI System with a minimum of three compute nodes is required.
  • The version of OpenShift® Container Platform must be 4.16. If you are on a lower level of OpenShift Container Platform, upgrade to 4.16. For the procedure, see OpenShift Container Platform upgrade.
  • Install and configure Fusion Data Foundation storage type on the hub cluster.
  • Important: You must have a subscription to Red Hat Advanced Cluster Manager (ACM). It can be purchased as a stand-alone component from IBM or Red Hat. You can also use the ACM that is included in Red Hat OpenShift Platform Plus.
  • If you plan to install the Backup & Restore service, ensure that 4 GiB of memory is available after the installation.

Procedure

  1. From the Red Hat Operator Catalog, deploy Red Hat OpenShift Virtualization 4.14.0 or higher.
    OpenShift Virtualization is a component of OpenShift Container Platform that runs and manages virtual machine workloads alongside container workloads. For the procedure to install OpenShift Virtualization, see OpenShift virtualization on IBM Storage Fusion HCI System. To validate the installation, see Red Hat Documentation.
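    If you prefer to install the operator from the CLI instead of the console, the following manifests are a minimal sketch; the OperatorGroup name, Subscription name, and stable channel are assumptions, so match them to what your Operator Catalog offers.

     apiVersion: v1
     kind: Namespace
     metadata:
       name: openshift-cnv
     ---
     apiVersion: operators.coreos.com/v1
     kind: OperatorGroup
     metadata:
       name: kubevirt-hyperconverged-group
       namespace: openshift-cnv
     spec:
       targetNamespaces:
       - openshift-cnv
     ---
     apiVersion: operators.coreos.com/v1alpha1
     kind: Subscription
     metadata:
       name: hco-operatorhub
       namespace: openshift-cnv
     spec:
       channel: stable
       name: kubevirt-hyperconverged
       source: redhat-operators
       sourceNamespace: openshift-marketplace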
  2. Create a HyperConverged CR based on the following example:
    
    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
       annotations:
         deployOVS: 'false'
       name: kubevirt-hyperconverged
       namespace: openshift-cnv
    spec: {}
  3. Wait five to ten minutes for the CR status to become available and for the OpenShift Container Platform console to refresh and show a new Virtualization menu.
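    Instead of watching the console, you can wait on the CR from the CLI; this is a sketch that assumes the HyperConverged CR reports an Available condition:

     oc wait hyperconverged kubevirt-hyperconverged -n openshift-cnv --for=condition=Available --timeout=10m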
  4. From the Red Hat Operator Catalog, install MetalLB 4.14 or higher.
    Add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add a fault-tolerant external IP address for the service. For the procedure to install and validate MetalLB, see https://docs.openshift.com/container-platform/4.16/networking/metallb/about-metallb.html.
    Note: When you set up the load balancer, other applications can also get the advertised addresses. You must have enough addresses for any workloads on this cluster and on the OpenShift Container Platform clusters that you create.
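    To confirm from the CLI that the operator installed successfully, check its ClusterServiceVersion; this is a sketch that assumes the operator was installed into the metallb-system namespace:

     oc get csv -n metallb-system

    The MetalLB operator entry must report the Succeeded phase before you continue.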
  5. Create a MetalLB CR based on the following example:
    
    apiVersion: metallb.io/v1beta1
    kind: MetalLB
    metadata:
      name: metallb
      namespace: metallb-system
  6. Create an IPAddressPool CR.
    Note: Reserve a set of unused IPs on the same CIDR as the Bare Metal network of the IBM Storage Fusion HCI System cluster for MetalLB. MetalLB serves these IPs to any load-balancer service that is installed on the cluster, not just the Hosted Control Plane.

    Example:

    
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: metallb
      namespace: metallb-system
    spec:
      addresses:
      - 9.9.0.51-9.9.0.70
  7. Create an L2Advertisement CR based on the following example:
    
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: l2advertisement
      namespace: metallb-system
    spec:
      ipAddressPools:
       - metallb
  8. Run the following command to patch ingress:
    oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{"op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'
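    To confirm that the patch was applied, you can read the policy back; a minimal check:

     oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.spec.routeAdmission.wildcardPolicy}'

    The command prints WildcardsAllowed when the patch is in place.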
  9. Install Red Hat Advanced Cluster Management 2.11 from the Red Hat Operator Catalog.

    For more information about Red Hat Advanced Cluster Management, see Red Hat Advanced Cluster Management for Kubernetes documentation. To validate the installation, see Red Hat Documentation.
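    If you script the installation instead of using the console, the following Subscription is a sketch; the open-cluster-management namespace, Subscription name, and release-2.11 channel are assumptions that must match your catalog, and an OperatorGroup that targets the same namespace is also required.

     apiVersion: operators.coreos.com/v1alpha1
     kind: Subscription
     metadata:
       name: acm-operator-subscription
       namespace: open-cluster-management
     spec:
       channel: release-2.11
       name: advanced-cluster-management
       source: redhat-operators
       sourceNamespace: openshift-marketplace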

  10. Check whether Multi Cluster Engine 2.6 or higher is installed. If it is not already installed, install it from the Red Hat Operator Catalog.
    For the procedure to install, see Red Hat Documentation.
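    A quick CLI check (a sketch; multicluster-engine is the default install namespace):

     oc get csv -n multicluster-engine

    If a multicluster engine ClusterServiceVersion at version 2.6 or higher is listed in the Succeeded phase, the engine is already installed.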
  11. Create a Multi Cluster Hub CR instance based on the following example:
    
    apiVersion: operator.open-cluster-management.io/v1
    kind: MultiClusterHub
    metadata:
      name: multiclusterhub
      namespace: open-cluster-management
    spec: {}
      
    Change the parameter values according to your environment.
  12. Wait until the Multi Cluster Hub instance is created, available, and in Running state.
    The hub must be running and a Multi Cluster Engine instance must be available.
    Run the following command to check whether the Multi Cluster Engine instance is available.
    $ oc get multiclusterengine
    Example output:
    $ oc get multiclusterengine
     NAME                 STATUS      AGE
     multiclusterengine   Available   170m
  13. Download the hosted control plane CLI from the OpenShift Container Platform console:
    1. Go to the Command Line Tools page.
    2. From the Hosted Control Plane - Hosted Control Plane Command Line Interface (CLI) section, download the CLI archive for your platform and extract it.
      Note: In Advanced Cluster Management version 2.10 or higher, you can use the user interface instead of the CLI. However, the CLI offers more options than the user interface.
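      For example, on Linux you can extract the archive and place the binary on your PATH; the archive and binary names below are assumptions, so use the names from your download.

       # Extract the downloaded archive (the file name varies by platform).
       tar -xvzf hcp.tar.gz
       # Make the binary executable and move it onto the PATH.
       chmod +x hcp
       sudo mv hcp /usr/local/bin/
       # Confirm that the CLI runs.
       hcp --help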
  14. On the hub cluster, use LVM with local drives for the hosted cluster etcd pods.
    You must configure LVM so that the hosted clusters get good etcd performance; this requires three dedicated drives on three compute nodes.
    Note: Use the lvms-hcp-etcd storage class. A verification sketch follows the field descriptions at the end of this step.
    The following configmap is auto-created and used by IBM Storage Fusion to create an LVM cluster.
    Configmap example:
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: lvm-config
      namespace: ibm-spectrum-fusion-ns  
    data:
      skipLVM: 'false' 
      nodeType: 'compute'
      drives: |
        - '/dev/disk/by-path/pci-0000:61:00.0-nvme-1'
      computeNodes: |
        - compute-1-ru5.rackm01.rtp.raleigh.ibm.com
        - compute-1-ru6.rackm01.rtp.raleigh.ibm.com
        - compute-1-ru7.rackm01.rtp.raleigh.ibm.com

    In the example configmap, the LVMCluster uses the first NVMe drive /dev/disk/by-path/pci-0000:61:00.0-nvme-1 of the first three compute nodes to provide PVCs to the etcd pods of the hosted clusters.

    Optional: You can define a customized LVM configmap before you enable the Fusion Data Foundation service.

    • The skipLVM field sets whether to create the LVM cluster. The valid values are true or false. The default value is false.
    • The nodeType field sets the node type whose drives are used for the LVM cluster. Only the compute node type is supported.
    • The drives field sets the drives to be used for the LVM cluster. If it is not set, then /dev/disk/by-path/pci-0000:61:00.0-nvme-1 is used by default. Use one or two drives per node for the LVM cluster. Use the following format to specify two drives in a node:
        drives: |
        - '/dev/disk/by-path/pci-0000:61:00.0-nvme-1'
        - '/dev/disk/by-path/pci-0000:63:00.0-nvme-1'
    • If nodeType is set to compute, then the computeNodes field lists the nodes whose drives are used for LVM. If the computeNodes list has fewer than three compute nodes, an error is returned. If the computeNodes field is not set, then compute-1-ru5, compute-1-ru6, and compute-1-ru7 are used as the LVM nodes by default.
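    After the LVM cluster is created, you can verify from the CLI that the resources exist; a minimal check based on the lvms-hcp-etcd storage class that is noted earlier in this step:

     oc get lvmcluster -A
     oc get storageclass lvms-hcp-etcd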
  15. Increase the maximum number of pods per node from the default value of 250 to 500. For the procedure to update the value, see Managing the maximum number of pods per node.
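    On OpenShift Container Platform, the limit is typically raised with a KubeletConfig that targets a machine config pool; the following is a sketch, and the custom-kubelet label is an assumption that must match a label you first apply to your worker machine config pool (for example, oc label machineconfigpool worker custom-kubelet=set-max-pods). Expect the nodes to restart as the change rolls out.

     apiVersion: machineconfiguration.openshift.io/v1
     kind: KubeletConfig
     metadata:
       name: set-max-pods
     spec:
       machineConfigPoolSelector:
         matchLabels:
           custom-kubelet: set-max-pods
       kubeletConfig:
         maxPods: 500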

What to do next

  1. Create the hosted control plane clusters. For the procedure, see Working with Hosted Control Plane on Fusion Data Foundation.
  2. Install an IBM Storage Fusion hosted cluster, and then install Fusion Data Foundation on it.

    For the procedure to install the hosted cluster, see Installing Fusion Data Foundation on the hosted cluster.