Complete the following prerequisites before you create Hosted Control Plane clusters.
Procedure
- From the Red Hat Operator Catalog, deploy Red Hat OpenShift Virtualization 4.14.0 or higher.
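To confirm that the Operator deployed successfully, you can list its ClusterServiceVersion; openshift-cnv is assumed as the installation namespace, which also matches the CR example in the next step:
oc get csv -n openshift-cnv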
- Create a HyperConverged CR based on the following example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  annotations:
    deployOVS: 'false'
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
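To create the CR, you can save the example to a file and apply it; the file name hyperconverged.yaml is only illustrative:
oc apply -f hyperconverged.yaml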
- Wait for five to ten minutes for the CR status to become available and for the OpenShift Container Platform console to refresh and show a new Virtualization menu.
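If you prefer to check from the command line rather than the console, the following query reads the Available condition of the CR; the jsonpath expression is a suggestion and not part of the documented procedure:
oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'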
- From the Red Hat Operator Catalog, install MetalLB 4.14 or higher.
Add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add a fault-tolerant external IP address for the service. For the procedure to install and validate MetalLB, see https://docs.openshift.com/container-platform/4.16/networking/metallb/about-metallb.html.
Note: When you set up the load balancer, other applications can also get the advertised addresses. You must have enough addresses for the workloads on this cluster and for any OpenShift Container Platform clusters that you create.
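To confirm that the MetalLB Operator installed successfully, you can list its ClusterServiceVersion; the metallb-system namespace is an assumption that matches the CR examples in the following steps:
oc get csv -n metallb-system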
- Create a MetalLB CR based on the following example:
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
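You can apply the CR from a file and then verify that the resource exists; the file name metallb.yaml is only illustrative:
oc apply -f metallb.yaml
oc get metallb -n metallb-system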
- Create an IPAddressPool CR.
Note: Reserve a set of unused IPs on the same CIDR as the Bare Metal network of the IBM Storage Fusion HCI System cluster for MetalLB. MetalLB serves these IPs to any load-balancer service that is installed on the cluster, not just the Hosted Control Plane.
Example:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb
  namespace: metallb-system
spec:
  addresses:
  - 9.9.0.51-9.9.0.70
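You can apply and check the address pool in the same way; the file name ipaddresspool.yaml is only illustrative:
oc apply -f ipaddresspool.yaml
oc get ipaddresspool -n metallb-system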
- Create an L2Advertisement CR based on the following example:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - metallb
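You can apply and check the advertisement as follows; the file name l2advertisement.yaml is only illustrative:
oc apply -f l2advertisement.yaml
oc get l2advertisement -n metallb-system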
- Run the following command to patch the default ingress controller so that it admits wildcard routes:
oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'
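To confirm that the patch took effect, you can read the field back; the jsonpath expression is a suggestion, and the expected value is WildcardsAllowed:
oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.spec.routeAdmission.wildcardPolicy}'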
- Install Red Hat Advanced Cluster Management 2.11 from the Red Hat Operator Catalog.
- Check whether Multi Cluster Engine 2.6 or higher is installed. If it is not already installed, install it from the Red Hat Operator Catalog.
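To confirm both Operator installations, you can list their ClusterServiceVersions; open-cluster-management and multicluster-engine are the default installation namespaces and are assumptions here:
oc get csv -n open-cluster-management
oc get csv -n multicluster-engine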
- Create a Multi Cluster Hub CR instance based on the following example:
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}
Change the parameter values according to your environment.
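You can create the instance from a file; the file name multiclusterhub.yaml is only illustrative:
oc apply -f multiclusterhub.yaml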
- Wait until the Multi Cluster Hub instance is created, available, and in the Running state.
The hub must be running and a Multi Cluster Engine instance must be available.
Run the following command to check whether the Multi Cluster Engine instance is available:
$ oc get multiclusterengine
Example output:
NAME                 STATUS      AGE
multiclusterengine   Available   170m
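To check the hub itself, you can also query the MultiClusterHub resource, which reports a Running status when it is ready; the namespace matches the example CR above:
oc get multiclusterhub -n open-cluster-management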
- Download the hosted control plane CLI from the OpenShift Container Platform console:
- Go to the Command Line Tools page.
- From the Hosted Control Plane - Hosted Control Plane Command Line Interface (CLI) section, download the CLI tar archive for your platform and extract it.
Note: In Advanced Cluster Management version 2.10 or higher, you can use the user interface instead of the CLI. However, the CLI provides more options than the user interface.
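A typical extraction and check might look like the following; the archive name hcp.tar.gz and the hcp binary name are assumptions based on common packaging, so adjust them to match what you downloaded:
tar xvzf hcp.tar.gz
sudo mv hcp /usr/local/bin/
hcp version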
- On the hub cluster, use LVM with local drives for the hosted cluster etcd pods.
You must configure LVM so that the hosted clusters get good etcd performance, which requires three dedicated drives on three compute nodes.
Note: Use the lvms-hcp-etcd storage class.
The following configmap is auto-created and used by IBM Storage Fusion to create an LVM cluster.
Configmap example:
kind: ConfigMap
apiVersion: v1
metadata:
  name: lvm-config
  namespace: ibm-spectrum-fusion-ns
data:
  skipLVM: 'false'
  nodeType: 'compute'
  drives: |
    - '/dev/disk/by-path/pci-0000:61:00.0-nvme-1'
  computeNodes: |
    - compute-1-ru5.rackm01.rtp.raleigh.ibm.com
    - compute-1-ru6.rackm01.rtp.raleigh.ibm.com
    - compute-1-ru7.rackm01.rtp.raleigh.ibm.com
In the example configmap, the LVMCluster uses the first NVMe drive /dev/disk/by-path/pci-0000:61:00.0-nvme-1 of the first three compute nodes to provide PVCs to the etcd pods of the hosted clusters.
Optional: You can define a customized LVM configmap before you enable the Fusion Data Foundation service.
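To review the auto-created configmap before the LVM cluster is created, you can inspect it directly:
oc get configmap lvm-config -n ibm-spectrum-fusion-ns -o yaml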
- Increase the maximum number of pods per node from the default value of 250 to 500. For the procedure to update the value, see Managing the maximum number of pods per node.
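The linked procedure is authoritative. As a rough illustration only, on OpenShift Container Platform this kind of change is typically made with a KubeletConfig resource similar to the following sketch, where the name set-max-pods and the worker pool selector are assumptions:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods    # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""    # assumes the default worker pool
  kubeletConfig:
    maxPods: 500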
What to do next
- Create the hosted control plane clusters. For the procedure, see Working with Hosted Control Plane on Fusion Data Foundation.
- Install an IBM Storage Fusion hosted cluster and install Fusion Data Foundation on it. For the procedure to install the hosted cluster, see Installing Fusion Data Foundation on the hosted cluster.