Deploying Bare Metal clusters with Fusion Data Foundation

You can create a Hosted Control Plane cluster by using Bare Metal hosts either through the OpenShift® Console or the Hosted Control Plane CLI.

Procedure

  1. Go to the ACM user interface.
  2. Go to Infrastructure > Clusters > Create cluster.
  3. Select Host Inventory and then Hosted.
  4. Enter the credentials.
    The credentials must include entries for the following registries:
    • cloud.openshift.com
    • cp.icr.io
    • quay.io
    • registry.connect.redhat.com
    • registry.redhat.io
  5. Enter Cluster name.
    Important: The cluster name must be the same name that was used in the network planning. Do not use a name that does not have a DNS alias entry. The cluster name becomes part of the base domain, for example, mydomain.com. Controller and infrastructure availability are environment dependent.

    Although the recommendation is to have three nodes for a resilient cluster, you can have a single-node Hosted Control Plane cluster.

  6. Enter Namespace. This is the namespace of the infrastructure environment that contains the hosts.
  7. Enter Labels. These are the labels found on hosts in this particular infrastructure environment. Use the labels to ensure that specific hosts are placed in a particular cluster; otherwise, a random host from that infrastructure environment is chosen.
  8. Enter the Networking type.
    The value is environment dependent, but choose the LoadBalancer type for resilience. The SSH public key must be the same key that you used during host import.
  9. Open the YAML view and add the following to the spec section so that the Hosted Control Plane cluster uses the Local Volume Storage that was created for it:
    
    spec:
      etcd:
        managed:
          storage:
            persistentVolume:
              size: 8Gi
              storageClassName: lvms-hcp-etcd
            type: PersistentVolume
        managementType: Managed

    With this setting, etcd in the Hosted Control Plane cluster stores its data on the Local Volume Storage through the lvms-hcp-etcd storage class.

  10. Click Finish.
    The cluster creation starts and can take up to 30 minutes. After the nodepool hosts are in the Ready state, add the load balancer so that the console becomes available.
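    From the hub cluster, you can optionally watch the progress from the command line. This is a sketch; it assumes the default clusters namespace for Hosted Control Plane resources and uses myhcpcluster as a placeholder cluster name:

```shell
# Watch the hosted cluster and its node pools from the hub cluster.
# "clusters" is the default namespace for Hosted Control Plane resources;
# "myhcpcluster" is a placeholder for your cluster name.
oc get hostedcluster myhcpcluster -n clusters
oc get nodepool -n clusters

# The node pool is ready when CURRENT NODES matches DESIRED NODES.
```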
  11. If you chose the LoadBalancer networking type, add a load balancer to the newly created Hosted Control Plane cluster.
    The load balancer provides external access to the Hosted Control Plane cluster and, unlike the NodePort approach, keeps the nodes resilient. Add the load balancer to the newly created Hosted Control Plane cluster, not to the IBM Fusion HCI System hub cluster.
    1. To access the new Hosted Control Plane cluster, download the kubeconfig.
      Steps to download the kubeconfig:
      1. Log in to the IBM Fusion HCI System hub OpenShift console.
      2. Go to the ACM user interface and select Infrastructure > Clusters.
      3. In the clusters list, select the newly created Hosted Control Plane cluster.
      4. In the Cluster nodepools section, click Download kubeconfig. It downloads the kubeconfig for the cluster.

        After the kubeconfig is available, use oc commands to apply the YAML files on the new Hosted Control Plane cluster.
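        With the kubeconfig saved locally (the file name kubeconfig.yaml below is illustrative), you can direct oc at the new cluster either per command or for the whole session:

```shell
# Run a single command against the hosted cluster:
oc --kubeconfig=kubeconfig.yaml get nodes

# Or export the kubeconfig for the rest of the session:
export KUBECONFIG=$PWD/kubeconfig.yaml
oc get clusterversion
```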

    2. Install the MetalLB Operator.
      1. Create a new YAML file named metallb-operator-config.yaml. Example YAML:

        apiVersion: v1
        kind: Namespace
        metadata:
          name: metallb
          labels:
            openshift.io/cluster-monitoring: "true"
          annotations:
            workload.openshift.io/allowed: management
        ---
        apiVersion: operators.coreos.com/v1
        kind: OperatorGroup
        metadata:
          name: metallb-operator-operatorgroup
          namespace: metallb
        ---
        apiVersion: operators.coreos.com/v1alpha1
        kind: Subscription
        metadata:
          name: metallb-operator
          namespace: metallb
        spec:
          channel: "stable"
          name: metallb-operator
          source: redhat-operators
          sourceNamespace: openshift-marketplace
      2. Apply the YAML:
        oc --kubeconfig=kubeconfig.yaml apply -f metallb-operator-config.yaml
      3. Wait for all the pods to be up and running:
        oc --kubeconfig=kubeconfig.yaml get pods -n metallb
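      Instead of polling, you can let oc block until the Operator pods are ready. This is a sketch; the five-minute timeout is an arbitrary choice:

```shell
# Block until every pod in the metallb namespace reports Ready.
oc --kubeconfig=kubeconfig.yaml wait pod --all -n metallb \
  --for=condition=Ready --timeout=300s
```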
    3. Create an instance of MetalLB.
      1. Create a new file metallb-instance-config.yaml. Example:

        apiVersion: metallb.io/v1beta1
        kind: MetalLB
        metadata:
          name: metallb
          namespace: metallb
      2. Apply the file:
        oc --kubeconfig=kubeconfig.yaml apply -f metallb-instance-config.yaml
      3. Wait for the pods to be up and running:
        oc --kubeconfig=kubeconfig.yaml get pods -n metallb
    4. Create an IPAddressPool and L2Advertisement.
      1. Create a file named ipaddresspool-l2advertisement-config.yaml that defines the IPAddressPool and the L2Advertisement.
        Example:

        apiVersion: metallb.io/v1beta1
        kind: IPAddressPool
        metadata:
          name: hcpip
          namespace: metallb
        spec:
          autoAssign: false
          addresses:
            - 192.0.2.10-192.0.2.10
        ---
        apiVersion: metallb.io/v1beta1
        kind: L2Advertisement
        metadata:
          name: hcpip
          namespace: metallb
        spec:
          ipAddressPools:
            - hcpip

        The name of the IPAddressPool must match the one in the L2Advertisement.

        The addresses must be the IP address that the *.apps.<cluster name> entry resolves to in the DNS network table that was set up by the network administrator.

      2. Apply the ipaddresspool-l2advertisement-config.yaml.
        oc --kubeconfig=kubeconfig.yaml apply -f ipaddresspool-l2advertisement-config.yaml
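      To confirm that both resources exist before you continue, you can list them:

```shell
# Both objects should appear in the metallb namespace with the name hcpip.
oc --kubeconfig=kubeconfig.yaml get ipaddresspool,l2advertisement -n metallb
```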
    5. Create a service for the loadbalancer.
      1. Create metallb-loadbalancer-service.yaml.
        Note: The address-pool annotation must match the name of the IPAddressPool from the previous step.
        Example:
        
        kind: Service
        apiVersion: v1
        metadata:
          annotations:
            metallb.universe.tf/address-pool: hcpip
          name: metallb-ingress
          namespace: openshift-ingress
        spec:
          ports:
            - name: http
              protocol: TCP
              port: 80
              targetPort: 80
            - name: https
              protocol: TCP
              port: 443
              targetPort: 443
          selector:
            ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
          type: LoadBalancer
      2. Apply the load balancer service YAML.
        oc --kubeconfig=kubeconfig.yaml apply -f metallb-loadbalancer-service.yaml
      3. Monitor the cluster Operators for issues:
        oc --kubeconfig=kubeconfig.yaml get co
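      You can also confirm that MetalLB assigned an address to the ingress service. The EXTERNAL-IP column should show the address from the IPAddressPool:

```shell
# The service should be of type LoadBalancer with the pool address
# shown in the EXTERNAL-IP column once MetalLB assigns it.
oc --kubeconfig=kubeconfig.yaml get svc metallb-ingress -n openshift-ingress
```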