Install Red Hat OpenShift Container Platform

Proceed with the installation of the Red Hat OpenShift Container Platform (OCP) cluster.

Prepare the install-config.yaml

  1. Create a directory called installation_directory on your bastion and, inside it, create a file named install-config.yaml.
    apiVersion: v1
    baseDomain: sa.boe # Base domain of the cluster.
    compute: 
    - hyperthreading: Enabled 
      name: worker
      replicas: 0 
      architecture: s390x
    controlPlane: 
      hyperthreading: Enabled 
      name: master
      replicas: 3 
      architecture: s390x
    metadata:
      name: ocp0 # Cluster name specified in your DNS records.
    networking:
      clusterNetwork:
      - cidr: 10.132.0.0/14  # Block of IP addresses from which pod IPs are allocated.
        hostPrefix: 23 
      networkType: OVNKubernetes 
      serviceNetwork: 
      - 100.0.0.0/16
    platform:
      none: {} 
    fips: false 
    pullSecret: '{"auths": ...}' # Pull secret from the Red Hat Cluster Manager.
    sshKey: 'ssh-xxxx xxxxxx' # SSH public key generated during the bastion setup.

    For more information about the parameters, refer to Sample install-config.yaml file for IBM Z.

  2. Create a directory called installation_files under /var/www/html and generate the manifest files.
    [bastion installation_directory]# ./openshift-install create manifests --dir /var/www/html/installation_files
    INFO Consuming Install Config from target directory
    INFO Manifests created in: /var/www/html/installation_files/manifests and /var/www/html/installation_files/openshift
  3. For a multi-LPAR installation, change the mastersSchedulable value to false; for a three-node setup, leave it as true. The file is located under /var/www/html/installation_files/manifests.
    [bastion manifests]# vi cluster-scheduler-02-config.yml 
    apiVersion: config.openshift.io/v1
    kind: Scheduler
    metadata:
      creationTimestamp: null
      name: cluster
    spec:
      mastersSchedulable: false  # Change from true to false.
      policy:
        name: ""
      profileCustomizations:
        dynamicResourceAllocation: ""
    status: {}
  4. Create the ignition files.
    [bastion ocp_package]# ./openshift-install create ignition-configs --dir /var/www/html/installation_files
    INFO Consuming Master Machines from target directory
    INFO Consuming Common Manifests from target directory
    INFO Consuming Worker Machines from target directory
    INFO Consuming Openshift Manifests from target directory
    INFO Consuming OpenShift Install (Manifests) from target directory
    INFO Ignition-Configs created in: /var/www/html/installation_files and /var/www/html/installation_files/auth

In the installation_files directory, you will find bootstrap.ign, master.ign, worker.ign, metadata.json, and the auth directory containing kubeadmin-password and kubeconfig.
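
Before continuing, it is worth confirming that the generated ignition files are valid JSON; a corrupted or truncated file only shows up later as a failed boot. A minimal check, assuming jq is installed on the bastion:
    [bastion installation_files]# for f in bootstrap.ign master.ign worker.ign; do jq empty "$f" && echo "$f OK"; done
jq empty parses each file and reports an error if the JSON is malformed.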

Note: Ensure that the installation_files directory has the same permissions as the html directory.
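
How you align these permissions depends on your web server setup. A minimal sketch, assuming a default Apache layout on the bastion (the ownership, mode, and SELinux command shown are illustrative, not requirements of the installer):
    [bastion]# chown -R apache:apache /var/www/html/installation_files
    [bastion]# chmod -R 755 /var/www/html/installation_files
    [bastion]# restorecon -Rv /var/www/html/installation_files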

Obtain the CoreOS (RHCOS) kernel, initramfs, rootfs and initrd.addrsize files

Except for initrd.addrsize, you can download the remaining files from the RHCOS images mirror.
    kernel: rhcos-<version>-live-kernel-<architecture>  
    initramfs: rhcos-<version>-live-initramfs.<architecture>.img  
    rootfs: rhcos-<version>-live-rootfs.<architecture>.img  

The easiest way to obtain initrd.addrsize is by mounting the ISO image rhcos-<version>-s390x-live.s390x.iso, where it can be found under the images directory. After you have these files, move them to your HTTP server (bastion), as they are used in the upcoming installation.

Note: During the ISO mount, you will also find kernel, initramfs, and rootfs under the images directory, which can also be used for the installation.
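
A minimal sketch of mounting the live ISO, copying initrd.addrsize to the HTTP server, and unmounting again (the ISO file name and target path are illustrative; substitute your version and layout):
    [bastion]# mount -o loop rhcos-<version>-s390x-live.s390x.iso /mnt
    [bastion]# cp /mnt/images/initrd.addrsize /var/www/html/
    [bastion]# umount /mnt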

Create parameter file

Create the parameter files that are used to set up the nodes (compute, control, and bootstrap). These files include information about the ignition files created in the previous steps, as well as the node's IP address, storage details, nameserver, and domain. For a multi-node LPAR setup, you need one bootstrap, three control, and two compute parameter files. Use the sample below to create them.

Note: Incorrectly created parameter files can disrupt the installation. To prevent errors, it is recommended to keep all the parameters on a single line.
Sample parameter file:
rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://bastion:50000/rootfs.img coreos.inst.ignition_url=http://bastion:50000/installation_files/bootstrap.ign ip=LPAR-IP::Gateway:subnetmask:hostname_for_node:network_interface:none nameserver=bastion_ip domain=domain_name cio_ignore=all,!condev zfcp.allow_lun_scan=0 rd.znet=networkcard_details rd.dasd=DASD_device
Sample bootstrap file:
rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://bastion:50000/rootfs.img coreos.inst.ignition_url=http://bastion:50000/ignition/bootstrap.ign ip=100.0.0.11::xx.xx.xx.xx:xx.xx.xx.xx:bootstrap:xxxxx:none nameserver=xx.xx.xx.xx domain=xxx cio_ignore=all,!condev zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bd00,0.0.bd01,0.0.bd02,layer2=1,portno=0 rd.dasd=0.0.1584
Sample control file:
cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.inst.ignition_url=http://bastion:50000/ignition/master.ign coreos.live.rootfs_url=http://bastion:50000/rootfs.img ip=100.0.0.20::xx.xx.xx.xx:xx.xx.xx.xx:control0:xxxxx:none nameserver=xx.xx.xx.xx domain=xxx rd.znet=qeth,0.0.bd00,0.0.bd01,0.0.bd02,layer2=1,portno=0 rd.dasd=0.0.1588 zfcp.allow_lun_scan=0
Sample compute file:
cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.inst.ignition_url=http://bastion:50000/ignition/worker.ign coreos.live.rootfs_url=http://bastion:50000/rootfs.img ip=100.0.0.50::xx.xx.xx.xx:xx.xx.xx.xx:compute0:xxxxx:none nameserver=xx.xx.xx.xx domain=xxx rd.znet=qeth,0.0.bd00,0.0.bd01,0.0.bd02,layer2=1,portno=0 rd.dasd=0.0.1514 zfcp.allow_lun_scan=0
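
Because a stray line break in a .prm file is easy to miss, a quick check that each parameter file really is a single line can save a failed boot. A minimal sketch, assuming the files are kept in a parm_files directory on the bastion:
    [bastion]# for f in parm_files/*.prm; do printf '%s: %s line(s)\n' "$f" "$(wc -l < "$f")"; done
A count of 0 or 1 means the parameters are on one line; anything higher indicates an unwanted line break.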

Create the generic.ins file

Create the generic.ins file, which is used for the installation. It references the parameter files and the RHCOS files from Obtain the CoreOS (RHCOS) kernel, initramfs, rootfs and initrd.addrsize files and Create parameter file. Ensure that it is placed on your FTP server so that the Hardware Management Console (HMC) can access it. Additionally, move the kernel, initrd, parameter files, and initrd.addrsize to your FTP server.

Sample generic.ins file:
images/kernel.img 0x00000000
images/initrd.img 0x02000000
parm_files/bootstrap.prm 0x00010480
images/initrd.addrsize 0x00010408

Repeat this step for each node (bootstrap, control, and compute) that you set up using the HMC, ensuring that each node's generic.ins points to the correct parameter file.
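
One way to organize this is a directory per node on the FTP server, each containing its own generic.ins that references that node's .prm file. An illustrative layout (the directory names are examples only):
    bootstrap/generic.ins
    bootstrap/images/kernel.img
    bootstrap/images/initrd.img
    bootstrap/images/initrd.addrsize
    bootstrap/parm_files/bootstrap.prm
    control0/generic.ins        (same structure, pointing to parm_files/control0.prm)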

Use HMC to install OCP

  1. Log in to the IBM Hardware Management Console (HMC) portal. Select the bootstrap LPAR by navigating to Systems Management → LPAR (Partitions) → Load from Removable Media or Server.

    (Image: Select the bootstrap LPAR)

  2. Enter the details of the FTP server and specify the file path to the generic.ins file for the bootstrap.
    (Image: FTP server details fields)
  3. Select the correct generic.ins file and click OK. Enter your HMC password and click Yes to start the boot process. Wait for the load process to complete successfully.
  4. Go to Operating System Messages to view the boot logs.

    (Image: Menu actions to get to the boot logs)

    The boot process takes some time. After the login message appears, the machine is ready.

  5. For each machine added to the cluster, multiple certificate signing requests (CSRs) need to be approved. Approve the pending certificates from the bastion and repeat the process until no pending certificates remain (a simple approval loop is sketched after this list).
    oc get csr -o name | xargs oc adm certificate approve
  6. Important: Repeat the above process for all control and compute nodes. Ensure that the LPARs are booted in the following order.
    1. Bootstrap
    2. All control nodes
    3. All compute nodes
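
New CSRs appear in batches as each node joins (kubelet client certificates first, then serving certificates), so the approval usually has to be run several times. A minimal sketch of a loop that approves pending CSRs every 30 seconds, to be stopped with Ctrl+C once oc get csr shows no Pending entries:
    [bastion]# while true; do oc get csr -o name | xargs -r oc adm certificate approve; sleep 30; done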

Verify the installation

Verify that the OCP installation was successful.
  1. Make sure all the certificates are approved by running oc get csr. Approve any pending certificates by running:
    oc get csr -o name | xargs oc adm certificate approve
  2. Check the status of the nodes and ensure that all nodes are in the Ready state.
    [bastion]# oc get nodes
    NAME       STATUS   ROLES                  AGE   VERSION
    compute0   Ready    worker                 44m   v1.30.7
    compute1   Ready    worker                 44m   v1.30.7
    control0   Ready    control-plane,master   20h   v1.30.7
    control1   Ready    control-plane,master   18h   v1.30.7
    control2   Ready    control-plane,master   44m   v1.30.7
  3. Verify the cluster status and ensure that every cluster operator shows True under AVAILABLE. This may take a few minutes; wait until every operator changes to True.
    [bastion]#  oc get clusteroperators
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    authentication                             4.17.12   True        False         False      37m
    baremetal                                  4.17.12   True        False         False      20h
    cloud-controller-manager                   4.17.12   True        False         False      20h
    cloud-credential                           4.17.12   True        False         False      23h
    cluster-autoscaler                         4.17.12   True        False         False      20h
    config-operator                            4.17.12   True        False         False      20h
    console                                    4.17.12   True        False         False      43m
    control-plane-machine-set                  4.17.12   True        False         False      20h
    csi-snapshot-controller                    4.17.12   True        False         False      20h
    dns                                        4.17.12   True        False         False      20h
    etcd                                       4.17.12   True        False         False      20h
    image-registry                             4.17.12   True        False         False      45m
    ingress                                    4.17.12   True        False         False      46m
    insights                                   4.17.12   True        False         False      20h
    kube-apiserver                             4.17.12   True        False         False      20h
    kube-controller-manager                    4.17.12   True        False         False      20h
    kube-scheduler                             4.17.12   True        False         False      20h
    kube-storage-version-migrator              4.17.12   True        False         False      20h
    machine-api                                4.17.12   True        False         False      20h
    machine-approver                           4.17.12   True        False         False      20h
    machine-config                             4.17.12   True        False         False      20h
    marketplace                                4.17.12   True        False         False      20h
    monitoring                                 4.17.12   True        False         False      41m
    network                                    4.17.12   True        False         False      18h
    node-tuning                                4.17.12   True        False         False      49m
    openshift-apiserver                        4.17.12   True        False         False      46m
    openshift-controller-manager               4.17.12   True        False         False      20h
    openshift-samples                          4.17.12   True        False         False      45m
    operator-lifecycle-manager                 4.17.12   True        False         False      20h
    operator-lifecycle-manager-catalog         4.17.12   True        False         False      20h
    operator-lifecycle-manager-packageserver   4.17.12   True        False         False      18h
    service-ca                                 4.17.12   True        False         False      20h
    storage                                    4.17.12   True        False         False      20h
  4. Check the OCP cluster version.
    [bastion]# oc get clusterversion
    NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
    version   4.17.12   True        False         37m     Cluster version is 4.17.12
  5. Check the overall bootstrap status. When you see the message It is now safe to remove the bootstrap resources, you can remove the bootstrap node: delete its HAProxy backend entries and its DNS record. The bootstrap LPAR can be reused as an additional compute node if needed.
    [bastion ocp_package]# ./openshift-install --dir /var/www/html/ignition/ wait-for bootstrap-complete --log-level=info
    INFO Waiting up to 20m0s (until 11:18AM CET) for the Kubernetes API at https://api.ocp0.sa.boe:6443...
    INFO API v1.30.7 up
    INFO Waiting up to 45m0s (until 11:43AM CET) for bootstrapping to complete...
    INFO It is now safe to remove the bootstrap resources
    INFO Time elapsed: 0s
  6. The final step is to retrieve the OCP console URL using the oc command below. For the password, check the installation directory on the bastion; it is located under auth/kubeadmin-password.
    [bastion]# oc whoami --show-console
    https://console-openshift-console.apps.ocp0.sa.boe
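
To confirm that the installation has fully completed and to log in from the bastion, something like the following can be used; the paths assume the installation_files directory created earlier and the cluster domain from this example, so adjust both to your environment:
    [bastion ocp_package]# ./openshift-install --dir /var/www/html/installation_files wait-for install-complete --log-level=info
    [bastion]# export KUBECONFIG=/var/www/html/installation_files/auth/kubeconfig
    [bastion]# oc login -u kubeadmin -p "$(cat /var/www/html/installation_files/auth/kubeadmin-password)" https://api.ocp0.sa.boe:6443
wait-for install-complete also prints the console URL and the kubeadmin password once the cluster is ready.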