OCP and OCS upgrade in a connected environment by using Red Hat OpenShift console UI

This section explains how to upgrade from Red Hat® OpenShift® Container Platform (OCP) 4.8 to 4.10 on Cloud Pak for Data System version 2.0.2.1 with houseconfig setup.

Before you begin

Make sure that:

  • Cloud Pak for Data System version 2.0.2 is configured with the houseconfig setup to access the external network.
  • The cluster is in a healthy state. Verify by running the following command.
    oc get nodes
    You can see the following status information (all nodes with Ready status).
    [root@gt36-node1 ~]# oc get nodes
    NAME                STATUS   ROLES           AGE     VERSION
    e1n1-master.fbond   Ready    master,worker   7d23h   v1.21.8+ee73ea2
    e2n1-master.fbond   Ready    master,worker   7d23h   v1.21.8+ee73ea2
    e3n1-master.fbond   Ready    master,worker   7d23h   v1.21.8+ee73ea2
    e4n1.fbond          Ready    worker          7d23h   v1.21.8+ee73ea2
    e5n1.fbond          Ready    worker          7d23h   v1.21.8+ee73ea2
    e6n1.fbond          Ready    worker          7d23h   v1.21.8+ee73ea2
    
  • The machine config pools (MCPs) are up to date. Verify by running the following command.
    oc get mcp
    You can see the following MCP information.
    [root@gt36-node1 ~]# oc get mcp
    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-ab51dd1d87cd414aab0dd461fe9f4801   True      False      False      3              3                   3                     0                      7d23h
    unset    rendered-unset-9ae92d3a65b883c521d2c2a33960af69    True      False      False      0              0                   0                     0                      7d23h
    worker   rendered-worker-9ae92d3a65b883c521d2c2a33960af69   True      False      False      3              3                   3                     0                      7d23h
    
  • All cluster operators are in a healthy state. Verify by running the following command.
    oc get co
    You can see the health status of cluster operators.
    [root@gt36-node1 ~]# oc get co
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.8.37    True        False         False      2d18h
    baremetal                                  4.8.37    True        False         False      3d17h
    cloud-credential                           4.8.37    True        False         False      7d23h
    cluster-autoscaler                         4.8.37    True        False         False      7d23h
    config-operator                            4.8.37    True        False         False      7d23h
    console                                    4.8.37    True        False         False      2d9h
    csi-snapshot-controller                    4.8.37    True        False         False      3d15h
    dns                                        4.8.37    True        False         False      3d12h
    etcd                                       4.8.37    True        False         False      7d23h
    image-registry                             4.8.37    True        False         False      2d22h
    ingress                                    4.8.37    True        False         False      2d9h
    insights                                   4.8.37    True        False         False      7d23h
    kube-apiserver                             4.8.37    True        False         False      7d23h
    kube-controller-manager                    4.8.37    True        False         False      7d23h
    kube-scheduler                             4.8.37    True        False         False      7d23h
    kube-storage-version-migrator              4.8.37    True        False         False      3d10h
    machine-api                                4.8.37    True        False         False      7d23h
    machine-approver                           4.8.37    True        False         False      7d23h
    machine-config                             4.8.37    True        False         False      3d10h
    marketplace                                4.8.37    True        False         False      3d16h
    monitoring                                 4.8.37    True        False         False      3d11h
    network                                    4.8.37    True        False         False      7d23h
    node-tuning                                4.8.37    True        False         False      2d9h
    openshift-apiserver                        4.8.37    True        False         False      2d18h
    openshift-controller-manager               4.8.37    True        False         False      7d23h
    openshift-samples                          4.8.37    True        False         False      3d12h
    operator-lifecycle-manager                 4.8.37    True        False         False      7d23h
    operator-lifecycle-manager-catalog         4.8.37    True        False         False      7d23h
    operator-lifecycle-manager-packageserver   4.8.37    True        False         False      3d15h
    service-ca                                 4.8.37    True        False         False      7d23h
    storage                                    4.8.37    True        False         False      7d23h
    
  • The OpenShift Container Storage (OCS) Ceph status is HEALTH_OK. Verify by running the following command (an alternative, label-based way to locate the Ceph toolbox pod is sketched after this list).
    oc -n openshift-storage rsh `oc get pods -n openshift-storage | grep ceph-tool | cut -d ' ' -f1` ceph status
    You can see the OCS health status.
    [root@gt36-node1 ~]# oc -n openshift-storage rsh `oc get pods -n openshift-storage | grep ceph-tool | cut -d ' ' -f1` ceph status
      cluster:
        id:     3bc56e8e-c031-48dc-b169-7d29008ab07e
        health: HEALTH_OK
    
Note: Run all the commands that are mentioned here from e1n1 unless stated otherwise.
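
The grep-based selection of the Ceph toolbox pod in the last check depends on the column layout of oc get pods. If that proves fragile, an alternative sketch selects the toolbox pod by its label (assuming the standard app=rook-ceph-tools label that the OCS toolbox deployment applies):

# Locate the Ceph toolbox pod by label instead of by grep, then query the cluster health
TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
oc -n openshift-storage rsh $TOOLS_POD ceph status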

Procedure

  1. Set up your Red Hat account and link the Red Hat entitlement to your account. For help, see Accessing Red Hat entitlements from your IBM Cloud Pak.
  2. Obtain the pull secret file with Red Hat credentials from the Red Hat OpenShift Cluster Manager and save it as pull-secret.json.
  3. Validate the external connectivity and Red Hat credentials by running:
    podman pull --authfile /root/pull-secret.json registry.redhat.io/openshift4/ose-local-storage-mustgather-rhel8
    The expected output:
    
    [root@gt36-node1 ~]# podman pull --authfile /root/pull-secret.json registry.redhat.io/openshift4/ose-local-storage-mustgather-rhel8
    Trying to pull registry.redhat.io/openshift4/ose-local-storage-mustgather-rhel8:latest...
    Getting image source signatures
    Checking if image destination supports signatures
    Copying blob 550b4cc31921 done
    Copying blob d8190195889e done
    Copying blob f0f4937bc70f done
    Copying blob 97da74cc6d8f done
    Copying blob 833de2b0ccff done
    Copying blob 07a17b829f30 done
    Copying config 6f6a8c2be0 done
    Writing manifest to image destination
    Storing signatures
    6f6a8c2be07dd54a1506241a605d87f46ebd1bee1add316829c87c340210aee0
    
  4. Update the global cluster pull secret file to authenticate on Red Hat registries.
    1. Retrieve the current cluster pull secret file by running:
      oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull_secret_old
      The following information appears.
      [root@gt36-node1 ~]# oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull_secret_old
      [root@gt36-node1 ~]# cat pull_secret_old
      {"auths": {"hub.fbond:5000": {"auth": "b2NhZG1pbjpvY2FkbWlu","email": "root@hub.fbond"}}}[root@gt36-node1 ~]#
      
    2. Merge this content into pull-secret.json (a possible jq merge is sketched after this procedure) and use the merged file to set the global pull secret on the cluster by running:
      oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret.json
      The following confirmation appears.
      [root@gt36-node1 ~]# oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret.json
      secret/pull-secret data updated
      
    For more information, see OpenShift help on managing images.
  5. Enable the default catalog sources to access the latest Red Hat operator sources by running:
    oc patch OperatorHub cluster --type json -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": false}]'
    The following confirmation appears.
    [root@gt36-node1 ~]# oc patch OperatorHub cluster --type json -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": false}]'
    operatorhub.config.openshift.io/cluster patched
    
  6. Make sure that the operator pods are in the Running state under the openshift-marketplace namespace by running:
    oc get pods -n openshift-marketplace
    The expected output with the status information:
    NAME                                                              READY   STATUS      RESTARTS   AGE
    33e0b5bfab49b1802c8543c13593c4ab6e56c087bf00159a8bd5859141rgclk   0/1     Completed   0          3d11h
    ap-keepalived-operator-index-4z259                                1/1     Running     8          3d11h
    ap-nettools-operator-index-7pnd9                                  1/1     Running     6          3d6h
    ap-storage-operators-f2hp7                                        1/1     Running     0          2d10h
    appgum-operators-gwgws                                            1/1     Running     6          3d6h
    certified-operators-x6kbj                                         1/1     Running     0          49s
    community-operators-54kq4                                         1/1     Running     0          48s
    cpd-platform-bgd4n                                                1/1     Running     8          3d11h
    custom-redhat-operators-kw9pq                                     1/1     Running     1          2d19h
    d5d6cba0f745806d76d87f36482c281b250abd2eff473959d55d606b40n9kxg   0/1     Completed   0          3d11h
    hardware-pulse-operators-zs9wd                                    1/1     Running     0          2d10h
    magneto-operators-6cxkh                                           1/1     Running     0          2d10h
    marketplace-operator-6d4984967b-sqn5r                             1/1     Running     8          3d11h
    nodeos-operator-index-2w2mp                                       1/1     Running     6          3d6h
    opencloud-operators-jd7v2                                         1/1     Running     8          3d11h
    redhat-marketplace-t5tt2                                          1/1     Running     0          48s
    redhat-operators-7rnnx                                            1/1     Running     0          49s
    rh-cluster-logging-w5nnq                                          1/1     Running     8          3d11h
    yosemite-operator-catalog-qlwl7                                   1/1     Running     2          2d23h
    
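The merge in step 4 can be done with any JSON tool. A minimal sketch with jq, assuming the files were saved in /root as pull_secret_old and pull-secret.json (the jq * operator merges the auths entries from both files):

# Combine the existing cluster auths with the Red Hat credentials, then replace pull-secret.json
jq -s '.[0] * .[1]' /root/pull_secret_old /root/pull-secret.json > /root/pull-secret-merged.json
mv /root/pull-secret-merged.json /root/pull-secret.json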

Manually acknowledging the upgrade to OpenShift Container Platform (OCP) 4.9

Upgrading to an OCP version higher than 4.8 requires manual acknowledgment from the administrator. For more information, see Preparing to upgrade to OpenShift Container Platform 4.9. Provide the acknowledgment by running the following command.

[root@gt36-node1 ~]# oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.8-kube-1.22-api-removals-in-4.9":"true"}}' --type=merge
configmap/admin-acks patched
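
Before you apply the acknowledgment, you can review which APIs that are removed in Kubernetes 1.22 are still being requested on the cluster. A possible check, based on the APIRequestCount resource that the Red Hat preparation guide describes:

# List resources flagged for removal in an upcoming release, with the release in which they are removed
oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'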

Accessing Red Hat OpenShift console

Starting Red Hat OpenShift console from the system web console requires extra configuration. To start the console, use the following workaround.

Workaround:
  1. In the website URL, replace localcluster.fbond with the customer FQDN and access the link. If you are not sure of the FQDN, see the sketch after this list for reading the console host from the cluster. For example, modify:
    https://oauth-openshift.apps.localcluster.fbond/oauth/authorize?client_id=console&redirect_uri=https%3A%2F%2Fopenshift-console.gt23-app.rtp.raleigh.ibm.com%2Fauth%2Fcallback&response_type=code&scope=user%3Afull&state=7ac58018

    To

    https://oauth-openshift.apps.gt23-app.rtp.raleigh.ibm.com/oauth/authorize?client_id=console&redirect_uri=https%3A%2F%2Fopenshift-console.gt23-app.rtp.raleigh.ibm.com%2Fauth%2Fcallback&response_type=code&scope=user%3Afull&state=7ac58018
  2. Select kubeadmin as the authentication method on the OCP console login page.
  3. Retrieve the password by running the following command.
    cat /opt/ibm/appliance/platform/xcat/config_files/coreos/.kadm/kubeadmin-password
  4. Use kubeadmin as the username and the retrieved password to log in.
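
If you are not sure of the customer FQDN to substitute in step 1, the console host can be read directly from the cluster. A possible check from e1n1:

# Show the external console host name; the oauth-openshift route uses the same apps domain
oc get route console -n openshift-console -o jsonpath='{.spec.host}{"\n"}'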

Upgrade to OCP 4.9 using Red Hat OpenShift web console

Procedure

  1. From the Red Hat OpenShift web console, go to Administration > Cluster Settings.
  2. Edit the Channel to stable-4.9.
  3. Click Update to update the channel.
    For more information, see Updating a cluster using the web console.
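
The web console drives the update, but you can also watch progress from e1n1 with standard oc commands; a minimal sketch:

# Confirm the channel and available updates, then track the rollout until all cluster operators report the new version
oc adm upgrade
oc get clusterversion
watch -n 60 'oc get co'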

Upgrade OCS to Red Hat OpenShift Data Foundation (ODF) 4.9

Before you begin

Update the subscriptions of the ocs-operator and local-storage-operator to use the redhat-operators catalog source, as shown.
oc patch subscription local-storage-operator -n local-storage --type json --patch '[{"op": "replace", "path": "/spec/source", "value": "redhat-operators" }]'
oc patch subscription ocs-operator -n openshift-storage --type json --patch '[{"op": "replace", "path": "/spec/source", "value": "redhat-operators" }]'
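
A quick way to confirm that both subscriptions now point at redhat-operators after the patches:

# Each command is expected to print redhat-operators
oc get subscription local-storage-operator -n local-storage -o jsonpath='{.spec.source}{"\n"}'
oc get subscription ocs-operator -n openshift-storage -o jsonpath='{.spec.source}{"\n"}'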

Procedure

  1. On the Red Hat OpenShift web console, go to OperatorHub.
  2. Search for OpenShift Data Foundation using the Filter by keyword box and click the OpenShift Data Foundation tile.
  3. Click Install; the Install Operator page appears.
  4. On the Install Operator page, click Install. Wait for the Operator installation to complete.
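
Installation progress can also be watched from e1n1; the operator is complete when its CSV reaches the Succeeded phase, as in the sample output later in this section. A possible check:

# Watch the ClusterServiceVersions in openshift-storage until the ODF entries reach Succeeded
watch -n 30 'oc get csv -n openshift-storage'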

Upgrade the local-storage component to 4.9

You must upgrade the local-storage component to 4.9 after completing the ODF 4.9 installation.

Procedure

  1. Go to the installed operators under the local-storage namespace and click the local-storage operator.
  2. Go to the Subscription tab and update the channel to 4.9. A CLI alternative is sketched after this procedure.
  3. Follow the verification steps mentioned in Updating Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation 4.9.
    The following is the output on the local-storage namespace after upgrading to 4.9.
    [root@gt36-node1 ~]# oc get sub -n local-storage
    NAME                     PACKAGE                  SOURCE             CHANNEL
    local-storage-operator   local-storage-operator   redhat-operators   4.9
    
    [root@gt36-node1 ~]# oc get csv -n local-storage | grep local-storage
    local-storage-operator.4.9.0-202212051626   Local Storage                      4.9.0-202212051626   local-storage-operator.4.8.0-202201210133   Succeeded
    
    The following is the OpenShift storage output.
    [root@gt36-node1 ~]# oc get sub -n openshift-storage
    NAME           PACKAGE        SOURCE             CHANNEL
    mcg-operator   mcg-operator   redhat-operators   stable-4.9
    ocs-operator   ocs-operator   redhat-operators   stable-4.9
    odf-operator   odf-operator   redhat-operators   stable-4.9
    
    [root@gt36-node1 ~]# oc get csv -n openshift-storage
    NAME                              DISPLAY                            VERSION    REPLACES                                    PHASE
    elasticsearch-operator.5.3.4-13   OpenShift Elasticsearch Operator   5.3.4-13   elasticsearch-operator.4.6.0-202110121348   Succeeded
    mcg-operator.v4.9.13              NooBaa Operator                    4.9.13     mcg-operator.v4.9.12                        Succeeded
    ocs-operator.v4.9.13              OpenShift Container Storage        4.9.13     ocs-operator.v4.8.17                        Succeeded
    odf-operator.v4.9.13              OpenShift Data Foundation          4.9.13     odf-operator.v4.9.12                        Succeeded
    yosemite-operator.v1.0.1          IBM Cloud Pak for Data System      1.0.1      yosemite-operator.v1.0.0                    Succeeded
    
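If you prefer the CLI over the Subscription tab for step 2, the channel can be changed with a patch in the same style as the source patches earlier in this section; a possible sketch (use 4.10 instead of 4.9 for the later 4.10 step):

# Point the local-storage-operator subscription at the 4.9 update channel
oc patch subscription local-storage-operator -n local-storage --type json --patch '[{"op": "replace", "path": "/spec/channel", "value": "4.9"}]'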

Upgrade to OCP 4.10 using Red Hat OpenShift web console

Procedure

  1. From the Red Hat OpenShift web console, go to Administration > Cluster Settings.
  2. Edit the Channel to stable-4.10.
  3. Click Update to update the channel.
    For more information, see Updating a cluster using the web console.

Upgrade OCS to Red Hat OpenShift Data Foundation 4.10

Procedure

  1. On the Red Hat OpenShift web console, go to Operators > Installed Operators.
  2. Select the openshift-storage project.
  3. Click the OpenShift Data Foundation operator name.
  4. Click the Subscription tab and click the link under Update Channel.
  5. Select the stable-4.10 update channel and Save it.
  6. If the Upgrade status shows requires approval, click requires approval.
    1. On the Install Plan Details page, click Preview Install Plan.
    2. Review the install plan and click Approve.

    Wait for the Status to change from Unknown to Created.

  7. Go to Operators > Installed Operators.
  8. Select the openshift-storage project.
    Wait for the OpenShift Data Foundation operator Status to change to Up to date.

    Verification steps:

    1. Check the Version below the OpenShift Data Foundation name and check the operator status.
      1. Go to Operators > Installed Operators and select the openshift-storage project.
      2. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick.
    2. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient.
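
From e1n1, the health and version can also be confirmed with the same checks used in the prerequisites; a possible sketch (the label-based selection assumes the standard app=rook-ceph-tools label on the toolbox pod):

# Expect HEALTH_OK from Ceph and 4.10 ODF CSVs in the Succeeded phase
TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
oc -n openshift-storage rsh $TOOLS_POD ceph status
oc get csv -n openshift-storage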

Upgrade the local-storage component to 4.10

You must upgrade the local-storage component to 4.10 after completing the ODF 4.10 installation.

Procedure

  1. On the Red Hat OpenShift web console, go to Operators > Installed Operators.
  2. Select the local-storage project.
  3. Click the Local Storage operator name.
  4. Click the Subscription tab and click the link under Update Channel.
  5. Update the channel to 4.10 and Save it.
    Wait for the operator Status to change to Up to date.
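
When the operator finishes, the subscription and CSV can be confirmed in the same way as after the 4.9 upgrade; a possible final check:

# Both should now report the 4.10 channel and a 4.10 local-storage CSV in the Succeeded phase
oc get sub -n local-storage
oc get csv -n local-storage | grep local-storage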