OCP and OCS upgrade in a connected environment by using Red Hat OpenShift console UI
This section explains how to upgrade from Red Hat® OpenShift® Container Platform (OCP) 4.8 to 4.10 on Cloud Pak for Data System version 2.0.2.1 with houseconfig setup.
Before you begin
Make sure that:
- Cloud Pak for Data System version 2.0.2 is configured with houseconfig setup to access the external network.
- The cluster is in a healthy state. Run the following command and confirm that all nodes show the Ready status, as in the following output.
[root@gt36-node1 ~]# oc get nodes
NAME                STATUS   ROLES           AGE     VERSION
e1n1-master.fbond   Ready    master,worker   7d23h   v1.21.8+ee73ea2
e2n1-master.fbond   Ready    master,worker   7d23h   v1.21.8+ee73ea2
e3n1-master.fbond   Ready    master,worker   7d23h   v1.21.8+ee73ea2
e4n1.fbond          Ready    worker          7d23h   v1.21.8+ee73ea2
e5n1.fbond          Ready    worker          7d23h   v1.21.8+ee73ea2
e6n1.fbond          Ready    worker          7d23h   v1.21.8+ee73ea2
- The machine config pools (MCP) are up to date. Run the following command and review the MCP information, as in the following output.
[root@gt36-node1 ~]# oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-ab51dd1d87cd414aab0dd461fe9f4801   True      False      False      3              3                   3                     0                      7d23h
unset    rendered-unset-9ae92d3a65b883c521d2c2a33960af69    True      False      False      0              0                   0                     0                      7d23h
worker   rendered-worker-9ae92d3a65b883c521d2c2a33960af69   True      False      False      3              3                   3                     0                      7d23h
- All cluster operators are in a healthy state. Run the following command and review the health status of the cluster operators, as in the following output.
[root@gt36-node1 ~]# oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.8.37    True        False         False      2d18h
baremetal                                  4.8.37    True        False         False      3d17h
cloud-credential                           4.8.37    True        False         False      7d23h
cluster-autoscaler                         4.8.37    True        False         False      7d23h
config-operator                            4.8.37    True        False         False      7d23h
console                                    4.8.37    True        False         False      2d9h
csi-snapshot-controller                    4.8.37    True        False         False      3d15h
dns                                        4.8.37    True        False         False      3d12h
etcd                                       4.8.37    True        False         False      7d23h
image-registry                             4.8.37    True        False         False      2d22h
ingress                                    4.8.37    True        False         False      2d9h
insights                                   4.8.37    True        False         False      7d23h
kube-apiserver                             4.8.37    True        False         False      7d23h
kube-controller-manager                    4.8.37    True        False         False      7d23h
kube-scheduler                             4.8.37    True        False         False      7d23h
kube-storage-version-migrator              4.8.37    True        False         False      3d10h
machine-api                                4.8.37    True        False         False      7d23h
machine-approver                           4.8.37    True        False         False      7d23h
machine-config                             4.8.37    True        False         False      3d10h
marketplace                                4.8.37    True        False         False      3d16h
monitoring                                 4.8.37    True        False         False      3d11h
network                                    4.8.37    True        False         False      7d23h
node-tuning                                4.8.37    True        False         False      2d9h
openshift-apiserver                        4.8.37    True        False         False      2d18h
openshift-controller-manager               4.8.37    True        False         False      7d23h
openshift-samples                          4.8.37    True        False         False      3d12h
operator-lifecycle-manager                 4.8.37    True        False         False      7d23h
operator-lifecycle-manager-catalog         4.8.37    True        False         False      7d23h
operator-lifecycle-manager-packageserver   4.8.37    True        False         False      3d15h
service-ca                                 4.8.37    True        False         False      7d23h
storage                                    4.8.37    True        False         False      7d23h
- The OpenShift Container Storage (OCS) ceph status is HEALTH_OK. Run the following command and review the OCS health status, as in the following output.
[root@gt36-node1 ~]# oc -n openshift-storage rsh `oc get pods -n openshift-storage | grep ceph-tool | cut -d ' ' -f1` ceph status
  cluster:
    id:     3bc56e8e-c031-48dc-b169-7d29008ab07e
    health: HEALTH_OK
Note: Unless stated otherwise, run all of the commands in this section from e1n1.
Procedure
Manually acknowledging the upgrade to OpenShift Container Platform (OCP) 4.9
Upgrading to an OCP version higher than 4.8 requires manual acknowledgment from the administrator. For more information, see Preparing to upgrade to OpenShift Container Platform 4.9.
[root@gt36-node1 ~]# oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.8-kube-1.22-api-removals-in-4.9":"true"}}' --type=merge
configmap/admin-acks patched
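Optionally, you can confirm from e1n1 that the acknowledgment was recorded. The following check simply reads back the same config map that was patched above; the key ack-4.8-kube-1.22-api-removals-in-4.9 should appear under data with the value "true".
# Read back the admin-acks config map and review its data section
oc -n openshift-config get cm admin-acks -o yaml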
Accessing the Red Hat OpenShift console
Starting the Red Hat OpenShift console from the system web console requires extra configuration. To start the console, use the following workaround.
Workaround:
- In the website URL, replace localcluster.fbond with the customer FQDN and access the link. For example, modify
https://oauth-openshift.apps.localcluster.fbond/oauth/authorize?client_id=console&redirect_uri=https%3A%2F%2Fopenshift-console.gt23-app.rtp.raleigh.ibm.com%2Fauth%2Fcallback&response_type=code&scope=user%3Afull&state=7ac58018
to
https://oauth-openshift.apps.gt23-app.rtp.raleigh.ibm.com/oauth/authorize?client_id=console&redirect_uri=https%3A%2F%2Fopenshift-console.gt23-app.rtp.raleigh.ibm.com%2Fauth%2Fcallback&response_type=code&scope=user%3Afull&state=7ac58018
- Select kubeadmin as the authentication method on the OCP console login page.
- Retrieve the kubeadmin password by running the following command.
cat /opt/ibm/appliance/platform/xcat/config_files/coreos/.kadm/kubeadmin-password
- Log in with kubeadmin as the username and the retrieved password.
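If you are unsure which customer FQDN to substitute in the URL, one way to find the externally routed console address (which contains the apps domain for your cluster) is to query it with the oc client from e1n1, for example:
# Print the web console URL for the current cluster
oc whoami --show-console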
Upgrade to OCP 4.9 using Red Hat OpenShift web console
Procedure
- From the Red Hat OpenShift web console, go to Administration → Cluster Settings.
- Set the Channel to stable-4.9.
- Click Update to update the channel. For more information, see Updating a cluster using the web console.
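While the update runs, you can optionally follow its progress from e1n1 with the oc client; the following commands are a minimal way to watch the rollout and are not a substitute for the web console status.
# Overall cluster version and update progress
oc get clusterversion
# Current channel and any available or in-progress updates
oc adm upgrade
# Cluster operators should return to AVAILABLE=True, DEGRADED=False as the update completes
oc get co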
Upgrade OCS to Red Hat OpenShift Data Foundation (ODF) 4.9
Before you begin
Patch the ocs-operator and local-storage-operator subscriptions to use redhat-operators as the catalog source, as shown.
oc patch subscription local-storage-operator -n local-storage --type json --patch '[{"op": "replace", "path": "/spec/source", "value": "redhat-operators" }]'
oc patch subscription ocs-operator -n openshift-storage --type json --patch '[{"op": "replace", "path": "/spec/source", "value": "redhat-operators" }]'
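To confirm that both subscriptions now point at the redhat-operators catalog source, you can optionally read the source back, for example:
# Each command should print: redhat-operators
oc get subscription local-storage-operator -n local-storage -o jsonpath='{.spec.source}{"\n"}'
oc get subscription ocs-operator -n openshift-storage -o jsonpath='{.spec.source}{"\n"}'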
Procedure
- On the Red Hat OpenShift web console, go to OperatorHub.
- Search for OpenShift Data Foundation using the Filter by keyword box and click the OpenShift Data Foundation tile.
- Click Install; the Install Operator page appears.
- On the Install Operator page, click Install.
Wait for the Operator installation to complete. For more information, see Updating Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation 4.9.
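One optional way to confirm from e1n1 that the installation finished is to list the cluster service versions in the openshift-storage namespace and wait for the ODF 4.9 entries to reach the Succeeded phase, for example:
# The PHASE column should show Succeeded for the 4.9 operators
oc get csv -n openshift-storage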
Upgrade the local-storage component to 4.9
You must upgrade the local-storage component to 4.9 after completing the ODF 4.9 installation.
Procedure
- On the Red Hat OpenShift web console, go to Operators → Installed Operators.
- Select the local-storage project.
- Click the local storage operator name.
- Click the Subscription tab and click the link under Update Channel.
- Update the channel to 4.9 and click Save. Wait for the operator Status to change to Up to date.
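You can optionally confirm the channel change from e1n1 by reading it back from the subscription, for example:
# Should print the channel you selected (4.9)
oc get subscription local-storage-operator -n local-storage -o jsonpath='{.spec.channel}{"\n"}'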
Upgrade to OCP 4.10 using Red Hat OpenShift web console
Procedure
- From the Red Hat OpenShift web console, go to Administration → Cluster Settings.
- Set the Channel to stable-4.10.
- Click Update to update the channel. For more information, see Updating a cluster using the web console.
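As with the 4.9 update, you can optionally follow the 4.10 rollout from e1n1, for example:
# Desired and current version, plus update progress
oc get clusterversion
# Confirms the channel and shows available or in-progress updates
oc adm upgrade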
Upgrade OCS to Red Hat OpenShift Data Foundation 4.10
Procedure
Upgrade the local-storage component to 4.10
You must upgrade the local-storage component to 4.10 after completing the ODF 4.10 installation.
Procedure
- On the Red Hat OpenShift web console, go to Operators → Installed Operators.
- Select the local-storage project.
- Click the local storage operator name.
- Click the Subscription tab and click the link under Update Channel.
- Update the channel to 4.10 and click Save. Wait for the operator Status to change to Up to date.
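To optionally double-check the result from e1n1, you can read back the subscription channel and confirm that the operator's cluster service version reached the Succeeded phase, for example:
# Should print 4.10
oc get subscription local-storage-operator -n local-storage -o jsonpath='{.spec.channel}{"\n"}'
# The PHASE column should show Succeeded
oc get csv -n local-storage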