When upgrading the management subsystem from v2018 to v10, you can roll back to v2018.
About this task
The management subsystem upgrade can be rolled back to v2018 only before the upgrade CR reaches Status: cleanUpInProgress. The cleanUpInProgress state is set after the load job has completed successfully.
Complete the following steps to trigger a rollback and then clean up the cluster after a management rollback:
Procedure
- Determine the status of the upgrade:
- To view the status:
kubectl get upgradefromv2018.management.apiconnect.ibm.com <management-subsystem-name> -n <management-subsystem-namespace>
- To view details of the upgrade:
kubectl describe upgradefromv2018.management.apiconnect.ibm.com <management-subsystem-name> -n <management-subsystem-namespace>
Check for the condition with Status: True to determine the current state of the upgrade. A status condition of RollbackRequired indicates that the upgrade cannot proceed and must be rolled back.
If any errors are encountered, see Troubleshooting installation and upgrade on Kubernetes.
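To filter for the conditions that are currently True without scanning the full describe output, you can use a JSONPath query; a minimal sketch, assuming the CR exposes standard Kubernetes-style .status.conditions:

```shell
# Print the condition types whose status is "True" on the upgrade CR.
# Substitute your subsystem name and namespace for the placeholders.
kubectl get upgradefromv2018.management.apiconnect.ibm.com <management-subsystem-name> \
  -n <management-subsystem-namespace> \
  -o jsonpath='{.status.conditions[?(@.status=="True")].type}'
```

If RollbackRequired appears in the output, proceed with the rollback steps below.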
- If a rollback is needed, ensure that the required must-gather logs have been collected. See Logs for management upgrade: 2018 to 10 and Logs for extract and load jobs: 2018 to 10.
- The operator does not perform an automated rollback. To trigger a rollback, update the upgrade CR:
- Edit the CR:
kubectl edit upgradefromv2018.management.apiconnect.ibm.com <management-subsystem-name> -n <management-subsystem-namespace>
- Under spec, add:
- Save and exit to trigger the rollback of the v2018 upgrade.
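For reference, a hypothetical sketch of the spec addition; the field name triggerRollback is an assumption here, so confirm the exact field against the CRD for your operator version before saving:

```yaml
spec:
  # Hypothetical field name; verify against your operator's CRD.
  triggerRollback: true
```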
- When performing a rollback, the upgrade operator cleans up any v10 resources and brings the v2018 installation back up, as follows:
- No v10 resources are created if the upgrade process is reverted before the extract job runs.
- If the upgrade process is reverted after the extract job has run, you must wait for the upgrade operator to delete the v10 resources and scale the v2018 resources back up.
- The rollback performs the following steps:
- Deletes v10 secrets (if they were created). Note, however, that you must manually delete some secrets, as follows:
- Run kubectl get secret -n <management-subsystem-namespace> to fetch the secrets. Returned secrets:
mgmt-<uuid>-postgres
mgmt-<uuid>-postgres-backrest-repo-config
mgmt-<uuid>-postgres-pgbouncer
mgmt-<uuid>-postgres-postgres-secret
mgmt-<uuid>-postgres-postgres-secret-apicuser
mgmt-<uuid>-postgres-postgres-secret-replicator
mgmt-<uuid>-postgres-primaryuser-secret
mgmt-<uuid>-postgres-testuser-secret
mgmt-ca
mgmt-client
mgmt-db-client-apicuser
mgmt-db-client-pgbouncer
mgmt-db-client-postgres
mgmt-db-client-replicator
mgmt-natscluster-mgmt
mgmt-server
mgmt-v2018-to-v10-extract-token-xxxxx
mgmt-v2018-to-v10-load-token-xxxxx
- Use the following command to delete the returned secrets:
kubectl delete secret <secretName> -n <management-subsystem-namespace>
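The per-secret deletions can be batched in a small loop; a sketch assuming the secret names match the list returned above (substitute your namespace, and edit the list to include the UUID-based and token-suffixed names that kubectl actually returned):

```shell
NS=<management-subsystem-namespace>
# Edit this list to match the secrets returned by `kubectl get secret -n $NS`,
# including the mgmt-<uuid>-postgres* and *-token-xxxxx entries.
for s in mgmt-ca mgmt-client mgmt-server mgmt-natscluster-mgmt \
         mgmt-db-client-apicuser mgmt-db-client-pgbouncer \
         mgmt-db-client-postgres mgmt-db-client-replicator; do
  kubectl delete secret "$s" -n "$NS"
done
```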
- Restarts the Cassandra cluster. The upgrade operator waits for Cassandra to be Healthy before scaling up the v2018 deployments.
- Scales up the v2018 deployments.
- Once the rollback of the management upgrade completes, the operator status is set to Failed.
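To confirm that the v2018 deployments scaled back up after the rollback, check the pods in the subsystem namespace:

```shell
# All management pods should return to Running once the rollback completes.
kubectl get pods -n <management-subsystem-namespace>
```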