Use these instructions to roll back from version 1.6.5 of Netcool® Operations Insight® to a previously deployed version 1.6.4, by using the Red Hat® OpenShift® Operator Lifecycle Manager (OLM) user interface (UI) or the command line.
Before you begin
If you are attempting to roll back a failed upgrade, the redis pods might become stuck. If this occurs, manually restart the redis pods by deleting them with the following command:
oc delete pod redis*
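Note that the wildcard in `oc delete pod redis*` is expanded by your shell against local files, not by `oc` against pod names. A sketch that deletes every pod whose name contains `redis` (the pod naming convention is an assumption; check `oc get pods` first):

```shell
# Delete all pods whose names contain "redis"; the owning
# StatefulSet/Deployment recreates them, which restarts redis.
if command -v oc >/dev/null; then
  oc get pods -o name | grep redis | xargs --no-run-if-empty oc delete
fi
```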
Procedure
- If Netcool Operations Insight 1.6.4 was upgraded to version 1.6.5 by using airgap, then before you roll back from version 1.6.5 to 1.6.4, you must set the image registry back to the 1.6.4 Docker image repository. Otherwise, the cem-operator and asm-operator pods fail with ImagePullError errors.
- Edit the noi-operator deployment, and find the key-value pair for OPERATOR_REPO. Its value is set to the 1.6.5 airgap target registry. Replace this value with the Netcool Operations Insight 1.6.4 image registry where the Netcool Operations Insight 1.6.4 Passport Advantage® (PPA) package is uploaded, for example image-registry.openshift-image-registry.svc:5000/<namespace>.
oc edit deploy noi-operator
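Instead of editing the deployment interactively, the same change can be sketched with `oc set env`; the namespace value below is a placeholder, substitute your own:

```shell
# Point OPERATOR_REPO back at the 1.6.4 image registry.
# "netcool" is a placeholder namespace - substitute yours.
NAMESPACE="netcool"
REGISTRY="image-registry.openshift-image-registry.svc:5000/${NAMESPACE}"
if command -v oc >/dev/null; then
  oc set env deploy/noi-operator OPERATOR_REPO="${REGISTRY}"
fi
```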
- If the Netcool Operations Insight 1.6.4 image registry is authenticated and requires a pull secret, edit the noi-operator service account and add the secret to the imagePullSecrets section.
oc edit serviceaccount noi-operator
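The same edit can be applied non-interactively with `oc patch`; the secret name here is a hypothetical placeholder:

```shell
# Add an image pull secret to the noi-operator service account.
# "noi-164-pull-secret" is a placeholder - use your registry secret's name.
SECRET="noi-164-pull-secret"
PATCH="{\"imagePullSecrets\":[{\"name\":\"${SECRET}\"}]}"
if command -v oc >/dev/null; then
  oc patch serviceaccount noi-operator -p "${PATCH}"
fi
```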
- Rollback can be performed from the command line or from the OLM UI.
To roll back from the command line, run oc edit noi and change the version back to 1.6.4.
To roll back from the OLM UI, select the Cloud Deployment tab if your deployment is only on Red Hat OpenShift, or the Hybrid Deployment tab if you have a hybrid deployment that is on Red Hat OpenShift and on premises. Select Edit NOI and then the YAML tab. Change the version back to 1.6.4 and save the changes.
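As an alternative to `oc edit noi`, the version field can be patched directly. The instance name below is a placeholder; substitute the name shown by `oc get noi`:

```shell
# Patch the NOI custom resource back to version 1.6.4.
# "evtmanager" is a placeholder instance name - run "oc get noi" for yours.
INSTANCE="evtmanager"
PATCH='{"spec":{"version":"1.6.4"}}'
if command -v oc >/dev/null; then
  oc patch noi "${INSTANCE}" --type merge -p "${PATCH}"
fi
```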
- Delete the noi-topology-system-health-scheduledjob job and the noi-full-topology-system-health-cronjob cron job by running the following commands:
oc delete job noi-topology-system-health-scheduledjob
oc delete cronjob noi-full-topology-system-health-cronjob
Verify that the cron job is recreated by running oc get cronjob. The output should be:
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
noi-full-curator-pattern-metrics 0 0 * * * False 0 <none> 143m
noi-full-healthcron 1 * * * * False 0 28m 143m
noi-full-register-cnea-mgmt-artifact 1,*/5 * * * * False 0 4m10s 143m
noi-full-topology-system-health-cronjob */5 * * * * True 0 <none> 44m
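The delete-and-verify steps above can be combined into one sketch that polls briefly until the operator recreates the cron job (the 30-second window is an assumption):

```shell
# Delete the health job and cron job, then poll for up to ~30 seconds
# until the cron job is recreated.
JOB="noi-topology-system-health-scheduledjob"
CRONJOB="noi-full-topology-system-health-cronjob"
if command -v oc >/dev/null; then
  oc delete job "${JOB}"
  oc delete cronjob "${CRONJOB}"
  for attempt in 1 2 3 4 5 6; do
    oc get cronjob "${CRONJOB}" && break
    sleep 5
  done
fi
```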
- Obtain the metrics deployments by running the command:
oc get deployment | grep -i metric
The output is similar to the following:
noi-metric-action-service-metricactionservice 0/0 0 0 34h
noi-metric-api-service-metricapiservice 0/0 0 0 34h
noi-metric-ingestion-service-metricingestionservice 0/0 0 0 34h
noi-metric-trigger-service-metrictriggerservice 0/0 0 0 34h
Delete the metrics deployments by using the command:
oc delete deploy <deployment name>
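The four metrics deployments can also be removed in one pass; a sketch assuming their names all contain `metric`, as in the output above:

```shell
# Delete every deployment whose name contains "metric".
if command -v oc >/dev/null; then
  oc get deployment -o name | grep -i metric | xargs --no-run-if-empty oc delete
fi
```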
- Delete the ibm-hdm-analytics-dev-v3-evt-pi-processor and common-datarouting deployments:
oc delete deployment <release-name>-ibm-hdm-analytics-dev-v3-evt-pi-processor
oc delete deployment <release-name>-common-datarouting
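With the release name in a variable, the two deletions above look like the following; the release name `noi` is a placeholder, substitute your own:

```shell
# Delete the event-PI processor and data-routing deployments.
# "noi" is a placeholder release name - substitute your release name.
RELEASE="noi"
if command -v oc >/dev/null; then
  oc delete deployment "${RELEASE}-ibm-hdm-analytics-dev-v3-evt-pi-processor"
  oc delete deployment "${RELEASE}-common-datarouting"
fi
```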