Installing the patch for version 4.6.4

After you install or upgrade Watson Knowledge Catalog to version 4.6.4, install this patch if you want to use offline backup and restore, or metadata enrichment. The patch fixes issues with offline backup and restore, global search, and profiling. If you do not use these features, the patch is not mandatory, but you can still install it if you plan to use them in the future.

About this task

Important: This patch is intended only for installations of or upgrades to version 4.6.4.

A project administrator must install this patch to fix these issues in 4.6.4.

Follow the set of instructions based on your install or upgrade setup:
  • Step 1 and its substeps describe how to download and copy the patch images to a local private registry for an air-gapped environment.
  • Step 2 and its substeps describe how to apply the patch by using the online IBM entitled registry, or by using the images that you copied to the local private registry in step 1.

Procedure

  1. To apply the patch in an air-gapped environment, proceed with the following steps.
    1. Log in to the OpenShift® Console as the cluster admin.
    2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file that you used for CASE downloads and image mirroring. An example file path:
      ${HOME}/.airgap/auth.json
      Or create an auth.json file that contains credentials to access cp.icr.io and your local private registry. For example:
      {
        "auths": {
          "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
          "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
        }
      }
      For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.
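      The auth value is the base64 encoding of the registry ID and key joined by a colon. A minimal sketch for producing it, assuming a Linux shell (on macOS, omit -w0); the ID and key values shown are placeholders for your own credentials:
      # Value for cp.icr.io (ID and API key from your IBM entitlement)
      echo -n "<id>:<apikey>" | base64 -w0
      # Value for the local private registry
      echo -n "<id>:<password>" | base64 -w0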
    3. Install skopeo by running:
      yum install skopeo
    4. To confirm the image path for the pods that will be patched, which determines where the patch images must be copied in the local private registry, run the following command:
      oc describe pod <patch image pod> | grep -i "image:"
      where <patch image pod> is the name of a pod that runs any of the images to be patched.
      For example:
      oc describe pod wkc-search-744d65b9b4-zkqj6 | grep -i "image:"
      Image: cp.icr.io/cp/cpd/wkc-search_master@sha256:24125174c2ca084da50e13d9cf24f0cd0e5175b767b0efb75fbd9830117df41d
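      Optionally, you can list the images of all affected pods in one command. A sketch, assuming ${PROJECT_CPD_INSTANCE} is the project where Watson Knowledge Catalog is installed and that the grep pattern matches your pod names:
      oc get pods -n ${PROJECT_CPD_INSTANCE} \
          -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[*].image | \
          grep -E 'wkc-search|spark-hb'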
    5. To get the local private registry source details, run the following commands:
      oc get imageContentSourcePolicy
      oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]
      The local private registry mirror repository and path details should be in the output of the describe command:
      - mirrors:
        - ${PRIVATE_REGISTRY_LOCATION}/cp/
        source: cp.icr.io/cp/cpd
      For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.
    6. Use the skopeo command with the appropriate auth.json file to copy the patch images from the IBM production registry to the local private registry:
      skopeo copy --all --authfile "<folder path>/auth.json" \
          --dest-tls-verify=false --src-tls-verify=false \
          docker://cp.icr.io/cp/cpd/wkc-search_master@sha256:42f82f6ee4f1643f07414c916fb8ce855d82aa9eb01f3a1b5249a2cae8ee1580 \
          docker://<local private registry>/cp/cpd/wkc-search_master@sha256:42f82f6ee4f1643f07414c916fb8ce855d82aa9eb01f3a1b5249a2cae8ee1580
      skopeo copy --all --authfile "<folder path>/auth.json" \
          --dest-tls-verify=false --src-tls-verify=false \
          docker://icr.io/cpopen/ibm-cpd-ccs-operator@sha256:004d28cd5e66aa97e5c08afa0866dc9cbee8471f744dad555fc01f695526c256 \
          docker://<local private registry>/cpopen/ibm-cpd-ccs-operator@sha256:004d28cd5e66aa97e5c08afa0866dc9cbee8471f744dad555fc01f695526c256
      skopeo copy --all --authfile "<folder path>/auth.json" \
          --dest-tls-verify=false --src-tls-verify=false \
          docker://cp.icr.io/cp/cpd/spark-hb-control-plane@sha256:0418c223d18ff02b402f915fb8a86741fad9c5f0c6c6b9150e583c0c37530061 \
          docker://<local private registry>/cp/cpd/spark-hb-control-plane@sha256:0418c223d18ff02b402f915fb8a86741fad9c5f0c6c6b9150e583c0c37530061
      skopeo copy --all --authfile "<folder path>/auth.json" \
          --dest-tls-verify=false --src-tls-verify=false \
          docker://cp.icr.io/cp/cpd/spark-hb-jkg@sha256:1463cdd02a52c02bf271b6ccce215111289c1c0e7a9775134eb10c6e65d7fd33 \
          docker://<local private registry>/cp/cpd/spark-hb-jkg@sha256:1463cdd02a52c02bf271b6ccce215111289c1c0e7a9775134eb10c6e65d7fd33
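      Optionally, confirm that each image is now available in the local private registry, for example with skopeo inspect. A sketch for the wkc-search_master digest from this patch; repeat for the other images:
      skopeo inspect --authfile "<folder path>/auth.json" --tls-verify=false \
          docker://<local private registry>/cp/cpd/wkc-search_master@sha256:42f82f6ee4f1643f07414c916fb8ce855d82aa9eb01f3a1b5249a2cae8ee1580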
  2. To apply the patch by using the online IBM entitled registry, or by using the images that you copied to the local private registry in step 1, proceed with the following steps.
    Note: In the following commands, ${PROJECT_CPD_INSTANCE} refers to the project name where Watson Knowledge Catalog is installed.
    1. Run the following command to patch the common core services (ccs) operator CSV:
      oc patch csv -n ${OPERATOR_NAMESPACE} ibm-cpd-ccs.v6.4.0 --type='json' -p='[{"op":"replace", "path":"/spec/install/spec/deployments/0/spec/template/spec/containers/0/image","value":"icr.io/cpopen/ibm-cpd-ccs-operator@sha256:004d28cd5e66aa97e5c08afa0866dc9cbee8471f744dad555fc01f695526c256"}]'
      In the command, replace ${OPERATOR_NAMESPACE} with the project namespace where the ccs operator has been installed, for example ibm-common-services or cpd-operators.
      Note: The ccs operator image update already contains the digest details for the wkc-search_master image included in this patch.
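      If you need to determine ${OPERATOR_NAMESPACE}, or want to confirm that the CSV now references the patched operator image, commands such as the following can help (the jsonpath mirrors the path used in the patch command):
      # Find the project that contains the ccs operator CSV.
      oc get csv --all-namespaces | grep ibm-cpd-ccs
      # Confirm the operator image that the patched CSV references.
      oc get csv ibm-cpd-ccs.v6.4.0 -n ${OPERATOR_NAMESPACE} \
          -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].image}'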
    2. Run the following command to apply the patch to the Analytics Engine Powered by Apache Spark custom resource (for example analyticsengine-sample):
      oc patch AnalyticsEngine analyticsengine-sample --namespace ${PROJECT_CPD_INSTANCE} --type merge --patch '{"spec": {"image_digests": {"spark-hb-control-plane":"sha256:0418c223d18ff02b402f915fb8a86741fad9c5f0c6c6b9150e583c0c37530061", "spark-hb-jkg-v33":"sha256:1463cdd02a52c02bf271b6ccce215111289c1c0e7a9775134eb10c6e65d7fd33"}}}'
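      To confirm that the digest overrides were added to the custom resource, you can inspect its spec, for example:
      oc get AnalyticsEngine analyticsengine-sample -n ${PROJECT_CPD_INSTANCE} \
          -o jsonpath='{.spec.image_digests}'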
    3. Wait for the ccs operator to complete the reconciliation. You can run the following command to monitor the reconciliation status:
      oc get CCS ccs-cr -n ${PROJECT_CPD_INSTANCE}
      After some time, the wkc-search pod in ${PROJECT_CPD_INSTANCE} will be up and running with the updated wkc-search_master image.
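      To confirm that the pod picked up the patched image, you can check its image reference as in step 1 (the pod name suffix differs on each cluster), for example:
      oc get pods -n ${PROJECT_CPD_INSTANCE} | grep wkc-search
      oc describe pod <wkc-search pod> -n ${PROJECT_CPD_INSTANCE} | grep -i "image:"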
    4. Wait for the Analytics Engine operator to complete the reconciliation. You can run the following command to monitor the reconciliation status:
      oc get AnalyticsEngine analyticsengine-sample -n ${PROJECT_CPD_INSTANCE}
      After some time, the spark-hb-jkg and spark-hb-control-plane pods in ${PROJECT_CPD_INSTANCE} will be up and running with the updated images.
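      You can confirm the Spark pods in the same way, for example:
      oc get pods -n ${PROJECT_CPD_INSTANCE} | grep -E 'spark-hb-control-plane|spark-hb-jkg'
      oc describe pod <spark-hb pod> -n ${PROJECT_CPD_INSTANCE} | grep -i "image:"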

Reverting the patch changes

Important: You must revert the image overrides to the original 4.6.4 images before you install or upgrade to a newer refresh or a new major release of IBM Cloud Pak® for Data.
To revert the image override, proceed with the following steps.
Note: ${PROJECT_CPD_INSTANCE} refers to the project name where Watson Knowledge Catalog is installed and ${OPERATOR_NAMESPACE} refers to the project where the ccs operator has been installed.
  1. To restore the ccs operator image to the original image, run the following patch command:
    oc patch csv -n ${OPERATOR_NAMESPACE} ibm-cpd-ccs.v6.4.0 --type='json' -p='[{"op":"replace", "path":"/spec/install/spec/deployments/0/spec/template/spec/containers/0/image","value":"icr.io/cpopen/ibm-cpd-ccs-operator@sha256:bcf4761c2b5131f743dd96ac512bc70e58657da89efaa154391af5ba9d3a2745"}]'
  2. Run the following command to remove the image digest updates from the Analytics Engine custom resource:
    oc patch AnalyticsEngine analyticsengine-sample --namespace ${PROJECT_CPD_INSTANCE} --type=json --patch '[{ "op": "remove", "path": "/spec/image_digests"}]'
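    To confirm that the overrides were removed, the same jsonpath check as in the patch procedure should now return no output, for example:
    oc get AnalyticsEngine analyticsengine-sample -n ${PROJECT_CPD_INSTANCE} \
        -o jsonpath='{.spec.image_digests}'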
  3. Wait for the ccs operator to complete the reconciliation. You can run the following command to monitor the reconciliation status:
    oc get CCS ccs-cr -n ${PROJECT_CPD_INSTANCE}
    After some time, the wkc-search pod in ${PROJECT_CPD_INSTANCE} will be up and running with the original wkc-search_master image.
  4. Wait for the Analytics Engine operator to complete reconciliation. You can run the following command to monitor the reconciliation status:
    oc get AnalyticsEngine analyticsengine-sample -n ${PROJECT_CPD_INSTANCE}
    After some time, the spark-hb-jkg and spark-hb-control-plane pods in ${PROJECT_CPD_INSTANCE} will be up and running with the original images.