IBM Support

In-place upgrade and migration of Watson Knowledge Catalog: Applying patches and toolkit to an existing Watson Knowledge Catalog 4.x installation

Preventive Service Planning


Abstract

This document lists the available patches for the migration of legacy features on Watson Knowledge Catalog. Only versions that include patches are shown.

Content

 
 
 

If you haven't started migration yet, make sure to download and install the latest migration toolkit. 

 
Upgrade from Cloud Pak for Data 4.6.x
 
 
Patch name: Legacy migration toolkit and IIS patches
Released on: 24 November 2025
Service assembly: wkc
Applies to service version: Watson Knowledge Catalog 4.6.x
Applies to platform version: Cloud Pak for Data 4.6.x
Description: This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x.
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment:
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. Alternatively, create an auth.json file that contains credentials for icr.io and your local private registry. For example:

    {
    "auths": {
    "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
    "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
    }
    }                                                                       

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.
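    The auth value in each entry is the base64 encoding of the id and key joined by a colon. As a sketch, assuming the standard entitled-registry user cp and a placeholder API key (substitute your own entitlement key):

    ```shell
    # Generate the base64-encoded auth value for auth.json.
    # "cp" is the user for the IBM entitled registry; MY_API_KEY is a
    # placeholder -- substitute your own entitlement API key.
    printf '%s' "cp:MY_API_KEY" | base64
    ```

    Paste the resulting string into the auth field for cp.icr.io, and repeat with the id and password for your local private registry.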

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the registry path of the images that this hotfix will replace, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> is the name of any pod that runs one of the images patched by this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Using the appropriate auth.json file, run skopeo copy to copy the patch images from the IBM production registry to the local private (OpenShift cluster) registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:6bb3dd97329667dde965f35d749acb60000a3b6381174671081058575d9fc49d \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:6bb3dd97329667dde965f35d749acb60000a3b6381174671081058575d9fc49d
        
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
    
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810 \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609 \
        docker://<local private registry>/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609
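    Since the skopeo copy invocations above differ only in the image reference, they can also be driven from a single list. A minimal dry-run sketch, with DEST and AUTHFILE as placeholders (remove the echo prefix to actually perform the copies):

    ```shell
    #!/bin/sh
    # Print one skopeo copy command per patch image (dry run).
    # DEST and AUTHFILE are placeholders; remove "echo" to copy for real.
    DEST="<local private registry>"
    AUTHFILE="<folder path>/auth.json"

    for ref in \
      "legacy-migration@sha256:6bb3dd97329667dde965f35d749acb60000a3b6381174671081058575d9fc49d" \
      "is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207" \
      "is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed" \
      "is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad" \
      "portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810" \
      "catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc" \
      "catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f" \
      "asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609"
    do
      echo skopeo copy --all --authfile "$AUTHFILE" \
        --dest-tls-verify=false --src-tls-verify=false \
        "docker://cp.icr.io/cp/cpd/$ref" \
        "docker://$DEST/cp/cpd/$ref"
    done
    ```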
                                                                                            
     
To complete the installation, follow the steps in the next section.

Apply patches required for migration
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207","tag_metadata":"b84-migration-b65"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed","tag_metadata":"b84-migration-b65"},"iis_services_image":{"name":"is-services-image@sha256","tag":"4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad","tag_metadata":"b84-migration-b65"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
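The reconciliation wait in steps 4 and 5 can be automated with a small polling helper. A sketch only: the "Completed" status string and the 30-second interval are assumptions, so adjust them to the status values your oc get output actually reports:

```shell
#!/bin/sh
# Poll a command until its output contains the expected string, or give
# up after a number of tries. The status value and interval below are
# assumptions -- adjust them to what your cluster actually reports.
wait_for_status() {
  cmd="$1"; want="$2"; tries="${3:-60}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # $cmd is intentionally unquoted so the command string word-splits.
    if $cmd | grep -q "$want"; then
      echo "status reached: $want"
      return 0
    fi
    i=$((i + 1))
    sleep 30
  done
  echo "timed out waiting for: $want" >&2
  return 1
}

# Example usage (namespace variable from the surrounding steps):
# wait_for_status "oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}" Completed
```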
 
Install the migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Only proceed to the section Upgrade the cluster to 4.8.x after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.x
 
Note: Do not revert the IIS image overrides in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If performing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):  
     

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810","catalog_api_aux_image":"sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f","catalog_api_image":"sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc","asset_files_api_image":"sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609"}}}'
     
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects,  
    run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
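    The list-then-delete steps above can also be done in one pipeline, assuming the pod names keep the ngp-projects-api prefix shown above. A sketch:

    ```shell
    # Restart every ngp-projects-api pod in one pipeline; the deployment
    # recreates each pod after deletion.
    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name --no-headers \
      | grep '^ngp-projects-api' \
      | xargs -r -n1 oc delete pod -n ${PROJECT_CPD_INST_OPERANDS}
    ```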
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you will need to tune your cluster based on its scale configuration (scaleConfig) size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. For Java heap, run the following:
    1. Change the Java maximum heap size of the iis-services pod by running:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the heap size to 16 GB for mid-size and large clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
        cpu: "4"
        memory: 8Gi


        To:

        limits:
        cpu: "8"
        memory: 16Gi


         

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
      cpu: "4"
      memory: 8Gi


      To:

      limits:
      cpu: "16"
      memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide |grep iis-services
  7. Make sure that the worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.  
    If either exceeds 80 percent, continue with the following steps.
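    The 80-percent check can be scripted by parsing the CPU% and MEMORY% columns of oc adm top nodes; a sketch, assuming the default output format:

    ```shell
    # Print any node whose CPU or memory usage exceeds 80%.
    # Default columns: NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
    oc adm top nodes | awk 'NR > 1 {
      cpu = $3; mem = $5
      sub(/%/, "", cpu); sub(/%/, "", mem)
      if (cpu + 0 > 80 || mem + 0 > 80)
        print $1 " cpu=" cpu "% mem=" mem "%"
    }'
    ```

    If the command prints nothing, every node is below the threshold and you can leave everything as is.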

  8. Choose the node with the most free memory and CPU. In this example, worker3 has the most free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all other nodes using the following command, except for the worker node with the free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This will schedule the iis-services pod on to worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
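Steps 8 through 10 above can be sketched as one small dry-run script. The target node, worker list, and pod name are placeholders from the example; remove the echo prefix to run the commands for real:

```shell
#!/bin/sh
# Move the iis-services pod to a chosen worker by cordoning all other
# workers, deleting the pod so it reschedules, then reversing the
# cordons. TARGET, WORKERS, and IIS_POD are placeholders; the leading
# "echo" makes this a dry run.
TARGET="worker3"
WORKERS="worker1 worker2 worker3 worker4"
IIS_POD="iis-services-xxxxx"

for node in $WORKERS; do
  [ "$node" = "$TARGET" ] || echo oc adm cordon "$node"
done
echo oc delete pod "$IIS_POD"          # pod reschedules onto TARGET
echo oc adm cordon "$TARGET"           # keep other pods off TARGET
for node in $WORKERS; do
  [ "$node" = "$TARGET" ] || echo oc adm uncordon "$node"
done
```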
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the portal-job-manager (PJM) resource limit through an RSI (Resource Specification Injection) patch.
    1. Create a file named specpatch.json with the following content. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory; otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. This call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.
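    Because a malformed specpatch.json makes the create-rsi-patch call fail, it is worth validating the file before running it. A quick check using python3 (jq works equally well), run from the directory that holds the file:

    ```shell
    # Validate that specpatch.json is well-formed JSON before passing it
    # to cpd-cli. json.tool exits non-zero if the file cannot be parsed.
    if python3 -m json.tool specpatch.json > /dev/null 2>&1; then
      echo "specpatch.json: valid JSON"
    else
      echo "specpatch.json: INVALID JSON" >&2
    fi
    ```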

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running                                                                                                                

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

      Limits:
        cpu:                2
        ephemeral-storage:  8Gi
        memory:             8Gi
      Requests:
        cpu:                30m
        ephemeral-storage:  10Mi
        memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

      catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        -Dfeature.fetch_stale_data_from_couch_db=true
      catalog_api_properties_enable_activity_tracker_publishing: "false"
      catalog_api_properties_enable_global_search_publishing: "false"
      catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
      catalog_api_properties_global_call_logs: "false"
      couchdb_search_resources:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: 250m
          memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

      wkc_data_rules_resources:
        limits:
          ephemeral-storage: 2Gi
 
For medium-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the portal-job-manager (PJM) resource limit through an RSI (Resource Specification Injection) patch.
    1. Create a file named specpatch.json with the following content. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory; otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
      --patch_type=rsi_pod_spec \
      --patch_name=pjm-scaling \
      --description="This is spec patch for scaling PJM" \
      --include_labels=app:portal-job-manager \
      --state=active \
      --spec_format=json \
      --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. This call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
                                                                                                                    
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                    
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr --type merge --patch '{"spec": {
    "bg_resources":{
         "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral-storage": "50Mi"}, 
         "limits":{"cpu": "2", "memory": "4Gi", "ephemeral-storage": "2Gi"}
    }
    }}'
     
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

      Limits:
        cpu:                2
        ephemeral-storage:  12Gi
        memory:             8Gi
      Requests:
        cpu:                30m
        ephemeral-storage:  10Mi
        memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

      catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        -Dfeature.fetch_stale_data_from_couch_db=true
      catalog_api_properties_enable_activity_tracker_publishing: "false"
      catalog_api_properties_enable_global_search_publishing: "false"
      catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
      catalog_api_properties_global_call_logs: "false"
      couchdb_search_resources:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: 250m
          memory: 256Mi
      dap_base_asset_files_resources:
        limits:
          cpu: "4"
          memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

      wkc_data_rules_resources:
        limits:
          ephemeral-storage: 2Gi
  6. Scale the PJM replicas down:
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or continue with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you will need to revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.  
    Note: Double-check whether any of the following parameters were already customized on the cluster. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch wkc wkc-cr --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
     
  5. Scale the portal-job-manager replicas back up by running:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed with the following steps. Note: The migration toolkit does not need to be reverted.
 
To revert the IIS image overrides, proceed with the following steps. ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_compute_image:
        name: is-en-compute-image@sha256
        tag: e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
        tag_metadata: b84-migration-b65
    iis_en_conductor_image:
        name: is-engine-image@sha256
        tag: 5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
        tag_metadata: b84-migration-b65
    iis_services_image:
        name: is-services-image@sha256
        tag: 4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
        tag_metadata: b84-migration-b65
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
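Waiting for reconciliation can be scripted as a generic polling loop rather than re-running the command by hand. This is an illustrative sketch: the helper name `retry_until` and the intervals are assumptions, not part of the official procedure, and in practice the condition would wrap the `oc get iis` check shown above.

```shell
# Poll a command until it succeeds, up to a fixed number of attempts.
# Hypothetical real usage (assumes the status column reports Completed):
#   retry_until 60 30 sh -c 'oc get iis iis-cr -n "$PROJECT_CPD_INST_OPERANDS" | grep -q Completed'
retry_until() {
  attempts="$1"; delay="$2"; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0   # condition met
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1       # gave up after all attempts
}
```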
 
Reverting the migration toolkit support image changes
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"},{"op":"remove","path":"/spec/image_digests/portal-job-manager_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
 
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, you need to run through the following two steps to re-sync the processes to ensure imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks which catalogs to sync. You can specify the catalog(s) that the migration assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
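The job-status check in step 1 can be wrapped in a tiny parsing helper. This is an illustrative sketch (the function name `completed_count` is hypothetical); the real input would come from the `oc get pod` command shown above.

```shell
# Count how many pods in the piped `oc get pod` output have reached Completed.
# Hypothetical real usage:
#   oc get pod -n "$WKC_NAMESPACE" | grep wkc-search | completed_count
completed_count() {
  grep -c 'Completed' || true   # grep -c still prints 0 on no match
}

# Demo with canned output in the `oc get pod` column format:
printf '%s\n' \
  'wkc-search-reindexing-job-abc12   0/1   Completed   0   5m' \
  'wkc-search-lineage-job-def34      1/1   Running     0   5m' |
  completed_count   # prints 1
```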
 
Upgrade from Cloud Pak for Data 4.6.x and 4.5.x
 
Patch nameLegacy migration toolkit and IIS patches
Released on 13 December 2024
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.5.x
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.5.x
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.5.x or 4.6.x to 4.7.x
Install instructions
Download patch legacy-migration-patch_504020.  

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
        "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.
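The base64 value in the auth field is simply the id:apikey pair encoded. A minimal sketch, assuming `cp` as the user ID for cp.icr.io and a placeholder API key (`encode_auth` is an illustrative helper name, not part of the official procedure):

```shell
# Produce the base64 "auth" value for auth.json from an id:apikey pair.
# MY_API_KEY is a placeholder; for cp.icr.io the user ID is "cp".
encode_auth() {
  printf '%s' "$1" | base64   # printf avoids encoding a trailing newline
}

encode_auth "cp:MY_API_KEY"   # prints Y3A6TVlfQVBJX0tFWQ==
```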

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Using the appropriate auth.json file, copy the patch images from the IBM production registry to the local private (OpenShift cluster) registry with skopeo:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:da2a9f0da037f6eaa2135c7e6ba18c1b1de7586bc749756ecf60a9d743b6dab9 \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:da2a9f0da037f6eaa2135c7e6ba18c1b1de7586bc749756ecf60a9d743b6dab9
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:4b2680a4ad80b43b8873706bb49c1f8256513aeb28b1d20f7a6c0854b2446ea9 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:4b2680a4ad80b43b8873706bb49c1f8256513aeb28b1d20f7a6c0854b2446ea9
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49 \
        docker://<local private registry>/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4 \
        docker://<local private registry>/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wkc-data-rules@sha256:08134671870b684623a937707d64597b86bb84d39084aabd06069763817bf866 \
        docker://<local private registry>/cp/cpd/wkc-data-rules@sha256:08134671870b684623a937707d64597b86bb84d39084aabd06069763817bf866
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/asset-files-api@sha256:9a335c0c7e571f3ab37e48298dbb287fd9640aa0aeddeb1a6115048df948a4c1 \
        docker://<local private registry>/cp/cpd/asset-files-api@sha256:9a335c0c7e571f3ab37e48298dbb287fd9640aa0aeddeb1a6115048df948a4c1
     
To complete the installation, follow the steps in the next section.
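The `skopeo copy` invocations above differ only in the image reference, so they can be driven from a small helper. This is an illustrative dry-run sketch, not part of the official procedure: `mirror_one`, the example registry hostname, and the auth file path are assumptions, and the `echo` prefix only prints the command.

```shell
# Dry-run sketch: print the skopeo command for one patch image.
# LOCAL_REGISTRY and AUTHFILE are placeholders for your environment;
# remove the leading `echo` inside mirror_one to actually execute the copy.
LOCAL_REGISTRY="registry.example.com:5000"   # hypothetical local registry
AUTHFILE="auth.json"                         # hypothetical auth file path

mirror_one() {
  ref="$1"   # e.g. is-services-image@sha256:<digest>
  echo skopeo copy --all --authfile "$AUTHFILE" \
    --dest-tls-verify=false --src-tls-verify=false \
    "docker://cp.icr.io/cp/cpd/${ref}" \
    "docker://${LOCAL_REGISTRY}/cp/cpd/${ref}"
}

# One of the patch digests from the list above:
mirror_one "is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad"
```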

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.
 
To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.5.x or 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207","tag_metadata":"b84-migration-b65"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed","tag_metadata":"b84-migration-b65"},"iis_services_image":{"name":"is-services-image@sha256","tag":"4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad","tag_metadata":"b84-migration-b65"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.5.x or 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.5.x or 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Only proceed to the section Upgrade the cluster to 4.7.x after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.7.x
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.7.x, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy-migration image has been downloaded to the local registry. See the air-gapped download section earlier in this document.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:da2a9f0da037f6eaa2135c7e6ba18c1b1de7586bc749756ecf60a9d743b6dab9","catalog_api_aux_image":"sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7","catalog_api_image":"sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce", "asset_files_api_image":"sha256:9a335c0c7e571f3ab37e48298dbb287fd9640aa0aeddeb1a6115048df948a4c1"}}}'
     
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:4b2680a4ad80b43b8873706bb49c1f8256513aeb28b1d20f7a6c0854b2446ea9","wdp_profiling_image":"sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49","wkc_mde_service_manager_image":"sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4","wkc_data_rules_image":"sha256:08134671870b684623a937707d64597b86bb84d39084aabd06069763817bf866"}}}'
     
  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects,
    run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the wkc operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration, wdp-profiling, wkc-mde-service-manager, and wkc-data-rules pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
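The 200-project threshold in step 4 can be checked with simple shell arithmetic before deciding whether to patch the limit. This is an illustrative helper; the function name is hypothetical and the 200 default comes from the step above.

```shell
# Decide whether projects_created_per_user_limit needs raising; the
# default limit is 200 (per step 4 above).
# Usage: needs_limit_increase <dq_projects_on_source> <projects_owned_on_target>
needs_limit_increase() {
  total=$(( $1 + $2 ))
  if [ "$total" -gt 200 ]; then
    echo "yes: set projects_created_per_user_limit to at least $total"
  else
    echo "no: the default limit of 200 is sufficient"
  fi
}

needs_limit_increase 150 80   # prints: yes: set projects_created_per_user_limit to at least 230
```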
Applying a hotfix

Applying the ZEN hotfix  

Note: Applying this hotfix is only intended for users upgrading to Version 4.7.x.
 
The following section describes how to apply the RSI patch that is needed later in the migration toolkit and patch process.
 
Applying the ZEN hotfix with the online IBM registry

For clusters configured to use the online IBM production registry, follow the steps below:
 
  1. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  2. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  3. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
        --namespace ${PROJECT_CPD_INST_OPERATORS} \
        --type='json' \
        --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
 
Applying the ZEN hotfix with a private container registry
 
For air-gapped clusters that need to copy the amd64 image to the local private registry, follow the steps below:
 
  1. Export the following variables, setting the values to the credentials used to access icr.io and your local private registry:

    export IBM_ENTITLEMENT_KEY=<IBM Entitlement API Key>
    export PRIVATE_REGISTRY_LOCATION=<Local private registry hostname>
    export PRIVATE_REGISTRY_PUSH_USER=<Private Registry login username>
    export PRIVATE_REGISTRY_PUSH_PASSWORD=<Private Registry login password>
  2. Using the exported environment variables created above, copy the patch image from the IBM production registry to the OpenShift cluster registry:

    skopeo login cp.icr.io -u cp -p ${IBM_ENTITLEMENT_KEY}
    skopeo login ${PRIVATE_REGISTRY_LOCATION} -u ${PRIVATE_REGISTRY_PUSH_USER} -p ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    skopeo copy --all --dest-tls-verify=false --src-tls-verify=false \
        docker://icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50 \
        docker://${PRIVATE_REGISTRY_LOCATION}/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50
  3. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  4. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  5. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
        --namespace ${PROJECT_CPD_INST_OPERATORS} \
        --type='json' \
        --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
 
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you will need to tune your cluster based on the scaleconfig size.
 

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. For Java heap, run the following:
    1. Change the Java heap max size of the iis-services pod by editing the iis-server config map:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the heap size to 16 GB for mid-size and large clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi


         

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide |grep iis-services
  7. Make sure that the worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.
    If CPU or memory usage is above 80 percent, continue with the following steps.

  8. Choose one node that has more free memory and CPU. In this example, worker3 has the most free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all other nodes using the following command, except for the worker node with the free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This schedules the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
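The cordon bookkeeping in steps 8 to 10 can be made less error-prone with a helper that lists every worker except the target node. This is an illustrative sketch: the function name and the worker names are hypothetical, and in practice the output would be fed to `oc adm cordon` (then to `oc adm uncordon` with the roles reversed).

```shell
# Print every node name except the target node.
# Hypothetical real usage:
#   for n in $(nodes_to_cordon worker3 worker1 worker2 worker3 worker4); do
#     oc adm cordon "$n"
#   done
nodes_to_cordon() {
  target="$1"; shift
  for n in "$@"; do
    [ "$n" = "$target" ] || echo "$n"
  done
}

nodes_to_cordon worker3 worker1 worker2 worker3 worker4
# prints worker1, worker2, worker4 (one per line)
```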
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, you can stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.
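Before running create-rsi-patch, it can help to sanity-check that specpatch.json parses as a valid JSON Patch (RFC 6902) document, since a malformed file is a common cause of failed patches. The following Python sketch is a hypothetical helper, not part of the migration toolkit:

```python
import json

# RFC 6902 operations; the RSI pod-spec patch above uses only "replace".
ALLOWED_OPS = {"add", "remove", "replace", "move", "copy", "test"}

def validate_spec_patch(path):
    """Parse a JSON Patch file and check each entry has a valid op and path."""
    with open(path) as f:
        patch = json.load(f)
    if not isinstance(patch, list):
        raise ValueError("a JSON Patch document must be a top-level array")
    for entry in patch:
        if entry.get("op") not in ALLOWED_OPS:
            raise ValueError(f"unknown op: {entry.get('op')!r}")
        if not str(entry.get("path", "")).startswith("/"):
            raise ValueError(f"path must start with '/': {entry.get('path')!r}")
        if entry["op"] in ("add", "replace", "test") and "value" not in entry:
            raise ValueError(f"'{entry['op']}' requires a 'value'")
    return patch

# Example: write the small-cluster patch content and validate it.
with open("specpatch.json", "w") as f:
    f.write('[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},'
            '{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},'
            '{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]')
print(len(validate_spec_patch("specpatch.json")))  # 3
```

This catches truncated JSON or a missing leading "/" in a path before cpd-cli ever sees the file.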

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
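The merge patch above is long and easy to mistype in a shell. One way to avoid quoting mistakes is to build the payload programmatically and pass the serialized JSON to oc. This is an optional sketch, not part of the documented procedure; the content mirrors the `oc patch` command above:

```python
import json

# Build the CCS CR merge patch from step 2 as a Python dict so it can be
# reviewed and serialized without shell-quoting errors.
ccs_patch = {
    "spec": {
        "catalog_api_jvm_args_extras": (
            "-Dfeature.disable_lineage_publishing=true "
            "-Dfeature.disable_rabbitmq_publishing=true "
            "-Dfeature.fetch_stale_data_from_couch_db=true"
        ),
        "catalog_api_properties_enable_activity_tracker_publishing": "false",
        "catalog_api_properties_enable_global_search_publishing": "false",
        "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false",
        "catalog_api_properties_global_call_logs": "false",
        "couchdb_search_resources": {
            "requests": {"cpu": "250m", "memory": "256Mi"},
            "limits": {"cpu": "2", "memory": "3Gi"},
        },
    }
}

# Emit compact JSON, e.g. for: oc patch ... --patch "$(python3 build_patch.py)"
print(json.dumps(ccs_patch, separators=(",", ":")))
```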
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Apply a router issue fix 
    1. Apply the router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace $WKC_NAMESPACE with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
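Because the manifest contains a $WKC_NAMESPACE placeholder, the substitution can also be done programmatically before `oc apply`, similar to what `envsubst` does. A minimal sketch (an optional helper, not part of the documented procedure); passing an explicit mapping rather than the whole environment keeps the nginx variables such as $host and $nsdomain untouched:

```python
import string

# Substitute $WKC_NAMESPACE in a manifest template before `oc apply`.
# safe_substitute leaves any placeholder not in the mapping as-is, so the
# nginx variables ($host, $nsdomain) in the ZenExtension survive unchanged.
def render(template_text, values):
    return string.Template(template_text).safe_substitute(values)

snippet = "metadata:\n  name: wkc-routes-5588\n  namespace: $WKC_NAMESPACE\n"
print(render(snippet, {"WKC_NAMESPACE": "wkc"}))
```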
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "8Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section in the output of oc get ccs ccs-cr -o yaml:

      catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        -Dfeature.fetch_stale_data_from_couch_db=true
      catalog_api_properties_enable_activity_tracker_publishing: "false"
      catalog_api_properties_enable_global_search_publishing: "false"
      catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
      catalog_api_properties_global_call_logs: "false"
      couchdb_search_resources:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: 250m
          memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section in the output of oc get wkc wkc-cr -o yaml:

      wkc_data_rules_resources:
        limits:
          ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check that the Status has changed from Inprogress to Completed.
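The confirmation in step 5 can also be scripted. For example, the JSON that get-rsi-patch-info returns can be parsed to verify the expected limits instead of reading it by eye. A hypothetical sketch, using a sample shaped like the example output above:

```python
import json

# Pull the replace-op limits out of `get-rsi-patch-info --all` JSON so the
# RSI check in step 5 can be asserted programmatically.
def patched_limits(rsi_info_json):
    info = json.loads(rsi_info_json)
    spec = info[0]["patch_info"][0]["details"]["patch_spec"]
    return {op["path"].rsplit("/", 1)[-1]: op["value"]
            for op in spec if op["op"] == "replace"}

sample = json.dumps([{
    "name": "rsi-pjm-scaling",
    "patch_info": [{"details": {"patch_spec": [
        {"op": "replace", "path": "/spec/containers/0/resources/limits/cpu", "value": "2"},
        {"op": "replace", "path": "/spec/containers/0/resources/limits/memory", "value": "8Gi"},
        {"op": "replace", "path": "/spec/containers/0/resources/limits/ephemeral-storage", "value": "8Gi"},
    ]}}],
}])
print(patched_limits(sample))  # {'cpu': '2', 'memory': '8Gi', 'ephemeral-storage': '8Gi'}
```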
 
For medium-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set with the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the olm-utils-workspace/work/rsi/ directory first if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
                                                                                                                                                  
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, you can stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Apply a router issue fix 
    1. Apply a router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace <$WKC_NAMESPACE> with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "12Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

      catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
      catalog_api_properties_enable_activity_tracker_publishing: "false"
      catalog_api_properties_enable_global_search_publishing: "false"
      catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
      catalog_api_properties_global_call_logs: "false"
      couchdb_search_resources:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: 250m
          memory: 256Mi
      dap_base_asset_files_resources:
        limits:
          cpu: "4"
          memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the output of oc get wkc wkc-cr -o yaml, under the spec section:

      wkc_data_rules_resources:
        limits:
          ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check that the Status has changed from Inprogress to Completed.
  6. Scale the PJM replicas down:
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
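The CR checks in step 5 can also be run non-interactively. The following is a minimal sketch, not part of the patch toolkit: a hypothetical helper that prints the patched spec fields (field paths taken from the spec fragments shown above) so they can be verified in one shot.

```shell
# Hypothetical helper: print the patched CCS/WKC spec values for a quick
# eyeball check. Field paths match the spec fragments shown in step 5.
check_migration_patches() {
  local ns="$1"
  oc get ccs ccs-cr -n "$ns" -o jsonpath='{.spec.catalog_api_jvm_args_extras}{"\n"}'
  oc get ccs ccs-cr -n "$ns" -o jsonpath='{.spec.couchdb_search_resources.limits.cpu}{"\n"}'
  oc get wkc wkc-cr -n "$ns" -o jsonpath='{.spec.wkc_data_rules_resources.limits.ephemeral-storage}{"\n"}'
}
# Usage: check_migration_patches "${WKC_NAMESPACE}"
```

Empty output for any of the three lines means the corresponding patch is not in place.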
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.5 or 4.6 patches if you are still preparing and testing the migration.  
Or continue with the steps outlined in Applying Version 4.7 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for migration

Note: After the migration completes, or if the migration fails, you will need to revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, you will need to run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 extension must be removed before the next update.
Reverting changes for medium-scale clusters
After the migration, you will need to run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. The effect of this command takes place after CCS is taken out of maintenance mode in step 6.
    Note: Check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 extension must be removed before the next update.
  5. Scale the PJM replicas back up by scaling the portal-job-manager deployment:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
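The maintenance-mode toggle appears in both the tuning and revert procedures. As a convenience, the oc patch payload shown above can be wrapped in a small helper; this is a sketch, not a shipped script.

```shell
# Wrapper around the maintenance-mode patch used in the steps above.
# Pass "true" to put CCS into maintenance mode, "false" to take it out.
set_ccs_maintenance() {
  local ns="$1" flag="$2"
  oc patch -n "$ns" ccs ccs-cr --type merge \
    --patch "{\"spec\": {\"ignoreForMaintenance\": ${flag}}}"
}
# Usage: set_ccs_maintenance "${WKC_NAMESPACE}" false
```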
     
 
Reverting the IIS image changes  

Follow these steps to revert the IIS image patches:
 
If needed, the IIS image overrides can be removed as described below; the migration toolkit itself does not need to be reverted.
 
Note that ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      tag_metadata: b84-migration-b65
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
      tag_metadata: b84-migration-b65
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      tag_metadata: b84-migration-b65
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/portal_job_manager_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"}]'
     
  2. Run the following command to remove image updates from the WKC custom resource:

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove","path": "/spec/image_digests/wkc_data_rules_image"},{ "op": "remove","path": "/spec/image_digests/wdp_profiling_image"},{ "op": "remove","path": "/spec/image_digests/wkc_mde_service_manager_image"}]'
  3. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager, and asset-file pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
  4. Wait for the WKC operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the wkc-data-rules, wdp-profiling, and wkc-mde-service-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
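To confirm that pods came back with the original images after the rollbacks above, you can list the image references per pod. The following is an illustrative sketch; the label selector in the usage example is an assumption and may differ on your cluster.

```shell
# Print "<pod-name>  <image(s)>" for pods matching a label selector, so the
# post-rollback image digests can be compared against the originals.
pod_images() {
  local ns="$1" selector="$2"
  oc get pods -n "$ns" -l "$selector" \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .spec.containers[*]}{.image}{" "}{end}{"\n"}{end}'
}
# Example (label selector is an assumption; adjust to your deployments):
# pod_images "${PROJECT_CPD_INST_OPERANDS}" app=wdp-profiling
```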
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.7.x clusters, complete the following two steps to re-sync processes and ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take a while to complete, depending on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. You can specify the catalog(s) that the migrated assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search

      or:

      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
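Rather than polling the pod list by hand, the two search jobs from step 1 can be waited on with oc wait. This is a sketch, assuming the job names shown above; large migrations may need a longer timeout.

```shell
# Block until both re-sync jobs (names from step 1) report completion,
# or fail once the timeout expires.
wait_for_resync_jobs() {
  local ns="$1" timeout="${2:-6h}"
  oc wait --for=condition=complete "job/wkc-search-reindexing-job" -n "$ns" --timeout="$timeout" &&
  oc wait --for=condition=complete "job/wkc-search-lineage-job" -n "$ns" --timeout="$timeout"
}
# Usage: wait_for_resync_jobs "${WKC_NAMESPACE}"
```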
 
 
 
 
 
Earlier toolkits
 
 
Use the following links to locate the patches for each Cloud Pak for Data version:
 
 
 
 
 
Patch name: Legacy migration toolkit and IIS patches
Released on: 17 September 2025
Service assembly: wkc
Applies to service version: Watson Knowledge Catalog 4.6.x
Applies to platform version: Cloud Pak for Data 4.6.x
Description: This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x.
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment:
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path in the local private registry to which the hotfix images should be copied, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> is the name of a pod running any of the images that will be patched by this hotfix.
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Using the appropriate auth.json file, run the following skopeo commands to copy the patch images from the IBM production registry to the local private (OpenShift cluster) registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:02d84349d168967f01c100846c13fab6590fa75d5aed99aa46f4b8dd3a0861e3 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:02d84349d168967f01c100846c13fab6590fa75d5aed99aa46f4b8dd3a0861e3
        
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
    
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810 \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609 \
        docker://<local private registry>/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609
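The skopeo invocations above differ only in the image reference, so they can also be driven from a small helper. The following is a sketch, not a supplied script: the registry host and auth file path are placeholders you must set, and the helper is kept as a dry run (it prints the commands) so you can review them before removing the echo.

```shell
# Print one skopeo copy command per image reference. Placeholder values:
AUTHFILE="${HOME}/.airgap/auth.json"          # placeholder path
PRIVATE_REGISTRY="registry.example.com:5000"  # placeholder host

mirror_image() {
  local ref="$1"
  # Dry run: remove the echo to execute the copy for real.
  echo skopeo copy --all --authfile "$AUTHFILE" \
    --dest-tls-verify=false --src-tls-verify=false \
    "docker://cp.icr.io/cp/cpd/${ref}" \
    "docker://${PRIVATE_REGISTRY}/cp/cpd/${ref}"
}

# Image references exactly as listed in the steps above; repeat for the rest.
mirror_image "legacy-migration@sha256:02d84349d168967f01c100846c13fab6590fa75d5aed99aa46f4b8dd3a0861e3"
mirror_image "is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207"
```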
                                                                                            
     
To complete the installation, follow the steps in the next section.

Apply patches required for migration
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207","tag_metadata":"b84-migration-b65"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed","tag_metadata":"b84-migration-b65"},"iis_services_image":{"name":"is-services-image@sha256","tag":"4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad","tag_metadata":"b84-migration-b65"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
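Rather than re-running the status command by hand, the wait in step 4 can be automated with a small polling helper. This is a sketch, not part of the official procedure: the status command and the completion phrase you poll for are assumptions, so adjust them to match the status text your cluster actually reports.

```shell
# Sketch: poll a status command until its output contains a phrase, or give up.
# Example call (assumption -- adjust the phrase to your cluster's status text):
#   wait_for_phrase "oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}" Completed 60
wait_for_phrase() {
  cmd=$1; phrase=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # Re-run the status command and look for the expected phrase.
    if $cmd 2>/dev/null | grep -q "$phrase"; then
      echo "reconciliation reported: $phrase"
      return 0
    fi
    i=$((i+1)); sleep 10
  done
  echo "timed out waiting for: $phrase"
  return 1
}
```

The helper exits non-zero on timeout, so it can gate the next step of a larger script.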
 
Install the migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Only proceed to the section Upgrade the cluster to 4.8.x after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.x
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):  
     

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810","catalog_api_aux_image":"sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f","catalog_api_image":"sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc","asset_files_api_image":"sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609"}}}'
     
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200, run the following command to increase the limit, where <project_limit> is the new intended project limit:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
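The get-then-delete steps above can be combined into a single pipeline. A sketch, under the assumption that deleting the pods (which their owning deployment then recreates) is the intended restart; the live command is shown in a comment, and the function below echoes what it would do, with stubbed pod names, so you can review before executing.

```shell
# Sketch: restart the ngp-projects-api pods in one pipeline instead of
# deleting each one by hand. On a live cluster (assumption: the owning
# deployment recreates each deleted pod, which is what restarts them):
#   oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o name \
#     | grep ngp-projects-api \
#     | xargs -r oc delete -n ${PROJECT_CPD_INST_OPERANDS}
# Dry-run illustration with hypothetical pod names:
restart_pods() {
  for p in "$@"; do
    echo "would run: oc delete pod $p"
  done
}
restart_pods ngp-projects-api-abc123 ngp-projects-api-def456
```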
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scale configuration size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. For Java heap, run the following:
    1. Change the Java heap maximum size for the iis-services pod:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the maximum heap size to 16 GB for mid-size and large-size clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.  
    If either exceeds 80 percent, continue with the following steps.

  8. Choose the node with the most free memory and CPU. In this example, worker3 has the most free CPU and memory, and the iis-services pod is currently on worker4:
    1. Cordon all other nodes using the following command, except for the worker node with the free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This will schedule the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
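The cordon, delete, and uncordon sequence in steps 8 through 10 can be scripted. This is a dry-run sketch that assumes the example node names worker1 through worker4; every action is echoed rather than executed so you can review it first (drop the echo prefixes to run it for real).

```shell
# Dry-run sketch of steps 8-10: steer the iis-services pod onto the target
# node, then restore scheduling. Node names are the example's assumptions.
TARGET=worker3
NODES="worker1 worker2 worker3 worker4"
# Step 8: cordon every worker except the target.
for node in $NODES; do
  [ "$node" = "$TARGET" ] || echo oc adm cordon "$node"
done
# Step 9: delete the pod so it reschedules onto the only open node.
echo oc delete pod iis-services-xxxxx
# Step 10: cordon the target and reopen the other workers.
echo oc adm cordon "$TARGET"
for node in $NODES; do
  [ "$node" = "$TARGET" ] || echo oc adm uncordon "$node"
done
```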
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the PJM (portal-job-manager) resource limit through an RSI patch:
    1. Create a file named specpatch.json with the following content. If $CPD_CLI_MANAGE_WORKSPACE is defined, save the file under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory; otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. The above call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running                                                                                                                

      You can also double check the portal-job-manager pod (not deployment), and make sure new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

      Limits:
        cpu:                2
        ephemeral-storage:  8Gi
        memory:             8Gi
      Requests:
        cpu:                30m
        ephemeral-storage:  10Mi
        memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

      catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        -Dfeature.fetch_stale_data_from_couch_db=true
      catalog_api_properties_enable_activity_tracker_publishing: "false"
      catalog_api_properties_enable_global_search_publishing: "false"
      catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
      catalog_api_properties_global_call_logs: "false"
      couchdb_search_resources:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: 250m
          memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

      wkc_data_rules_resources:
        limits:
          ephemeral-storage: 2Gi
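Before running create-rsi-patch in step 1, it can help to confirm that specpatch.json is valid JSON, since an invalid patch file fails later with a less obvious error. A sketch, assuming python3 is available on the workstation; the file path below is illustrative.

```shell
# Sketch: write the small-cluster specpatch.json and confirm it parses as
# JSON before handing it to cpd-cli. The path here is illustrative; save
# the real file under the rsi workspace directory described in step 1.
SPEC="${TMPDIR:-/tmp}/specpatch.json"
cat > "$SPEC" <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF
if python3 -m json.tool "$SPEC" > /dev/null 2>&1; then
  echo "specpatch.json is valid JSON"
else
  echo "specpatch.json is NOT valid JSON" >&2
fi
```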
 
For medium-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the PJM (portal-job-manager) resource limit through an RSI patch:
    1. Create a file named specpatch.json with the following content. If $CPD_CLI_MANAGE_WORKSPACE is defined, save the file under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory; otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
                                                                                                                                      
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
      --patch_type=rsi_pod_spec \
      --patch_name=pjm-scaling \
      --description="This is spec patch for scaling PJM" \
      --include_labels=app:portal-job-manager \
      --state=active \
      --spec_format=json \
      --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. The above call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
                                                                                                                    
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                    
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr -n ${WKC_NAMESPACE} --type merge --patch '{"spec": {
    "bg_resources":{
         "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral-storage": "50Mi"}, 
         "limits":{"cpu": "2", "memory": "4Gi", "ephemeral-storage": "2Gi"}
    }
    }}'
     
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double check the portal-job-manager pod (not deployment), and make sure new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

      Limits:
        cpu:                2
        ephemeral-storage:  12Gi
        memory:             8Gi
      Requests:
        cpu:                30m
        ephemeral-storage:  10Mi
        memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

      catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        -Dfeature.fetch_stale_data_from_couch_db=true
      catalog_api_properties_enable_activity_tracker_publishing: "false"
      catalog_api_properties_enable_global_search_publishing: "false"
      catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
      catalog_api_properties_global_call_logs: "false"
      couchdb_search_resources:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: 250m
          memory: 256Mi
      dap_base_asset_files_resources:
        limits:
          cpu: "4"
          memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

      wkc_data_rules_resources:
        limits:
          ephemeral-storage: 2Gi
  6. Scale the PJM replicas down:
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale the replicas down by first scaling down the portal-job-manager:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.  
    Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch wkc wkc-cr -n ${WKC_NAMESPACE} --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
     
  5. Scale the PJM replicas back up by first scaling up the portal-job-manager by running:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, revert the IIS image overrides with the following steps. The migration toolkit itself does not need to be reverted. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
      tag_metadata: b84-migration-b65
    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: 5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      tag_metadata: b84-migration-b65
    iis_services_image:
      name: is-services-image@sha256
      tag: 4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      tag_metadata: b84-migration-b65
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"},{"op":"remove","path":"/spec/image_digests/portal_job_manager_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
 
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, run the following two steps to re-sync processes and ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete, depending on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. You can specify the catalogs that the migrated assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search

      or

      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
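The wait in step 1.4 can also be scripted. A sketch: the check function below is a stub that stands in for the live pod-listing command (shown in the comment), so the polling logic can be reviewed without a cluster.

```shell
# Sketch: poll until a re-sync job pod reaches the Completed state.
# 'check' is a stub for illustration; on a live cluster it would be:
#   oc get pod -n $WKC_NAMESPACE | grep wkc-search
check() { echo "wkc-search-reindexing-job-abc12   0/1   Completed   0   10m"; }
until check | grep -q Completed; do
  sleep 30
done
echo "re-sync job reported Completed"
```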
 
 
 
 
Patch nameLegacy migration toolkit and IIS patches
Released on01 August 2025
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
 This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment:
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
    "auths": {
    "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
    "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
    }
    }                                                                       

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    where <hotfix image pod> is the pod name for any of the images that will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command with the appropriate auth.json file to copy the patch images from the IBM production registry to the local private (OpenShift cluster) registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:4804b5ea8dd4d93ef6a68b3ee0e0a1fe7f155935968fd141aa4ceb4451a096c0 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:4804b5ea8dd4d93ef6a68b3ee0e0a1fe7f155935968fd141aa4ceb4451a096c0
        
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
    
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810 \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f
                                                                                            
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609 \
        docker://<local private registry>/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609
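
The five copies above all follow the same pattern, so a small loop can generate them. The following is a dry-run sketch: it prints each skopeo command instead of executing it (set the auth file and registry for your environment, then pipe the output to sh, or remove the echo, to actually copy the images):

```shell
# Dry-run generator for the skopeo copy commands above.
# The auth file path and local registry are placeholders you must set.
mirror_images() {
  local auth_file="$1" local_registry="$2"; shift 2
  for ref in "$@"; do
    # Each image is copied by digest from the IBM registry to the local mirror.
    echo skopeo copy --all --authfile "$auth_file" \
      --dest-tls-verify=false --src-tls-verify=false \
      "docker://cp.icr.io/cp/cpd/$ref" \
      "docker://$local_registry/cp/cpd/$ref"
  done
}

mirror_images "<folder path>/auth.json" "<local private registry>" \
  "is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad" \
  "portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810" \
  "catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc" \
  "catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f" \
  "asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609"
```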
                                                                                            
     
To complete the installation, follow the steps in the next section.

Apply patches required for migration
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207","tag_metadata":"b84-migration-b65"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed","tag_metadata":"b84-migration-b65"},"iis_services_image":{"name":"is-services-image@sha256","tag":"4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad","tag_metadata":"b84-migration-b65"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-services=cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -n ${PROJECT_CPD_INST_OPERANDS} -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh -n ${PROJECT_CPD_INST_OPERANDS} is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
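
Rather than re-running the status command by hand, the wait in steps 4 and 5 can be scripted with a small polling helper. This is a sketch; the exact completed-status string printed by `oc get iis iis-cr` is an assumption, so check the STATUS column your cluster actually reports before relying on a pattern:

```shell
# Re-run a status command until its output matches a pattern, or give up.
# cmd and pattern are caller-supplied; tries/interval default to 120 x 30s.
wait_for_status() {
  local cmd="$1" pattern="$2" tries="${3:-120}" interval="${4:-30}"
  local i
  for i in $(seq 1 "$tries"); do
    if eval "$cmd" 2>/dev/null | grep -q "$pattern"; then
      echo "matched: $pattern"
      return 0
    fi
    sleep "$interval"
  done
  echo "timed out waiting for: $pattern" >&2
  return 1
}

# Example (hypothetical status value -- verify against your cluster):
#   wait_for_status "oc get iis iis-cr -n \$PROJECT_CPD_INST_OPERANDS" "Completed"
```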
 
Install the migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Proceed to the section Upgrade the cluster to 4.8.x only after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.x
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If performing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):  
     

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810","catalog_api_aux_image":"sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f","catalog_api_image":"sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc","asset_files_api_image":"sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609"}}}'
     
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
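
The per-pod deletes in step 5 can be combined into one pipeline. This is a dry-run sketch: the helper reads pod names on stdin and prints the corresponding delete commands (pipe its output to sh, or remove the echo inside, to actually restart the pods):

```shell
# Print (dry run) one "oc delete pod" command per pod name read on stdin.
# Falls back to namespace "wkc" if PROJECT_CPD_INST_OPERANDS is unset.
restart_pods() {
  xargs -r -n1 echo oc delete pod -n "${PROJECT_CPD_INST_OPERANDS:-wkc}"
}

# Usage against the cluster:
#   oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name \
#     | grep ngp-projects-api | restart_pods
```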
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scale configuration size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. For Java heap, run the following:
    1. Change the java heap max size of the iis-services pod by:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m, which sets the heap to 16 GB for mid-size and large-size clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
        cpu: "4"
        memory: 8Gi


        To:

        limits:
        cpu: "8"
        memory: 16Gi


         

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
      cpu: "4"
      memory: 8Gi


      To:

      limits:
      cpu: "16"
      memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that the worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.  
    If either CPU or memory usage is above 80 percent, continue with the following steps.

  8. Choose the node that has the most free memory and CPU. In this example, worker3 has the most free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all other nodes using the following command, except for the worker node with the free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This will schedule the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
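
The cordon/uncordon sequence in steps 8 and 10 can be scripted so that no node is missed. A dry-run sketch (worker names are placeholders from the example above; remove the echo to run the commands for real):

```shell
# Print the cordon commands for every worker except the target node.
cordon_all_except() {
  local target="$1" node; shift
  for node in "$@"; do
    # Skip the node that should remain schedulable.
    [ "$node" = "$target" ] && continue
    echo oc adm cordon "$node"
  done
}

cordon_all_except worker3 worker1 worker2 worker3 worker4
```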
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the PJM (portal-job-manager) resource limit through an RSI patch:
    1. Create a file named specpatch.json and copy the following content into the specpatch.json file. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory, otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. The above call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running                                                                                                                

      You can also double check the portal-job-manager pod (not deployment), and make sure new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limitations are set:

      Limits:
        cpu:                2
        ephemeral-storage:  8Gi
        memory:             8Gi
      Requests:
        cpu:                30m
        ephemeral-storage:  10Mi
        memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

      catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        -Dfeature.fetch_stale_data_from_couch_db=true
      catalog_api_properties_enable_activity_tracker_publishing: "false"
      catalog_api_properties_enable_global_search_publishing: "false"
      catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
      catalog_api_properties_global_call_logs: "false"
      couchdb_search_resources:
       limits:
        cpu: "2"
        memory: 3Gi
       requests:
        cpu: 250m
        memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

      wkc_data_rules_resources:
        limits:
          ephemeral-storage: 2Gi
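
The specpatch.json file from step 1 can be written and sanity-checked in one pass. A sketch for the small-cluster values (the workspace path logic mirrors the note in step 1; python3 is used only as a JSON validator):

```shell
# Write the small-cluster RSI spec patch and verify it parses as JSON.
RSI_DIR="${CPD_CLI_MANAGE_WORKSPACE:-cpd-cli-workspace/olm-utils-workspace}/work/rsi"
mkdir -p "$RSI_DIR"
cat > "$RSI_DIR/specpatch.json" <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF
python3 -m json.tool "$RSI_DIR/specpatch.json" >/dev/null && echo "specpatch.json is valid JSON"
```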
 
For medium-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the PJM (portal-job-manager) resource limit through an RSI patch:
    1. Create a file named specpatch.json and copy the following content into the specpatch.json file. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory, otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
                                                                                                                                      
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
      --patch_type=rsi_pod_spec \
      --patch_name=pjm-scaling \
      --description="This is spec patch for scaling PJM" \
      --include_labels=app:portal-job-manager \
      --state=active \
      --spec_format=json \
      --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. The above call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
                                                                                                                    
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                    
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr --type merge --patch '{"spec": {
    "bg_resources":{
         "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral-storage": "50Mi"}, 
         "limits":{"cpu": "2", "memory": "4Gi", "ephemeral-storage": "2Gi"}
    }
    }}'
     
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double check the portal-job-manager pod (not deployment), and make sure new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limitations are set:

      Limits:
        cpu:                2
        ephemeral-storage:  12Gi
        memory:             8Gi
      Requests:
        cpu:                30m
        ephemeral-storage:  10Mi
        memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

      catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        -Dfeature.fetch_stale_data_from_couch_db=true
      catalog_api_properties_enable_activity_tracker_publishing: "false"
      catalog_api_properties_enable_global_search_publishing: "false"
      catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
      catalog_api_properties_global_call_logs: "false"
      couchdb_search_resources:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: 250m
          memory: 256Mi
      dap_base_asset_files_resources:
        limits:
          cpu: "4"
          memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

      wkc_data_rules_resources:
         limits:
           ephemeral-storage: 2Gi
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconciliation:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale the replicas down by first scaling down the portal-job-manager:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
Next steps
 
Once you've completed patching and installing the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration,  
or continue with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration or if migration fails, you will need to revert these cluster tuning changes back.

Reverting changes for small-scale clusters
After the migration, you will need to run the following steps to rollback the changes done during the cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, you will need to run the following steps to rollback the changes done during the cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. The effect of this command takes place after CCS has been taken out of maintenance mode in step 6.  
    Note: Double-check whether any of the following parameters were already customized on the cluster. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch wkc wkc-cr --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
     
  5. Scale the PJM replicas back up by scaling up the portal-job-manager:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be reverted as follows. Note: The migration toolkit does not need to be reverted.
 
To revert the IIS image overrides, proceed with the following steps. ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_compute_image:
        Name:          is-en-compute-image@sha256
        Tag:           e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
        tag_metadata:  b84-migration-b65
    iis_en_conductor_image:
        Name:          is-engine-image@sha256
        Tag:           5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
        tag_metadata:  b84-migration-b65
    iis_services_image:
        Name:          is-services-image@sha256
        Tag:           4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
        tag_metadata:  b84-migration-b65
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"},{"op":"remove","path":"/spec/image_digests/portal_job_manager_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
 
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, run the following two steps to re-sync processes and ensure that imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) that the migration assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the logs:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
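
Instead of polling pod status by hand, you can block on the jobs themselves with the standard Kubernetes job "complete" condition. A dry-run sketch (the job names are taken from step 1 above; remove the echo to actually wait):

```shell
# Print (dry run) an "oc wait" command for each re-sync job created in step 1.
for job in wkc-search-reindexing-job wkc-search-lineage-job; do
  echo oc wait --for=condition=complete "job/$job" \
    -n "${WKC_NAMESPACE:-wkc}" --timeout=24h
done
```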
 
 
Patch nameLegacy migration toolkit and IIS patches
Released on4 June 2025
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
 This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment:
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
    "auths": {
    "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
    "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
    }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command to copy the patch images from the IBM production registry to the local private registry. Using the appropriate auth.json file, copy the patch images from the IBM production registry to the Openshift cluster registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:870bc812e971a0250132dde855b37bd43056654a9a7224cf33cf4dcd1c6139db \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:870bc812e971a0250132dde855b37bd43056654a9a7224cf33cf4dcd1c6139db
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810 \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609 \
        docker://<local private registry>/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609
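The eight skopeo copy commands above all follow the same source/destination pattern, so they can be driven by a loop. The following is a minimal sketch, not part of the official procedure: AUTH_FILE and LOCAL_REGISTRY are placeholders you must set for your environment, the image list repeats the digests from this patch, and DRY_RUN=1 (the default here) only prints each command instead of running skopeo:

```shell
# Sketch only: loop over the patch images instead of issuing eight separate
# skopeo commands. DRY_RUN=1 prints the commands for review; set DRY_RUN=0
# to actually copy. AUTH_FILE and LOCAL_REGISTRY are placeholders.
AUTH_FILE="${AUTH_FILE:-$HOME/.airgap/auth.json}"
LOCAL_REGISTRY="${LOCAL_REGISTRY:-registry.example.com:5000}"
DRY_RUN="${DRY_RUN:-1}"

IMAGES="is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
legacy-migration@sha256:870bc812e971a0250132dde855b37bd43056654a9a7224cf33cf4dcd1c6139db
portal-job-manager@sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810
catalog_master@sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc
catalog-api-aux_master@sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f
asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609"

COPIED=0
for img in $IMAGES; do
  CMD="skopeo copy --all --authfile \"$AUTH_FILE\" --dest-tls-verify=false --src-tls-verify=false docker://cp.icr.io/cp/cpd/$img docker://$LOCAL_REGISTRY/cp/cpd/$img"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$CMD"      # review the command without running it
  else
    eval "$CMD"
  fi
  COPIED=$((COPIED + 1))
done
echo "processed $COPIED images"
```

Running it once with DRY_RUN=1 and inspecting the printed commands before a real run is a cheap way to catch a wrong registry path.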
                                                                                            
     
To complete the installation, follow the steps in the next section.

Apply patches required for migration
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207","tag_metadata":"b84-migration-b65"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed","tag_metadata":"b84-migration-b65"},"iis_services_image":{"name":"is-services-image@sha256","tag":"4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad","tag_metadata":"b84-migration-b65"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
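The wait in steps 4 and 5 can be automated with a polling loop rather than re-running oc get by hand. The sketch below is illustrative only: get_status is a stub standing in for a real oc query (for example, oc get iis iis-cr with a -o jsonpath expression; the exact status field name depends on your operator version), so the script runs outside a cluster:

```shell
# Sketch: poll until the iis-cr reconciliation reports Completed.
# get_status is a stub so this is runnable outside a cluster; replace it
# with a real query such as:
#   oc get iis iis-cr -n "${PROJECT_CPD_INST_OPERANDS}" -o jsonpath=...
# using whichever status field your operator version exposes.
get_status() {
  echo "Completed"
}

wait_for_reconcile() {
  tries=0
  while [ "$tries" -lt 60 ]; do        # up to ~30 minutes at 30s intervals
    if [ "$(get_status)" = "Completed" ]; then
      echo "Reconciliation completed."
      return 0
    fi
    tries=$((tries + 1))
    sleep 30
  done
  echo "Timed out waiting for reconciliation." >&2
  return 1
}

wait_for_reconcile
```

A bounded loop with a timeout is preferable to an unbounded watch here, because a stuck reconciliation should surface as a failure rather than hang the upgrade run.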
 
Install the migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Proceed to the section Upgrade the cluster to 4.8.x only after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.x
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:2774b8726972e786f6d8eab07608e3515ded00f6c108a12767934b056b06b810","catalog_api_aux_image":"sha256:b89154389861198a21d1eecba85bf0b635224e248ff81d2d8b8d39c86fe79a1f","catalog_api_image":"sha256:4d81767c76c05dd372d278c240bcf28e0c5f02497a8ed5cad59116c6f60daacc","asset_files_api_image":"sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609"}}}'
     
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
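The per-pod deletion in step 5 can be done in a single loop. A sketch under stated assumptions: the pod names below are samples, the namespace placeholder follows the document's convention, and oc_cmd echoes each oc call so the commands can be reviewed (a dry run) before you execute them for real:

```shell
# Sketch: restart every ngp-projects-api pod in one pass.
# oc_cmd only echoes the command; drop the echo to actually delete pods.
NS="${PROJECT_CPD_INST_OPERANDS:-wkc}"   # placeholder namespace

oc_cmd() {
  echo "oc $*"
}

# In a real cluster, build the list with:
#   oc get pods -n "$NS" -o custom-columns=POD:.metadata.name | grep ngp-projects-api
PODS="ngp-projects-api-aaaaa ngp-projects-api-bbbbb"   # sample pod names

RESTARTED=0
for pod in $PODS; do
  oc_cmd delete pod "$pod" -n "$NS"
  RESTARTED=$((RESTARTED + 1))
done
```

Deleting the pods one at a time (rather than all at once) lets the deployment keep at least one replica serving while each replacement comes up.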
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scale configuration (scaleConfig) size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. Increase the Java heap size:
    1. Change the maximum Java heap size for the iis-services pod:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m, which sets the maximum heap to 16 GB for mid-size and large-size clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that the worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are both below 80 percent, leave everything as is.
    If CPU or memory usage is above 80 percent, continue with the following steps.

  8. Choose a node that has more free memory and CPU than the others. In this example, worker3 has the most free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all worker nodes except the one with the free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This will schedule the iis-services pod on to worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
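Steps 8 through 10 can be collected into one script. This is a sketch only: the worker names and the choice of worker3 as the target come from the example above, and run echoes each oc command so the whole sequence can be dry-run before touching the cluster:

```shell
# Sketch of the pin-iis-services-to-one-node flow from steps 8-10.
# Commands are echoed, not executed; remove the echo in run() to apply them.
TARGET="worker3"                      # the node with the most free CPU/memory
OTHERS="worker1 worker2 worker4"      # all other worker nodes
IIS_POD="iis-services-xxxxx"          # current iis-services pod name

run() { echo "oc $*"; }

# Cordon every node except the target so the pod can only land there.
for node in $OTHERS; do run adm cordon "$node"; done

# Delete the pod; the scheduler recreates it on the only schedulable node.
run delete pod "$IIS_POD"

# Once the pod is on the target, cordon the target so no further pods are
# scheduled onto it, and release the other nodes.
run adm cordon "$TARGET"
for node in $OTHERS; do run adm uncordon "$node"; done
```

The design relies on cordoning as a scheduling fence: with every other node unschedulable, the replacement pod has exactly one place to go.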
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and copy the following content into the specpatch.json file. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory, otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. This call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already existed before the call. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double check the portal-job-manager pod (not deployment), and make sure new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
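The specpatch.json from step 1.1 can be written and sanity-checked from the shell. A sketch, assuming the small-cluster limits shown above; RSI_DIR is a placeholder for whichever rsi directory applies to your cpd-cli workspace:

```shell
# Sketch: write the small-cluster specpatch.json and do a quick sanity check.
# RSI_DIR is a placeholder: use $CPD_CLI_MANAGE_WORKSPACE/work/rsi if that
# variable is defined, otherwise cpd-cli-workspace/olm-utils-workspace/work/rsi/.
RSI_DIR="${RSI_DIR:-.}"

cat > "$RSI_DIR/specpatch.json" <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF

# Quick structural check: the file should contain exactly three replace ops.
OPS=$(grep -c '"op":"replace"' "$RSI_DIR/specpatch.json")
echo "replace ops: $OPS"
```

The quoted heredoc delimiter ('EOF') keeps the shell from expanding anything inside the JSON, so the patch lands on disk byte for byte.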
 
For medium-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and copy the following content into the specpatch.json file. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory, otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. This call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already existed before the call. If it fails, you can stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr --type merge --patch '{"spec": {
        "bg_resources":{
            "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral-storage": "50Mi"},
            "limits":{"cpu": "2", "memory": "4Gi", "ephemeral-storage": "2Gi"}
        }
    }}'
     
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double check the portal-job-manager pod (not deployment), and make sure new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the output of oc get wkc wkc-cr -o yaml, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
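To confirm the scale-down took effect, the replica count can be checked with a small helper. This is a sketch: check_replicas is a hypothetical name, and the oc invocation in the comment assumes WKC_NAMESPACE is exported.

```shell
# Hypothetical helper: compare an observed replica count with the
# expected one and report the result.
check_replicas() {
  actual="$1"
  expected="$2"
  if [ "$actual" = "$expected" ]; then
    echo "ok: $expected replica(s)"
  else
    echo "mismatch: found $actual, expected $expected" >&2
    return 1
  fi
}

# Against the cluster (assumes WKC_NAMESPACE is set):
#   check_replicas "$(oc get deployment portal-job-manager \
#       -n "$WKC_NAMESPACE" -o jsonpath='{.spec.replicas}')" 1
```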
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. Note: Check whether any of the following parameters were already customized on the cluster before tuning. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Check whether any of the following parameters were already customized on the cluster before tuning. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. The change takes effect after CCS is taken out of maintenance mode in step 6.
    Note: Check whether any of the following parameters were already customized on the cluster before tuning. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Check whether any of the following parameters were already customized on the cluster before tuning. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch wkc wkc-cr --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
     
  5. Scale the portal-job-manager deployment back up by running:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed by following the steps below. Note: The migration toolkit itself does not need to be reverted.
 
In the following commands, ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_compute_image:
      Name:          is-en-compute-image@sha256
      Tag:           e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
      tag_metadata:  b84-migration-b65
    iis_en_conductor_image:
      Name:          is-engine-image@sha256
      Tag:           5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      tag_metadata:  b84-migration-b65
    iis_services_image:
      Name:          is-services-image@sha256
      Tag:           4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      tag_metadata:  b84-migration-b65
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
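Rather than re-running the get command by hand, the reconciliation can be polled with a generic helper. This is a sketch: wait_for is a hypothetical name, and the status field in the usage comment is an assumption; verify the actual field with oc get iis iis-cr -o yaml on your cluster.

```shell
# Sketch: poll a command until its output matches an expected value.
# Usage: wait_for <expected> <max_tries> <cmd...>
wait_for() {
  expected="$1"
  tries="$2"
  shift 2
  i=0
  while [ "$i" -lt "$tries" ]; do
    out=$("$@")
    if [ "$out" = "$expected" ]; then
      echo "done"
      return 0
    fi
    i=$((i + 1))
    sleep "${WAIT_INTERVAL:-30}"
  done
  echo "timed out (last status: $out)" >&2
  return 1
}

# Against the cluster (the status field name is an assumption; verify on your CR):
#   wait_for Completed 120 oc get iis iis-cr \
#       -n "$PROJECT_CPD_INST_OPERANDS" -o jsonpath='{.status.iisStatus}'
```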
 
Reverting the migration toolkit support image changes
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"},{"op":"remove","path":"/spec/image_digests/portal_job_manager_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager, and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, run the following two steps to re-sync the processes and ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take a while to complete, depending on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) into which you imported the migrated assets, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search

      or:

      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
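To check both job pods at once, the status filter can be wrapped in a small function. This is a sketch: all_search_jobs_done is a hypothetical name that only parses pod-listing lines read from stdin.

```shell
# Sketch: read `oc get pod` output lines on stdin and report whether
# every wkc-search-* pod has reached the Completed status.
all_search_jobs_done() {
  pending=$(grep 'wkc-search' | grep -cv 'Completed')
  if [ "$pending" -eq 0 ]; then
    echo "done"
  else
    echo "pending: $pending pod(s) not Completed"
  fi
}

# Usage against the cluster (assumes WKC_NAMESPACE is set):
#   oc get pod -n "$WKC_NAMESPACE" | all_search_jobs_done
```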
 
 
 
Patch name: Legacy migration toolkit and IIS patches
Released on: 23 May 2025
Service assembly: wkc
Applies to service version: Watson Knowledge Catalog 4.6.x
Applies to platform version: Cloud Pak for Data 4.6.x
Description: This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x.
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment:
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
    "auths": {
    "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
    "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
    }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command to copy the patch images from the IBM production registry to the local private registry. Using the appropriate auth.json file, copy the patch images from the IBM production registry to the Openshift cluster registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:728b25634693be66b3b7846057a8dbe72c8b3b33f71b7f591c18d7f8fb5cade1 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:728b25634693be66b3b7846057a8dbe72c8b3b33f71b7f591c18d7f8fb5cade1
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:07e08e8e3b837699b6e50551048ce30b0709386522692488a86398878b05430e \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:07e08e8e3b837699b6e50551048ce30b0709386522692488a86398878b05430e
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:3be5ff21209bca023b39747236ece9e696ec6d0e26ddf449cd5e749a2a5c9e79 \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:3be5ff21209bca023b39747236ece9e696ec6d0e26ddf449cd5e749a2a5c9e79
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:5851685a2e58c487060c338a73acea722cfa853b6e996f9f325b8216ff5ea81f \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:5851685a2e58c487060c338a73acea722cfa853b6e996f9f325b8216ff5ea81f
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609 \
        docker://<local private registry>/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609
     
To complete the installation, follow the steps in the next section.
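The eight skopeo copy invocations in step 7 differ only in the image reference, so they can be generated from a single list. This is a sketch: AUTH_FILE and LOCAL_REGISTRY are placeholders for your values, and the loop only prints the commands for review; remove the echo to execute them.

```shell
# Sketch: generate the skopeo copy commands for all patch images from
# one list instead of repeating the invocation per image.
AUTH_FILE="${AUTH_FILE:-auth.json}"                 # path to your auth.json
LOCAL_REGISTRY="${LOCAL_REGISTRY:-registry.local}"  # your private registry

# Image references taken from the skopeo commands above.
PATCH_IMAGES="
is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
legacy-migration@sha256:728b25634693be66b3b7846057a8dbe72c8b3b33f71b7f591c18d7f8fb5cade1
portal-job-manager@sha256:07e08e8e3b837699b6e50551048ce30b0709386522692488a86398878b05430e
catalog_master@sha256:3be5ff21209bca023b39747236ece9e696ec6d0e26ddf449cd5e749a2a5c9e79
catalog-api-aux_master@sha256:5851685a2e58c487060c338a73acea722cfa853b6e996f9f325b8216ff5ea81f
asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609
"

# Print one skopeo copy command per image; remove the echo to run them.
for img in $PATCH_IMAGES; do
  echo skopeo copy --all --authfile "$AUTH_FILE" \
    --dest-tls-verify=false --src-tls-verify=false \
    "docker://cp.icr.io/cp/cpd/$img" \
    "docker://$LOCAL_REGISTRY/cp/cpd/$img"
done
```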

Apply patches required for migration
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207","tag_metadata":"b84-migration-b65"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed","tag_metadata":"b84-migration-b65"},"iis_services_image":{"name":"is-services-image@sha256","tag":"4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad","tag_metadata":"b84-migration-b65"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
 
Install the migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.8.5 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.x
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If doing an air-gapped upgrade install, ensure that the legacy-migration image is downloaded to the local registry.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:07e08e8e3b837699b6e50551048ce30b0709386522692488a86398878b05430e","catalog_api_aux_image":"sha256:5851685a2e58c487060c338a73acea722cfa853b6e996f9f325b8216ff5ea81f","catalog_api_image":"sha256:3be5ff21209bca023b39747236ece9e696ec6d0e26ddf449cd5e749a2a5c9e79","asset_files_api_image":"sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609"}}}'
     
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
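The get-and-delete in step 5 can be combined into one pipeline. This is a sketch: select_ngp_pods is a hypothetical helper that filters the pod listing, kept separate so the filter itself can be tested.

```shell
# Sketch: filter a newline-separated pod listing down to the
# ngp-projects-api pods.
select_ngp_pods() {
  grep '^ngp-projects-api'
}

# Against the cluster (assumes PROJECT_CPD_INST_OPERANDS is set). Each
# deleted pod is recreated automatically by its deployment:
#   oc get pods -n "$PROJECT_CPD_INST_OPERANDS" \
#       -o custom-columns=POD:.metadata.name --no-headers \
#     | select_ngp_pods \
#     | xargs -n1 oc delete pod -n "$PROJECT_CPD_INST_OPERANDS"
```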
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, tune your cluster based on its scaleConfig size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. Increase the Java heap size:
    1. Edit the config map that sets the maximum Java heap size of the iis-services pod:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the heap size to 16 GB for mid-size and large-size clusters.
  4. Increase the maximum number of objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi


         

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that the worker node has sufficient resources:

    oc adm top nodes


    If both CPU and memory usage are below 80 percent, leave everything as is.  
    If CPU or memory usage exceeds 80 percent, continue with the following steps.
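
  The 80 percent check can be sketched as a small parser over the `oc adm top nodes` columns (assumed layout: NAME, CPU cores, CPU%, MEMORY, MEMORY%). The sample input below uses illustrative numbers, not real cluster data.

```shell
# Sketch: apply the 80% rule to `oc adm top nodes` output.
# In a live cluster: oc adm top nodes --no-headers | check_nodes
check_nodes() {
  awk '{ cpu = $3; mem = $5; gsub(/%/, "", cpu); gsub(/%/, "", mem);
         if (cpu + 0 > 80 || mem + 0 > 80) print $1 " overloaded";
         else                              print $1 " ok" }'
}
# Illustrative sample rows, not real cluster data:
printf 'worker3 500m 20%% 4Gi 25%%\nworker4 2000m 85%% 12Gi 60%%\n' | check_nodes
```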

  8. Choose the node that has the most free memory and CPU. In this example, worker3 has the most free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all worker nodes except the one with the most free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This schedules the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
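
The cordon bookkeeping in steps 8 and 10 can be sketched as a helper that prints the cordon command for every worker except the target node. The node names are the examples used in these steps, and the commands are printed rather than executed so they can be reviewed first.

```shell
# Sketch: print (not run) the cordon command for every worker except the
# target, mirroring steps 8 and 10. Pipe the output to `sh` only after
# reviewing it.
cordon_all_except() {
  target=$1; shift
  for node in "$@"; do
    [ "$node" = "$target" ] || echo "oc adm cordon $node"
  done
}
cordon_all_except worker3 worker1 worker2 worker3 worker4
```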
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
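
Before proceeding, it is worth confirming that both backups were actually written. This is a quick sanity-check sketch; the helper name is hypothetical.

```shell
# Sketch: confirm that both CR backups exist and are non-empty before
# making any tuning changes (helper name is hypothetical).
check_backups() {
  for f in "$@"; do
    if [ -s "$f" ]; then
      echo "$f saved"
    else
      echo "WARNING: $f missing or empty"
    fi
  done
}
check_backups ccs.bak.yaml wkc.bak.yaml
```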
For small-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the PJM (portal-job-manager) resource limit through an RSI patch:
    1. Create a file named specpatch.json and copy the following content into the specpatch.json file. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory, otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.
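
  As a convenience, the specpatch.json file from sub-step 1 can be generated and validated with a short script. The workspace path logic follows the note in sub-step 1; the use of python3 for JSON validation is an assumption about your environment.

```shell
# Sketch: write specpatch.json to the cpd-cli workspace path described in
# sub-step 1 and sanity-check that it parses as JSON. Assumptions: python3
# is available; the default workspace path matches your cpd-cli layout.
RSI_DIR="${CPD_CLI_MANAGE_WORKSPACE:-cpd-cli-workspace/olm-utils-workspace}/work/rsi"
mkdir -p "$RSI_DIR"
cat > "$RSI_DIR/specpatch.json" <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF
python3 -m json.tool "$RSI_DIR/specpatch.json" > /dev/null && echo "specpatch.json is valid JSON"
```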

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
 
For medium-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the PJM (portal-job-manager) resource limit through an RSI patch:
    1. Create a file named specpatch.json and copy the following content into the specpatch.json file. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory, otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
                                                                                                                                      
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                    
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr --type merge --patch '{"spec": {
      "bg_resources":{
        "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral-storage": "50Mi"},
        "limits":{"cpu": "2", "memory": "4Gi", "ephemeral-storage": "2Gi"}
      }
    }}'
     
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  6. Scale the PJM replicas down:
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration.  
Or continue with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already set on the cluster before tuning. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already set on the cluster before tuning. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
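
One way to perform the double-check mentioned above is to scan the ccs.bak.yaml backup taken in the Tuning for import section for the keys you are about to remove. The helper and the sample file below are illustrative only.

```shell
# Sketch: scan the ccs.bak.yaml backup (taken in "Tuning for import") for
# keys that carried values before tuning, so you know which settings to
# restore after removing the tuning changes.
check_preexisting() {
  backup=$1; shift
  for key in "$@"; do
    if grep -q "$key:" "$backup"; then
      echo "$key was set before tuning - restore its previous value"
    fi
  done
}
# Demo against a tiny sample instead of a real backup:
printf 'spec:\n  couchdb_search_resources:\n    limits:\n      cpu: "2"\n' > /tmp/ccs.sample.yaml
check_preexisting /tmp/ccs.sample.yaml catalog_api_jvm_args_extras couchdb_search_resources
```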
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.  
    Note: Double-check whether any of the following parameters were already set on the cluster before tuning. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already set on the cluster before tuning. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch wkc wkc-cr --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
     
  5. Scale the portal-job-manager replicas back up by running:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, revert the IIS image overrides by following the steps below; the migration toolkit itself does not need to be reverted. In these steps, ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_compute_image:
      Name:          is-en-compute-image@sha256
      Tag:           e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
      tag_metadata:  b84-migration-b65
    iis_en_conductor_image:
      Name:          is-engine-image@sha256
      Tag:           5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      tag_metadata:  b84-migration-b65
    iis_services_image:
      Name:          is-services-image@sha256
      Tag:           4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      tag_metadata:  b84-migration-b65
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"},{"op":"remove","path":"/spec/image_digests/portal-job-manager_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager, and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, run the following two steps to re-sync the processes and ensure that imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take a long time to complete, depending on how many assets were migrated.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. You can specify the catalogs into which the migrated assets were imported, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the logs:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search

      or:

      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
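
Instead of polling pod status by hand, you can also wait on the two jobs directly. This sketch only prints the `oc wait` commands so they can be reviewed; the 24h timeout and the "wkc" namespace fallback are illustrative values, not prescriptions.

```shell
# Sketch: wait on both re-sync jobs instead of polling pods by hand.
# This only PRINTS the `oc wait` commands; the timeout and the namespace
# fallback are illustrative values.
wait_cmds() {
  for job in wkc-search-reindexing-job wkc-search-lineage-job; do
    echo "oc wait -n $1 --for=condition=complete job/$job --timeout=24h"
  done
}
wait_cmds "${WKC_NAMESPACE:-wkc}"
```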
 
 
 
Patch nameLegacy migration toolkit and IIS patches
Released on9 May 2025
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
 This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment:
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command with the appropriate auth.json file to copy the patch images from the IBM production registry to the local private (OpenShift cluster) registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:6e91b90e109bae351a0ccb94f6d41bb9024e91b26b525e76f2faa2f6e382d0fb \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:6e91b90e109bae351a0ccb94f6d41bb9024e91b26b525e76f2faa2f6e382d0fb
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:07e08e8e3b837699b6e50551048ce30b0709386522692488a86398878b05430e \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:07e08e8e3b837699b6e50551048ce30b0709386522692488a86398878b05430e
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:3be5ff21209bca023b39747236ece9e696ec6d0e26ddf449cd5e749a2a5c9e79 \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:3be5ff21209bca023b39747236ece9e696ec6d0e26ddf449cd5e749a2a5c9e79
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:5851685a2e58c487060c338a73acea722cfa853b6e996f9f325b8216ff5ea81f \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:5851685a2e58c487060c338a73acea722cfa853b6e996f9f325b8216ff5ea81f
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609 \
        docker://<local private registry>/cp/cpd/asset-files-api@sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609
     
     
To complete the installation, follow the steps in the next section.
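The eight near-identical skopeo invocations can also be driven from a single list of image@digest references. This is a sketch, not part of the official instructions: AUTH_FILE and PRIVATE_REGISTRY are placeholders for your environment, the image list below is shortened (use the full set of digests from the commands above), and the leading echo makes every copy a dry run.

```shell
#!/bin/sh
# Dry-run generator for the skopeo copy commands.
# AUTH_FILE and PRIVATE_REGISTRY are placeholders; override them for your cluster.
AUTH_FILE="${AUTH_FILE:-$HOME/.airgap/auth.json}"
PRIVATE_REGISTRY="${PRIVATE_REGISTRY:-registry.example.com:5000}"

# One image@digest reference per line (shortened here for illustration).
IMAGES="is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
legacy-migration@sha256:6e91b90e109bae351a0ccb94f6d41bb9024e91b26b525e76f2faa2f6e382d0fb"

for img in $IMAGES; do
  # Remove 'echo' to actually perform the copy.
  echo skopeo copy --all --authfile "$AUTH_FILE" \
    --dest-tls-verify=false --src-tls-verify=false \
    "docker://cp.icr.io/cp/cpd/$img" \
    "docker://$PRIVATE_REGISTRY/cp/cpd/$img"
done
```

Inspect the printed commands first, then remove the echo to run the copies for real.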

Apply patches required for migration
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207","tag_metadata":"b84-migration-b65"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed","tag_metadata":"b84-migration-b65"},"iis_services_image":{"name":"is-services-image@sha256","tag":"4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad","tag_metadata":"b84-migration-b65"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
 
Install the migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Only proceed to the section Upgrade the cluster to 4.8.x after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.x
 
Note: For the IIS custom resource you should not revert the IIS images prior to performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If doing an air-gapped upgrade install, ensure that the legacy-migration image is downloaded to the local registry.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:07e08e8e3b837699b6e50551048ce30b0709386522692488a86398878b05430e","catalog_api_aux_image":"sha256:5851685a2e58c487060c338a73acea722cfa853b6e996f9f325b8216ff5ea81f","catalog_api_image":"sha256:3be5ff21209bca023b39747236ece9e696ec6d0e26ddf449cd5e749a2a5c9e79","asset_files_api_image":"sha256:98a8f58b8bfcebc742b52fea921d24ee8eafa45e81c1c5a9f12ea1b18b2f7609"}}}'
     
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
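The restart in step 5 can be collapsed into one pipeline instead of deleting each pod by name. A sketch, not official guidance: the sample `list_pods` function stands in for the live `oc get pods -o name` call (which emits `pod/<name>` lines), and the echo on `oc delete` makes this a dry run.

```shell
#!/bin/sh
# Dry run: print a delete command for every ngp-projects-api pod.
# Replace the sample printf with the live query:
#   oc get pods -n "$PROJECT_CPD_INST_OPERANDS" -o name
NS="${PROJECT_CPD_INST_OPERANDS:-wkc}"

list_pods() {
  printf '%s\n' \
    pod/ngp-projects-api-6f7c9-abcde \
    pod/catalog-api-5d8f7-xyz12
}

list_pods |
  grep ngp-projects-api |
  while read -r p; do
    # Remove 'echo' to actually restart the pod.
    echo oc delete "$p" -n "$NS"
  done
```

The grep keeps only the ngp-projects-api pods; Kubernetes recreates each deleted pod from its deployment.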
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you will need to tune your cluster based on the scaleconfig size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. For Java heap, run the following:
    1. Change the Java heap maximum size of the iis-services pod by editing its config map:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the heap to 16 GB for both mid-size and large-size clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.
    If either CPU or memory usage exceeds 80 percent, continue with the following steps.

  8. Choose the node with the most free memory and CPU. In this example, worker3 has the most free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all the other worker nodes, leaving only the node with free memory and CPU schedulable:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This will schedule the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
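The 80 percent check in step 7 can be automated by parsing `oc adm top nodes` output. A sketch under two assumptions: the sample text below stands in for the live command, and the field positions follow the default NAME / CPU(cores) / CPU% / MEMORY(bytes) / MEMORY% column layout.

```shell
#!/bin/sh
# Flag nodes whose CPU% or MEMORY% exceeds 80 in `oc adm top nodes` output.
# The sample stands in for: oc adm top nodes
sample='NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
worker1   3500m        88%     25Gi            81%
worker2   1200m        30%     12Gi            40%
worker3   400m         10%     6Gi             20%'

busy_nodes() {
  awk 'NR > 1 {
    cpu = $3; mem = $5
    gsub(/%/, "", cpu); gsub(/%/, "", mem)   # strip the % suffix before comparing
    if (cpu + 0 > 80 || mem + 0 > 80) print $1
  }'
}

echo "$sample" | busy_nodes   # prints: worker1
```

Any node this prints is a poor candidate for hosting the iis-services pod; against a live cluster, pipe `oc adm top nodes` into `busy_nodes` instead of the sample.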
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the PJM (portal-job-manager) resource limit through an RSI patch.
    1. Create a file named specpatch.json and copy the following content into the specpatch.json file. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory, otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. The above call might fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double check the portal-job-manager pod (not deployment), and make sure new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the output of oc get wkc wkc-cr -o yaml, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
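Before running create-rsi-patch, it can help to confirm that specpatch.json is valid JSON, since a malformed patch file is a common cause of failure. A sketch using python3's standard-library json.tool (any JSON validator works; the workspace path shown in the comment is the one from step 1.1):

```shell
#!/bin/sh
# Sketch: sanity-check that a patch file parses as JSON before handing it
# to cpd-cli manage create-rsi-patch. Requires python3 on the PATH.
validate_json() {
  # Prints a confirmation only when the file parses as JSON.
  python3 -m json.tool "$1" > /dev/null 2>&1 && echo "$1: valid JSON"
}

# Example (adjust to wherever you saved the file in step 1.1):
# validate_json "$CPD_CLI_MANAGE_WORKSPACE/work/rsi/specpatch.json"
```

If the function prints nothing, fix the JSON before retrying the RSI patch.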
 
For medium-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the PJM (portal-job-manager) resource limit through an RSI patch, as in the previous section.
    1. Create a file named specpatch.json and copy the following content into the specpatch.json file. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory, otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
                                                                                                                                      
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. The above call might fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
                                                                                                                    
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                    
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr --type merge --patch '{"spec": {
        "bg_resources":{
            "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral-storage": "50Mi"},
            "limits":{"cpu": "2", "memory": "4Gi", "ephemeral-storage": "2Gi"}
        }
    }}'
     
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  6. Scale the PJM replicas down:
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
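If you script this step, you can poll until the deployment reports the expected number of ready replicas before continuing. A minimal sketch; the wait_for_replicas helper name and the 10-second interval are illustrative, and it assumes an active oc login:

```shell
# Illustrative helper: poll a deployment until it reports the expected
# number of ready replicas. Assumes an active `oc` login to the cluster.
wait_for_replicas() {
  ns="$1"; deploy="$2"; want="$3"
  while true; do
    ready=$(oc get deployment "$deploy" -n "$ns" -o jsonpath='{.status.readyReplicas}')
    [ "${ready:-0}" = "$want" ] && break
    echo "waiting: $deploy has ${ready:-0}/$want ready replicas"
    sleep 10
  done
}

# Example: wait_for_replicas "${WKC_NAMESPACE}" portal-job-manager 1
```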
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration.  
Or continue with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for migration

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. The change takes effect after CCS is taken out of maintenance mode in step 6.
    Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch wkc wkc-cr --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
     
  5. Scale the portal-job-manager deployment back up by running:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed by following these steps. Note: the migration toolkit does not need to be reverted.
 
In these steps, ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_compute_image:
      Name:          is-en-compute-image@sha256
      Tag:           e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
      tag_metadata:  b84-migration-b65
    iis_en_conductor_image:
      Name:          is-engine-image@sha256
      Tag:           5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      tag_metadata:  b84-migration-b65
    iis_services_image:
      Name:          is-services-image@sha256
      Tag:           4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      tag_metadata:  b84-migration-b65
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"},{"op":"remove","path":"/spec/image_digests/portal-job-manager_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager, and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, complete the following two steps to ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. You can specify the catalogs that migration assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
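The job monitoring in step 1 can also be automated with oc wait instead of repeatedly checking pod status. A sketch; it assumes the job names created by the script match those listed above, and the 24-hour timeout is an arbitrary upper bound to adjust for your data volume:

```shell
# Illustrative helper: block until both re-sync jobs complete.
# Job names are those created by cpd_gs_graph_resync.sh (see above);
# the 24h timeout is an arbitrary upper bound.
wait_for_resync_jobs() {
  ns="$1"
  for job in wkc-search-reindexing-job wkc-search-lineage-job; do
    oc wait --for=condition=complete "job/$job" -n "$ns" --timeout=24h
  done
}

# Example: wait_for_resync_jobs "${WKC_NAMESPACE}"
```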
 
 
 
Patch nameLegacy migration toolkit and IIS patches
Released on11 March 2025
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
 This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment:
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
        "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.
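The auth values in this file are the base64 encoding of id:apikey (or id:password). A quick way to generate one from the shell; the "cp" user id and the key below are placeholders for illustration only:

```shell
# Encode "<id>:<apikey>" as base64 for the auth.json "auth" field.
# The id "cp" and the key below are placeholders, not real credentials.
ID="cp"
APIKEY="myEntitlementKey"
AUTH=$(printf '%s:%s' "$ID" "$APIKEY" | base64 | tr -d '\n')
echo "$AUTH"
```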

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.
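Given the mirror mapping above, a source image reference can be rewritten to its mirrored location by replacing the registry prefix. A sketch; the helper name and the registry hostname in the example are hypothetical:

```shell
# Rewrite a cp.icr.io/cp/cpd image reference to point at the mirrored
# registry, per the ImageContentSourcePolicy mapping shown above.
map_to_mirror() {
  src_ref="$1"
  mirror_prefix="$2"   # e.g. the mirror path from the describe output
  printf '%s\n' "$src_ref" | sed "s|^cp\.icr\.io/cp/cpd|$mirror_prefix|"
}

# Hypothetical registry hostname for illustration:
map_to_mirror \
  "cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad" \
  "registry.example.com:5000/cp/cpd"
```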

  7. Use skopeo with the appropriate auth.json file to copy the patch images from the IBM production registry to the local private registry.
    For Cloud Pak for Data version 4.8.8:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:95f31c937e27b7dbf882ba72de42d36b088c91667bfcc88819f895bb0bd58abb \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:95f31c937e27b7dbf882ba72de42d36b088c91667bfcc88819f895bb0bd58abb

     

    For Cloud Pak for Data version 4.8.7:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:95f31c937e27b7dbf882ba72de42d36b088c91667bfcc88819f895bb0bd58abb \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:95f31c937e27b7dbf882ba72de42d36b088c91667bfcc88819f895bb0bd58abb
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:03185c8997eb1292ad6590baa8af745f924c6eb6d925c2a170df351d1b458f3a \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:03185c8997eb1292ad6590baa8af745f924c6eb6d925c2a170df351d1b458f3a
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:17914fde452acd33197dfc7b3abcd156851b8e9dc7efd88cb37053a5a1a0e904 \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:17914fde452acd33197dfc7b3abcd156851b8e9dc7efd88cb37053a5a1a0e904
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/asset-files-api@sha256:0dbab987b768d26349d664db67382a658efd993e2d3ef05cc3dca96c05c3f345 \
        docker://<local private registry>/cp/cpd/asset-files-api@sha256:0dbab987b768d26349d664db67382a658efd993e2d3ef05cc3dca96c05c3f345
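The repeated skopeo copy invocations above all follow one pattern and can be collapsed into a loop over the image digests; a sketch, with the same placeholder auth file and registry as in the commands above:

```shell
# Copy each listed image@digest from the IBM production registry to the
# local private registry. The placeholders (<folder path>, <local private
# registry>) must be substituted as in the commands above.
skopeo_mirror() {
  authfile="$1"; dest_registry="$2"; shift 2
  for img in "$@"; do
    skopeo copy --all --authfile "$authfile" \
      --dest-tls-verify=false --src-tls-verify=false \
      "docker://cp.icr.io/cp/cpd/$img" \
      "docker://$dest_registry/cp/cpd/$img"
  done
}

# Example:
# skopeo_mirror "<folder path>/auth.json" "<local private registry>" \
#   "is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207" \
#   "legacy-migration@sha256:95f31c937e27b7dbf882ba72de42d36b088c91667bfcc88819f895bb0bd58abb"
```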
     
To complete the installation, follow the steps in the next section.

Apply patches required for migration
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207","tag_metadata":"b84-migration-b65"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed","tag_metadata":"b84-migration-b65"},"iis_services_image":{"name":"is-services-image@sha256","tag":"4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad","tag_metadata":"b84-migration-b65"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
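The wait in steps 4 and 5 can be scripted rather than checked by hand. A minimal sketch, assuming `oc` is on the PATH and that the custom resource exposes a readable status field through `-o jsonpath` — the exact field name varies by release, so treat the jsonpath in the example as a placeholder and check `oc get iis iis-cr -o yaml` first:

```shell
# Poll a custom resource until a status field reaches an expected value,
# with a bounded number of attempts. The jsonpath and expected value are
# assumptions -- adjust them to what your cluster actually reports.
wait_for_reconcile() {
  kind=$1; name=$2; ns=$3; jsonpath=$4; expected=$5; tries=${6:-60}
  i=0
  while [ "$i" -lt "$tries" ]; do
    status=$(oc get "$kind" "$name" -n "$ns" -o jsonpath="$jsonpath" 2>/dev/null)
    echo "$kind/$name status: ${status:-unknown}"
    if [ "$status" = "$expected" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 60
  done
  echo "timed out waiting for $kind/$name" >&2
  return 1
}

# Example (hypothetical status field name):
# wait_for_reconcile iis iis-cr "${PROJECT_CPD_INST_OPERANDS}" '{.status.iisStatus}' Completed
```

The same helper can be reused later for the CCS reconciliation check, by passing `ccs` and `ccs-cr` instead.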
 
Install the migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Only proceed to the section Upgrade the cluster to 4.8.x after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.x
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):  
    Only for Cloud Pak for Data version 4.8.7:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"catalog_api_aux_image":"sha256:03185c8997eb1292ad6590baa8af745f924c6eb6d925c2a170df351d1b458f3a","catalog_api_image":"sha256:17914fde452acd33197dfc7b3abcd156851b8e9dc7efd88cb37053a5a1a0e904", "asset_files_api_image":"sha256:0dbab987b768d26349d664db67382a658efd993e2d3ef05cc3dca96c05c3f345"}}}'
     
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
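The per-pod deletion in step 5 can be done in one pass. A sketch, assuming the pods can be identified by the `ngp-projects-api` name prefix (a label selector would be more robust if one exists on your cluster):

```shell
# Delete every pod whose name starts with ngp-projects-api so that the
# owning deployment recreates them with the patched configuration.
restart_ngp_pods() {
  ns=$1
  for pod in $(oc get pods -n "$ns" -o custom-columns=POD:.metadata.name --no-headers \
                 | grep '^ngp-projects-api'); do
    oc delete pod -n "$ns" "$pod"
  done
}

# Example:
# restart_ngp_pods "${PROJECT_CPD_INST_OPERANDS}"
```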
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scaleConfig size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. For Java heap, run the following:
    1. Change the java heap max size of the iis-services pod by:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the maximum heap to 16 GB for mid-size and large-size clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.
    If either CPU or memory usage exceeds 80 percent, continue with the following steps.

  8. Choose the node that has the most free memory and CPU. In this example, worker3 has the most free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all worker nodes except the one with the free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This schedules the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
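Steps 6 through 10 above can be sketched as shell helpers. This is a sketch only: the node names are illustrative, the `oc adm top nodes` column positions (3 = CPU%, 5 = MEMORY%) should be verified against your cluster's output, and you should still confirm the pod is Running on the target node before cordoning it:

```shell
# Print nodes whose CPU% or memory% from `oc adm top nodes` exceeds a
# threshold (default 80). Assumes the usual column layout; verify locally.
busy_nodes() {
  threshold=${1:-80}
  oc adm top nodes | tail -n +2 \
    | awk -v t="$threshold" '{ if ($3 + 0 > t || $5 + 0 > t) print $1 }'
}

# Cordon every listed node except the target, evict iis-services so the
# scheduler places it on the target, then cordon the target and uncordon
# the rest. Usage: move_iis_services <target-node> <other-node>...
move_iis_services() {
  target=$1; shift
  for node in "$@"; do oc adm cordon "$node"; done
  pod=$(oc get pods --no-headers | awk '/^iis-services/ { print $1; exit }')
  oc delete pod "$pod"
  # ...verify the new pod is Running on $target before continuing...
  oc adm cordon "$target"
  for node in "$@"; do oc adm uncordon "$node"; done
}

# Example matching the text above:
# move_iis_services worker3 worker1 worker2 worker4
```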
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
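Before proceeding, it is worth confirming that both backup files were actually written and are non-empty; a small sketch:

```shell
# Verify that the CCS and WKC backup files exist and are non-empty in the
# current directory before any tuning changes are applied.
backup_ok() {
  for f in ccs.bak.yaml wkc.bak.yaml; do
    if [ ! -s "$f" ]; then
      echo "missing or empty backup: $f" >&2
      return 1
    fi
  done
  echo "backups look good"
}
```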
For small-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the PJM resource limits through an RSI patch.
    1. Create a file named specpatch.json and copy the following content into the specpatch.json file. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory, otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 again, to relogin and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
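Before rerunning create-rsi-patch after the failure described in step 1.4, you can check whether a stale olm-utils-play-v2 container is still around. A sketch, assuming podman is the container runtime as in the steps above:

```shell
# Return 0 if an olm-utils-play-v2 container is currently running.
rsi_container_running() {
  podman ps --format '{{.Names}}' | grep -qx 'olm-utils-play-v2'
}

# Example: stop the stale container only if it is running.
# if rsi_container_running; then podman stop olm-utils-play-v2; fi
```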
 
For medium-sized clusters  
Note: Ensure the cpd-cli is installed correctly and that $PATH has been set properly.
  1. Increase the PJM resource limits through an RSI patch.
    1. Create a file named specpatch.json and copy the following content into the specpatch.json file. If $CPD_CLI_MANAGE_WORKSPACE is defined, save it under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory, otherwise, save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
       
    3. Run the patch:

      cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
       
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 again, to relogin and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr --type merge --patch '{"spec": {
      "bg_resources":{
        "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral-storage": "50Mi"},
        "limits":{"cpu": "2", "memory": "4Gi", "ephemeral-storage": "2Gi"}
      }
    }}'
     
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
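Step 6 and its reversal later in "Reverting changes" are mirror images, so they can be paired in one helper. A sketch using the flag and replica values from this document:

```shell
# Toggle CCS maintenance mode and scale the portal-job-manager deployment.
# Use replicas=1 / maintenance=true while migrating, and replicas=3 /
# maintenance=false to restore normal operation afterwards.
pjm_scale() {
  ns=$1; replicas=$2; maintenance=$3
  oc patch -n "$ns" ccs ccs-cr --type merge \
    --patch "{\"spec\": {\"ignoreForMaintenance\": $maintenance}}"
  oc scale -n "$ns" deployment portal-job-manager --replicas="$replicas"
}

# During migration:
# pjm_scale "${WKC_NAMESPACE}" 1 true
# After migration:
# pjm_scale "${WKC_NAMESPACE}" 3 false
```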
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
     
  2. Remove the CCS changes for resource limits and feature settings. The change takes effect after CCS is taken out of maintenance mode in step 6.
    Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
     
  5. Scale the portal-job-manager (PJM) deployment back up to three replicas:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
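After step 6, you can confirm that the flag was reset. A minimal sketch; the oc query (an assumption about your session, and about reading the same spec field the patch sets) is stubbed here so the check runs end to end:

```shell
# On a live cluster you would query the flag directly (assumes an active oc login):
#   oc get ccs ccs-cr -n ${WKC_NAMESPACE} -o jsonpath='{.spec.ignoreForMaintenance}'
get_flag() { echo "false"; }   # stand-in for the oc query above
if [ "$(get_flag)" = "false" ]; then
  echo "CCS maintenance mode is disabled"
else
  echo "CCS is still in maintenance mode" >&2
  exit 1
fi
```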
     
 
Reverting the IIS image changes  

If needed, revert the IIS image overrides by following the steps below. Note: The migration toolkit does not need to be reverted.
 
In these steps, ${PROJECT_CPD_INST_OPERANDS} refers to the project where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: e8227bd2129f487475c22bce1c4dab523e1b67dc638d046ec67c205ead18c6ed
      tag_metadata: b84-migration-b65
    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: 5cceb05202bfd48e35c8709425d8416ad9e0937363797ac7badc0436c7296207
      tag_metadata: b84-migration-b65
    iis_services_image:
      name: is-services-image@sha256
      tag: 4d83be85a89c2eb45f82eb55d919d32e874ed3952bf25ddf8121132af17e13ad
      tag_metadata: b84-migration-b65
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
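The wait in step 3 can be scripted as a small polling loop. This is a sketch only: the jsonpath status field is an assumption (not confirmed by this document), and the oc query is stubbed so the loop structure is runnable end to end:

```shell
# Real query would look like (assumption about the status field name):
#   oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} -o jsonpath='{.status.iisStatus}'
get_status() { echo "Completed"; }   # stand-in for the oc query above
until [ "$(get_status)" = "Completed" ]; do
  echo "Waiting for IIS reconciliation..."
  sleep 30
done
echo "IIS reconciliation completed"
```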
 
Reverting the migration toolkit support image changes
The following steps revert the migration toolkit support image patches. They apply only to Cloud Pak for Data version 4.8.7.
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"}]'
                                                                                                    
     
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, complete the following two steps to re-sync processes and ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete, depending on how many assets need to be re-synced.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalogs that the migrated assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
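The pod check in step 1 can be scripted. A sketch using a captured pod listing; the listing below is illustrative, and on a real cluster you would pipe in the output of `oc get pod -n $WKC_NAMESPACE`:

```shell
# Illustrative `oc get pod` output for the two re-sync jobs (placeholder names):
PODS='wkc-search-reindexing-job-abc12   0/1   Completed   0   10m
wkc-search-lineage-job-def34      0/1   Completed   0   8m'
# Count how many re-sync job pods have reached the Completed state.
DONE=$(printf '%s\n' "$PODS" | grep -c 'Completed')
echo "Completed re-sync job pods: $DONE"
# The re-sync is finished once both job pods report Completed.
```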
 
 
 
Patch name: Legacy migration toolkit and IIS patches
Released on: 30 January 2025
Service assembly: wkc
Applies to service version: Watson Knowledge Catalog 4.6.x
Applies to platform version: Cloud Pak for Data 4.6.x
Description: This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x.
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    where <hotfix image pod> is the name of any pod running one of the images patched by this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
        - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use skopeo with the appropriate auth.json file to copy the patch images from the IBM production registry to the local private registry:  
    For Cloud Pak for Data version 4.8.8:  
     

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:90d12c276279f0d7db9b885f55016ee30b571432ba1f24f9bab59fefd1fecdd1 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:90d12c276279f0d7db9b885f55016ee30b571432ba1f24f9bab59fefd1fecdd1
    For Cloud Pak for Data version 4.8.7:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:90d12c276279f0d7db9b885f55016ee30b571432ba1f24f9bab59fefd1fecdd1 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:90d12c276279f0d7db9b885f55016ee30b571432ba1f24f9bab59fefd1fecdd1
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:03185c8997eb1292ad6590baa8af745f924c6eb6d925c2a170df351d1b458f3a \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:03185c8997eb1292ad6590baa8af745f924c6eb6d925c2a170df351d1b458f3a
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:17914fde452acd33197dfc7b3abcd156851b8e9dc7efd88cb37053a5a1a0e904 \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:17914fde452acd33197dfc7b3abcd156851b8e9dc7efd88cb37053a5a1a0e904
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/asset-files-api@sha256:0dbab987b768d26349d664db67382a658efd993e2d3ef05cc3dca96c05c3f345 \
        docker://<local private registry>/cp/cpd/asset-files-api@sha256:0dbab987b768d26349d664db67382a658efd993e2d3ef05cc3dca96c05c3f345
     
     
To complete the installation, follow the steps in the next section.
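The repeated skopeo copies in step 7 can be driven by a loop over the image digests. A dry-run sketch that only prints the commands for review: the 4.8.8 digests and the example auth.json path come from the steps above, while the private-registry value is a placeholder you must substitute.

```shell
PRIVATE_REGISTRY="<local private registry>"   # placeholder: substitute your registry
AUTHFILE="${HOME}/.airgap/auth.json"          # example path from the earlier step
DIGESTS="is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
legacy-migration@sha256:90d12c276279f0d7db9b885f55016ee30b571432ba1f24f9bab59fefd1fecdd1"
for IMG in $DIGESTS; do
  # Drop the leading `echo` to actually run the copies.
  echo skopeo copy --all --authfile "$AUTHFILE" \
    --dest-tls-verify=false --src-tls-verify=false \
    "docker://cp.icr.io/cp/cpd/${IMG}" \
    "docker://${PRIVATE_REGISTRY}/cp/cpd/${IMG}"
done
```

Reviewing the echoed commands before removing the `echo` prefix guards against copying to the wrong registry path.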

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89","tag_metadata":"b80-migration-b57"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307","tag_metadata":"b80-migration-b57"},"iis_services_image":{"name":"is-services-image@sha256","tag":"adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288","tag_metadata":"b80-migration-b57"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.8.5 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.x
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):  
    Only for Cloud Pak for Data version 4.8.7:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"catalog_api_aux_image":"sha256:03185c8997eb1292ad6590baa8af745f924c6eb6d925c2a170df351d1b458f3a","catalog_api_image":"sha256:17914fde452acd33197dfc7b3abcd156851b8e9dc7efd88cb37053a5a1a0e904", "asset_files_api_image":"sha256:0dbab987b768d26349d664db67382a658efd993e2d3ef05cc3dca96c05c3f345"}}}'
     
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
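The project-limit check in step 3 above can be expressed as simple arithmetic. A sketch with illustrative placeholder counts (not values from any real environment):

```shell
DQ_PROJECTS=150     # placeholder: Data Quality projects on the source environment
OWNED_PROJECTS=120  # placeholder: projects owned by the importing user on the target
TOTAL=$((DQ_PROJECTS + OWNED_PROJECTS))
if [ "$TOTAL" -gt 200 ]; then
  echo "Raise projects_created_per_user_limit to at least $TOTAL"
else
  echo "Default project limit of 200 is sufficient"
fi
```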
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, tune your cluster based on its scaleConfig size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. Increase the Java heap:
    1. Change the maximum Java heap size of the iis-services pod:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the heap to 16 GB for mid-size and large-size clusters.
  4. Increase the maximum number of objects in memory:
    1. Log in to the iis-services pod.
    2. For mid-size clusters, run:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that the worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.  
    If either is above 80 percent, continue with the following steps.

  8. Choose a node that has more free memory and CPU. In this example, worker3 has the most free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all worker nodes except the one with the free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This schedules the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
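Steps 8 through 10 can be sketched as a dry-run script that prints the cordon/uncordon sequence for a chosen target node. The node names are the example ones above, and the `echo` prefix keeps this review-only:

```shell
TARGET="worker3"                        # node with the most free CPU and memory
WORKERS="worker1 worker2 worker3 worker4"
# Step 8: cordon every worker except the target.
for NODE in $WORKERS; do
  [ "$NODE" = "$TARGET" ] || echo "oc adm cordon $NODE"
done
# Step 9: delete the pod so it reschedules onto the target.
echo "oc delete pod iis-services-xxxxx"
# Step 10: pin the target and reopen the other nodes.
echo "oc adm cordon $TARGET"
for NODE in $WORKERS; do
  [ "$NODE" = "$TARGET" ] || echo "oc adm uncordon $NODE"
done
```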
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the PJM resource limit through an RSI patch.
    1. Create a file named specpatch.json under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or under $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory set through the environment variable $CPD_CLI_MANAGE_WORKSPACE). Create the directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it does, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.
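Before enabling the patch, you can sanity-check the specpatch.json from step 1. A lightweight sketch that recreates the same content and uses grep as a stand-in for a real JSON validator; the /tmp/rsi-check path is only for this check:

```shell
# Recreate the patch file from step 1 (same content as shown above).
mkdir -p /tmp/rsi-check
cat > /tmp/rsi-check/specpatch.json <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF
# Count the replace operations: expect 3 (cpu, memory, ephemeral-storage).
OPS=$(grep -o '"op":"replace"' /tmp/rsi-check/specpatch.json | wc -l)
echo "replace operations found: $((OPS))"
```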

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
 
For medium-sized clusters
  1. Increase the PJM resource limit through an RSI patch, as in the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or under $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory first if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. This call might fail if an olm-utils-play-v2 container from an earlier run already exists, because that container does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.
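Before running create-rsi-patch, you can sanity-check that the spec patch file parses as valid JSON. The following is a minimal sketch: the temporary directory stands in for your cpd-cli workspace path, and python3's json.tool is just one convenient validator.

```shell
# Sketch: write the RSI spec patch and verify it is well-formed JSON before use.
# The temp directory is a stand-in for cpd-cli-workspace/olm-utils-workspace/work/rsi/.
workdir=$(mktemp -d)
cat > "$workdir/specpatch.json" <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
EOF
if python3 -m json.tool "$workdir/specpatch.json" > /dev/null; then
  echo "specpatch.json is valid JSON"
fi
```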

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr --type merge --patch '{"spec": {
      "bg_resources":{
        "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral-storage": "50Mi"},
        "limits":{"cpu": "2", "memory": "4Gi", "ephemeral-storage": "2Gi"}
      }
    }}'
     
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
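You can confirm the scale-down (and, later, the scale-up during revert) with a jsonpath query on the deployment. The following is a minimal sketch, not part of the product tooling; the oc command is injectable so the function can be exercised offline.

```shell
# Sketch: report the configured replica count of the portal-job-manager deployment.
# The oc command is injectable (defaults to oc) so this can be tested with a stub.
pjm_replicas() {
  local oc_cmd=${1:-oc}
  "$oc_cmd" get deployment portal-job-manager -n "${WKC_NAMESPACE}" \
    -o jsonpath='{.spec.replicas}'
}
```

After the scale command above, `pjm_replicas` should print 1.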
Next steps
 
After you complete patching and download the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.
    Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch wkc wkc-cr --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
     
  5. Scale the PJM replicas back up by scaling the portal-job-manager deployment:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
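After reverting, you can confirm that the removed spec fields are really gone with jsonpath queries; an empty result means the field was removed. The following is a minimal sketch, not part of the product tooling; the oc command is injectable so the function can be exercised offline.

```shell
# Sketch: confirm the migration tuning fields were removed from the CCS and WKC CRs.
# The oc command is injectable (defaults to oc) so this can be tested with a stub.
check_reverted() {
  local oc_cmd=${1:-oc}
  local ccs_left wkc_left
  ccs_left=$("$oc_cmd" get ccs ccs-cr -n "${WKC_NAMESPACE}" \
    -o jsonpath='{.spec.couchdb_search_resources}')
  wkc_left=$("$oc_cmd" get wkc wkc-cr -n "${WKC_NAMESPACE}" \
    -o jsonpath='{.spec.wkc_data_rules_resources}{.spec.bg_resources}')
  if [ -z "$ccs_left$wkc_left" ]; then
    echo "revert complete: tuning fields removed"
  else
    echo "tuning fields still present: $ccs_left$wkc_left"
  fi
}
```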
     
 
Reverting the IIS image changes  

Follow these steps to revert the IIS image patches.
 
If needed, the IIS image overrides can be removed per the instructions below. Note: The migration toolkit does not need to be reverted.
 
In the following steps, ${PROJECT_CPD_INST_OPERANDS} refers to the project (namespace) where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
      tag_metadata: b80-migration-b57
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
      tag_metadata: b80-migration-b57
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
      tag_metadata: b80-migration-b57
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
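To see at a glance which images those pods are running, you can list pod names and first-container images with a jsonpath template. The following is a minimal sketch, not part of the product tooling; the oc command is injectable so the function can be exercised offline, and the pod name patterns come from the step above.

```shell
# Sketch: list name and first-container image for the IIS pods.
# The oc command is injectable (defaults to oc) so this can be tested with a stub.
iis_pod_images() {
  local oc_cmd=${1:-oc}
  "$oc_cmd" get pods -n "${PROJECT_CPD_INST_OPERANDS}" \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}' \
    | grep -E 'iis-services|is-en-conductor|is-engine-compute'
}
```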
 
Reverting the migration toolkit support image changes
Follow these steps to revert the migration toolkit support image patches:  
The following steps only apply to Cloud Pak for Data version 4.8.7:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and asset-file pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, run the following two steps to re-sync the processes and ensure that imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalogs that the migration assets were imported into, or press Enter to select all catalogs for re-sync.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is done when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or logs:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
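Rather than polling, you can block until both jobs finish with oc wait. The following is a minimal sketch, not part of the product tooling: the oc command is injectable so the function can be exercised offline, and the 6h timeout is an assumption; large migrations may need a longer value.

```shell
# Sketch: wait for both re-sync jobs to reach the Complete condition.
# The oc command is injectable (defaults to oc); --timeout=6h is an assumed value.
wait_for_resync_jobs() {
  local oc_cmd=${1:-oc}
  for job in wkc-search-reindexing-job wkc-search-lineage-job; do
    "$oc_cmd" wait --for=condition=complete "job/$job" \
      -n "${WKC_NAMESPACE}" --timeout=6h || return 1
  done
  echo "both re-sync jobs completed"
}
```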
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
 
 
 
Patch nameLegacy migration toolkit and IIS patches
Released on13 December 2024
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
 This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
        "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> is the pod name for any of the images that will be patched by this hotfix.
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command, with the appropriate auth.json file, to copy the patch images from the IBM production registry to the local private registry:

    
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:4b2680a4ad80b43b8873706bb49c1f8256513aeb28b1d20f7a6c0854b2446ea9 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:4b2680a4ad80b43b8873706bb49c1f8256513aeb28b1d20f7a6c0854b2446ea9
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:03185c8997eb1292ad6590baa8af745f924c6eb6d925c2a170df351d1b458f3a \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:03185c8997eb1292ad6590baa8af745f924c6eb6d925c2a170df351d1b458f3a
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:17914fde452acd33197dfc7b3abcd156851b8e9dc7efd88cb37053a5a1a0e904 \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:17914fde452acd33197dfc7b3abcd156851b8e9dc7efd88cb37053a5a1a0e904
                                                                                            skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                docker://cp.icr.io/cp/cpd/asset-files-api@sha256:0dbab987b768d26349d664db67382a658efd993e2d3ef05cc3dca96c05c3f345 \
                                                                                                docker://<local private registry>/cp/cpd/asset-files-api@sha256:0dbab987b768d26349d664db67382a658efd993e2d3ef05cc3dca96c05c3f345
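All of the skopeo invocations above follow one pattern, so they can be generated from a list of image@digest references. The sketch below is a convenience script, not part of the official instructions: AUTH_FILE and PRIVATE_REGISTRY are placeholders you must set for your environment, and with DRY_RUN=1 (the default here) the commands are only printed, not executed.

```shell
# Sketch: generate the skopeo copy commands from a list of image@digest refs.
# AUTH_FILE and PRIVATE_REGISTRY are placeholders -- set them for your environment.
AUTH_FILE="${AUTH_FILE:-$HOME/.airgap/auth.json}"
PRIVATE_REGISTRY="${PRIVATE_REGISTRY:-registry.example.com:5000}"
DRY_RUN="${DRY_RUN:-1}"   # 1 = only print the commands

IMAGES="
is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
legacy-migration@sha256:4b2680a4ad80b43b8873706bb49c1f8256513aeb28b1d20f7a6c0854b2446ea9
catalog-api-aux_master@sha256:03185c8997eb1292ad6590baa8af745f924c6eb6d925c2a170df351d1b458f3a
catalog_master@sha256:17914fde452acd33197dfc7b3abcd156851b8e9dc7efd88cb37053a5a1a0e904
asset-files-api@sha256:0dbab987b768d26349d664db67382a658efd993e2d3ef05cc3dca96c05c3f345
"

count=0
for ref in $IMAGES; do
  count=$((count+1))
  cmd="skopeo copy --all --authfile \"$AUTH_FILE\" --dest-tls-verify=false --src-tls-verify=false docker://cp.icr.io/cp/cpd/$ref docker://$PRIVATE_REGISTRY/cp/cpd/$ref"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"     # dry run: print the command only
  else
    eval "$cmd"     # real copy to the private registry
  fi
done
echo "generated $count skopeo commands"
```

Run once with DRY_RUN=1 to review the commands, then rerun with DRY_RUN=0 to perform the copies.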
                                                                                            
     
To complete the installation, follow the steps in the next section.

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89","tag_metadata":"b80-migration-b57"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307","tag_metadata":"b80-migration-b57"},"iis_services_image":{"name":"is-services-image@sha256","tag":"adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288","tag_metadata":"b80-migration-b57"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -n ${PROJECT_CPD_INST_OPERANDS} -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
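Rather than rerunning the `oc get` command by hand until reconciliation completes, the wait in steps 4 and 5 can be scripted as a small poll loop. This is a sketch: the `.status.iisStatus` jsonpath shown in the comment is an assumption — verify the actual status field on your custom resource — and the demo at the bottom substitutes a stub `echo` for `oc` so the loop itself can be exercised without a cluster.

```shell
# Sketch: poll a status command until its output contains the expected
# pattern, or give up after a number of retries.
# Usage: wait_for_completed CMD PATTERN RETRIES SLEEP_SECONDS
wait_for_completed() {
  cmd=$1; pattern=$2; retries=$3; delay=$4
  i=0
  while [ "$i" -lt "$retries" ]; do
    out=$(eval "$cmd" 2>/dev/null)
    case $out in
      *"$pattern"*) echo "status: $pattern"; return 0 ;;
    esac
    i=$((i+1))
    sleep "$delay"
  done
  echo "timed out waiting for: $pattern"
  return 1
}

# Real use (the status field is an assumption -- check your CR):
#   wait_for_completed "oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} -o jsonpath='{.status.iisStatus}'" Completed 120 30
# Demo with a stub command standing in for oc:
wait_for_completed "echo Completed" Completed 3 1
```

The same helper works for the CCS reconciliation wait later in this document by swapping in the CCS status command.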
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Only proceed to the section Upgrade the cluster to 4.8.x after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.x
 
Note: Do not revert the images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry. See the download section earlier in this document.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"catalog_api_aux_image":"sha256:03185c8997eb1292ad6590baa8af745f924c6eb6d925c2a170df351d1b458f3a","catalog_api_image":"sha256:17914fde452acd33197dfc7b3abcd156851b8e9dc7efd88cb37053a5a1a0e904", "asset_files_api_image":"sha256:0dbab987b768d26349d664db67382a658efd993e2d3ef05cc3dca96c05c3f345"}}}'
     
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200,
    run the following command to increase the limit, where <project_limit> is the new project limit:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
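The get-then-delete steps above can be combined into one pipeline. The filtering half is sketched below against a canned pod listing so it can be seen working without a cluster; the real one-liner, ending in `xargs oc delete pod`, is shown commented out — try it in a test namespace first.

```shell
# Sketch: select the ngp-projects-api pod names from a pod listing.
# The canned listing stands in for:
#   oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name
# (the pod-name suffixes below are made up for the demo)
listing="POD
ngp-projects-api-6c9f7c5d9b-abcde
ngp-projects-api-6c9f7c5d9b-fghij
catalog-api-7d8f9c5b4-xyz12"

pods=$(printf '%s\n' "$listing" | grep '^ngp-projects-api')
printf '%s\n' "$pods"

# Real restart in one pass (deleting each pod causes it to be recreated):
#   oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name \
#     | grep '^ngp-projects-api' \
#     | xargs -r oc delete pod -n ${PROJECT_CPD_INST_OPERANDS}
```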
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scaleConfig size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. Increase the Java heap size:
    1. Change the maximum Java heap size of the iis-services pod by editing its config map:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the maximum heap size to 16 GB for mid-size and large clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod, for example with oc rsh <iis-services-pod-name>.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi


    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that the worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.
    If CPU or memory usage exceeds 80 percent, continue with the following steps.

  8. Choose the node that has the most free memory and CPU. In this example, worker3 has the most free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all the other worker nodes, except for the node with the free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This schedules the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
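Steps 6 through 10 above amount to: find the worker node with the most headroom, cordon the rest, delete the iis-services pod so it lands there, then cordon the target and uncordon the others. The node-selection part can be sketched with awk over `oc adm top nodes`-style output. The sample data below is canned (made-up utilization numbers), and the cordon commands are only echoed as a dry run — swap the `echo` for the real command once you have confirmed the target.

```shell
# Canned sample standing in for: oc adm top nodes
top="NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
worker1   3500m        87%    52000Mi         85%
worker2   3400m        85%    50000Mi         82%
worker3   1200m        30%    20000Mi         33%
worker4   3600m        90%    55000Mi         88%"

# Pick the node with the lowest combined CPU% + MEMORY% (most headroom).
target=$(printf '%s\n' "$top" | awk 'NR>1 {gsub(/%/,""); load=$3+$5; if (best=="" || load<bestload) {best=$1; bestload=load}} END {print best}')
echo "target: $target"

# Dry run: print the cordon command for every other node.
printf '%s\n' "$top" | awk -v t="$target" 'NR>1 && $1!=t {print $1}' | while read -r node; do
  echo "oc adm cordon $node"
done
```

After the iis-services pod is rescheduled onto the target node, reverse the process as in step 10: cordon the target and uncordon the rest.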
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the PJM resource limit through an RSI patch:
    1. Create a file named specpatch.json and save it in the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or in the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the rsi directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is a spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. The call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the output of oc get wkc wkc-cr -o yaml, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
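The specpatch.json file used in the RSI steps above is easy to mangle when editing by hand, and a malformed file only surfaces when create-rsi-patch fails. A quick local sanity check is sketched below; python3 availability on the workstation is an assumption, and /tmp/specpatch.json is just a scratch path for the check, not the required location.

```shell
# Sketch: write the small-cluster specpatch.json and sanity-check it locally.
cat > /tmp/specpatch.json <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF

# Validate the JSON if python3 is available (assumption); fails loudly otherwise.
if command -v python3 >/dev/null 2>&1; then
  python3 -m json.tool /tmp/specpatch.json >/dev/null && echo "JSON OK"
fi

# The small-cluster patch should contain exactly three replace operations.
ops=$(grep -o '"op":"replace"' /tmp/specpatch.json | wc -l)
echo "replace ops: $ops"
```

Copy the verified file into the workspace rsi directory before running create-rsi-patch; the medium-cluster variant differs only in the ephemeral-storage value.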
 
For medium-sized clusters
  1. Increase the PJM resource limit through an RSI patch:
    1. Create a file named specpatch.json and save it in the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or in the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the rsi directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
                                                                                                                                      
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is a spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. The call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                    
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr --type merge --patch '{"spec": {
        "bg_resources":{
            "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral_storage": "50Mi"},
            "limits":{"cpu": "2", "memory": "4Gi", "ephemeral_storage": "2Gi"}
        }
    }}'
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section in the output of oc get wkc wkc-cr -o yaml:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager replicas:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration.  
Or continue with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you will need to revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster beforehand. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster beforehand. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
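To confirm the revert took effect, the removed fields can be checked in the CCS CR. The following is only a sketch, not an official step: the oc CLI is stubbed with a sample spec so the check runs standalone; on a real cluster, delete the stub function so the actual oc client is used.

```shell
# Sketch: verify that the migration tuning fields are gone from the CCS CR.
# `oc` is stubbed below with a sample spec; remove the stub on a real cluster.
oc() { printf 'spec:\n  size: small\n'; }

if oc get ccs ccs-cr -o yaml | grep -qE 'catalog_api_jvm_args_extras|couchdb_search_resources'; then
  echo "revert incomplete: tuning fields still present"
else
  echo "tuning fields removed"
fi
```

If the first message appears, re-run the oc patch removal commands above before proceeding.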
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.  
    Note: Double-check whether any of the following parameters were already customized on the cluster beforehand. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster beforehand. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch wkc wkc-cr --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
     
  5. Scale the portal-job-manager replicas back up by running:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes

If needed, the IIS image overrides can be removed by following the steps below; the migration toolkit itself does not need to be reverted. Note that ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
      tag_metadata: b80-migration-b57
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
      tag_metadata: b80-migration-b57
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
      tag_metadata: b80-migration-b57
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"}]'
                                                                                                    
     
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and asset-file pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
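The "after a period of time" wait can be turned into a simple polling loop. The sketch below is illustrative only: the pod-name filters come from the step above, and oc is stubbed with sample output so the loop logic is self-contained; remove the stub on a real cluster.

```shell
# Sketch: poll until no catalog-api or asset-file pod is in a non-Running state.
# `oc` is stubbed with sample output; delete the stub on a real cluster.
oc() {
  printf 'catalog-api-1   1/1  Running\nasset-files-api-1   1/1  Running\n'
}

while oc get pods -n "${PROJECT_CPD_INST_OPERANDS:-wkc}" \
    | grep -E 'catalog-api|asset-file' | grep -v Running | grep -q .; do
  echo "pods still settling; waiting..."
  sleep 30
done
echo "all matching pods are Running"
```

The loop exits as soon as every matching pod reports Running; adjust the grep filters if your pod names differ.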
 
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, run the following two steps to re-sync processes and ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take a while to complete, depending on how many assets were migrated.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) that the migration assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete once the related job pods reach the Completed state.
    5. You can check the re-sync progress through the related pod status or logs by running:

      oc get pod -n ${WKC_NAMESPACE} | grep wkc-search

      or:

      oc logs -n ${WKC_NAMESPACE} wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
 
 
 
Patch nameLegacy migration toolkit and IIS patches
Released on18 November 2024
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Using the appropriate auth.json file, use skopeo to copy the patch images from the IBM production registry to the local private OpenShift cluster registry:

    
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:cf2518f9fbc36100e677d007d10fe48c3be14551cf8494fc582cfec6362fc0ed \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:cf2518f9fbc36100e677d007d10fe48c3be14551cf8494fc582cfec6362fc0ed
                                                                                            
     
     
To complete the installation, follow the steps in the next section.

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89","tag_metadata":"b80-migration-b57"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307","tag_metadata":"b80-migration-b57"},"iis_services_image":{"name":"is-services-image@sha256","tag":"adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288","tag_metadata":"b80-migration-b57"}}}'
       
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Only proceed to the section Upgrade the cluster to 4.8.x after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.x
 
Note: For the IIS custom resource you should not revert the IIS images prior to performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If doing an air-gapped upgrade install, ensure that the legacy_migration image is downloaded to the local registry, as described in the air-gapped download section above.
  2. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  3. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
  4. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you will need to tune your cluster based on its scale configuration size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. For Java heap, run the following:
    1. Change the Java heap max size of the iis-services pod by running:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the size to 16 GB for mid-size and large-size clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi


        To:

        limits:
          cpu: "8"
          memory: 16Gi


         

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi


      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that the worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.  
    If CPU or memory usage is above 80 percent, continue with the following steps.

  8. Choose one node which has more free memory and CPU. In this example, worker3 has more free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all worker nodes except the node with free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This will schedule the iis-services pod on to worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
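With these backups in place, you can later diff the live CRs against them to see exactly what the tuning changed. The sketch below is illustrative only: oc is stubbed and a stand-in ccs.bak.yaml is written so the comparison runs standalone; on a real cluster, delete the stub and use the backup taken above.

```shell
# Sketch: diff the live CCS CR against the saved backup.
# `oc` is stubbed with a sample spec; delete the stub on a real cluster.
oc() { printf 'spec:\n  size: small\n'; }
printf 'spec:\n  size: small\n' > ccs.bak.yaml   # stand-in for the real backup

if oc get ccs ccs-cr -o yaml | diff ccs.bak.yaml - >/dev/null; then
  echo "no drift from backup"
else
  echo "CR differs from backup"
fi
```

The same comparison works for the WKC CR with wkc.bak.yaml.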
For small-sized clusters
  1. Increase the PJM resource limit through an RSI patch:
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set by the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory first if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If so, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also inspect the portal-job-manager pod (not the deployment) and verify that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the output of oc get wkc wkc-cr -o yaml, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
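A malformed specpatch.json makes the create-rsi-patch call fail, so it can be worth validating the file locally before running the patch. A minimal sketch, assuming python3 is available on the workstation; the resource values mirror the small-cluster limits verified above:

```shell
# Recreate the JSON Patch payload for the PJM RSI patch and check that it parses.
# Values mirror the limits confirmed above (cpu 2, memory 8Gi, ephemeral-storage 8Gi).
mkdir -p cpd-cli-workspace/olm-utils-workspace/work/rsi
cat > cpd-cli-workspace/olm-utils-workspace/work/rsi/specpatch.json <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF
# json.tool pretty-prints the file, or exits non-zero if the JSON is malformed.
python3 -m json.tool cpd-cli-workspace/olm-utils-workspace/work/rsi/specpatch.json
```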
 
For medium-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Then copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call might fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr --type merge --patch '{"spec": {
      "bg_resources":{
        "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral_storage": "50Mi"},
        "limits":{"cpu": "2", "memory": "4Gi", "ephemeral_storage": "2Gi"}
      }
    }}'
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also inspect the portal-job-manager pod (not the deployment) and verify that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the output of oc get wkc wkc-cr -o yaml, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
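In the get-rsi-patch-info check (step 5.2), the value to confirm is the PATCH_INJECTION_STATUS column. A minimal sketch that pulls it out of captured output with awk; the sample text is the example output shown above, and treating the status as the third whitespace-separated column is an assumption based on that layout:

```shell
# Sample output from:
#   ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling
sample='PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running'
# PATCH_INJECTION_STATUS is the third column; it should read True once the patch is injected.
status=$(printf '%s\n' "$sample" | awk 'NR==2 {print $3}')
echo "injection status: $status"
```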
Next steps
 
Once you've completed patching and downloading the migration toolkit: if you are still preparing and testing the migration, continue with the steps outlined in Applying required Version 4.6 patches; if you are running the migration, continue with the steps outlined in Applying Version 4.8 patches.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Check whether any of the following parameters were already customized for your cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Check whether any of the following parameters were already customized for your cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
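The inline JSON-patch payloads in steps 2 and 3 are easy to mistype. As an alternative, a minimal sketch that generates the CCS remove-ops list with python3; the paths are taken from the command above, and the output file name ccs-revert-patch.json is an arbitrary choice:

```shell
# Generate the remove-ops payload for the CCS CR revert instead of typing it inline.
python3 - <<'EOF' > ccs-revert-patch.json
import json

paths = [
    "/spec/catalog_api_jvm_args_extras",
    "/spec/catalog_api_properties_enable_activity_tracker_publishing",
    "/spec/catalog_api_properties_enable_global_search_publishing",
    "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing",
    "/spec/catalog_api_properties_global_call_logs",
    "/spec/couchdb_search_resources",
]
print(json.dumps([{"op": "remove", "path": p} for p in paths]))
EOF
cat ccs-revert-patch.json
```

The generated file can then be applied with oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p "$(cat ccs-revert-patch.json)".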
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.
    Note: Check whether any of the following parameters were already customized for your cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Check whether any of the following parameters were already customized for your cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch wkc wkc-cr --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
     
  5. Scale the PJM replicas back up by scaling up the portal-job-manager deployment:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

Follow these steps to revert the IIS image patches.

If needed, the IIS image overrides can be removed as described below; the migration toolkit itself does not need to be reverted. Note that ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
      tag_metadata: b80-migration-b57
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
      tag_metadata: b80-migration-b57
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
      tag_metadata: b80-migration-b57
     
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
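After step 2, a quick way to confirm that no patch overrides remain is to count occurrences of the patch tag metadata in the CR dump. A minimal sketch over a captured fragment; the sample lines reproduce the conductor override from step 2, and against a live cluster you would pipe the output of oc get iis iis-cr -o yaml into the same filter:

```shell
# Fragment of the IIS CR as it looks before the revert (from step 2 above).
cr_fragment='iis_en_conductor_image:
  name: is-engine-image@sha256
  tag: sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
  tag_metadata: b80-migration-b57'
# A non-zero count means patch overrides are still present; after the revert it should be 0.
printf '%s\n' "$cr_fragment" | grep -c 'tag_metadata: b80-migration-b57'
```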
 
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, complete the following two steps to re-sync processes and ensure that imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete, depending on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. You can specify the catalog(s) into which the migration assets were imported, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or logs by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
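The progress check in step 1.5 amounts to watching for both job pods to reach the Completed state. A minimal sketch over captured output; the pod names and listing below are illustrative, not from a real cluster:

```shell
# Illustrative pod listing; in practice capture it with:
#   oc get pod -n $WKC_NAMESPACE | grep wkc-search
pods='wkc-search-reindexing-job-abcde   0/1   Completed   0   30m
wkc-search-lineage-job-fghij      0/1   Running     0   30m'
# Count pods whose STATUS column reads Completed; re-sync is done when both jobs are.
done_count=$(printf '%s\n' "$pods" | awk '$3 == "Completed" {n++} END {print n + 0}')
echo "completed re-sync jobs: $done_count"
```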
 
 
 
 
Patch nameLegacy migration toolkit and IIS patches
Released on8 October 2024
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x
Install instructions

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
Downloading the legacy migration and IIS patch images in an air-gapped environment
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
        "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path in the local private registry to which the hotfix images must be copied, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images that will be patched with this hotfix.
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use skopeo with the appropriate auth.json file to copy the patch images from the IBM production registry to the local private registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:da2a9f0da037f6eaa2135c7e6ba18c1b1de7586bc749756ecf60a9d743b6dab9 \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:da2a9f0da037f6eaa2135c7e6ba18c1b1de7586bc749756ecf60a9d743b6dab9
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:33b5591c0a0945e84e55139e6fe8f7e6753250f62d74af2a0d3d3acf89c28a52 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:33b5591c0a0945e84e55139e6fe8f7e6753250f62d74af2a0d3d3acf89c28a52
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/asset-files-api@sha256:9a335c0c7e571f3ab37e48298dbb287fd9640aa0aeddeb1a6115048df948a4c1
                                                                                                docker://<local private registry>/cp/cpd/asset-files-api@sha256:9a335c0c7e571f3ab37e48298dbb287fd9640aa0aeddeb1a6115048df948a4c1
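The repeated skopeo invocations above follow one pattern and can be scripted. The sketch below is a dry run: it only prints one skopeo copy command per image digest, so it can be reviewed before anything is mirrored. The LOCAL_REGISTRY value and AUTH_FILE path are placeholders you must set for your environment; only a subset of the digests is listed for brevity.

```shell
#!/bin/sh
# Dry-run sketch: print one "skopeo copy" command per image digest.
# LOCAL_REGISTRY and AUTH_FILE are assumptions; set them for your environment.
LOCAL_REGISTRY="registry.example.com:5000"
AUTH_FILE="${HOME}/.airgap/auth.json"

# Image name@digest pairs from the list above (subset shown for brevity).
IMAGES="
is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
portal-job-manager@sha256:da2a9f0da037f6eaa2135c7e6ba18c1b1de7586bc749756ecf60a9d743b6dab9
legacy-migration@sha256:33b5591c0a0945e84e55139e6fe8f7e6753250f62d74af2a0d3d3acf89c28a52
"

for img in $IMAGES; do
  # Drop the surrounding echo/quotes to actually run the mirror copies.
  echo "skopeo copy --all --authfile $AUTH_FILE --dest-tls-verify=false --src-tls-verify=false docker://cp.icr.io/cp/cpd/${img} docker://${LOCAL_REGISTRY}/cp/cpd/${img}"
done
```

Printing first and executing second makes it easy to diff the generated commands against the documented list before touching the registry.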
To complete the installation, follow the steps in the next section.

Applying the legacy migration and IIS patch images using the online IBM registry
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89","tag_metadata":"b80-migration-b57"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307","tag_metadata":"b80-migration-b57"},"iis_services_image":{"name":"is-services-image@sha256","tag":"adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288","tag_metadata":"b80-migration-b57"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
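Because the patches pin images by digest, one way to confirm that a pod picked up the patched image is to compare the digest portion of its image reference with the expected value. The helper below is a minimal sketch; the image string is hard-coded so it can be tried offline, but in practice it would come from something like `oc get pod <pod> -n ${PROJECT_CPD_INST_OPERANDS} -o jsonpath='{.spec.containers[0].image}'` (the pod name is yours to fill in).

```shell
#!/bin/sh
# Sketch: check that the sha256 digest in an image reference matches an
# expected value, using only POSIX parameter expansion.

digest_matches() {
  # $1 = full image reference, $2 = expected digest (sha256:...)
  # ${1##*@} strips everything up to and including the last '@'.
  [ "${1##*@}" = "$2" ]
}

# Hard-coded sample; in practice, read this from the running pod with oc.
image="cp.icr.io/cp/cpd/is-services-image@sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288"
expected="sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288"

if digest_matches "$image" "$expected"; then
  echo "digest OK"        # prints: digest OK
else
  echo "digest MISMATCH"
fi
```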
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade the cluster to 4.8.x after you have confirmed that your system is ready for migration.
Upgrade the cluster to 4.8.x
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
After upgrading to 4.8.x, proceed with the following commands.
  1. If you are doing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:da2a9f0da037f6eaa2135c7e6ba18c1b1de7586bc749756ecf60a9d743b6dab9","catalog_api_aux_image":"sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7","catalog_api_image":"sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce", "asset_files_api_image":"sha256:9a335c0c7e571f3ab37e48298dbb287fd9640aa0aeddeb1a6115048df948a4c1"}}}'
  3. If the number of Data Quality projects on the source environment plus the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200,
    run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
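The two commands in step 5 can be combined into a small loop. The sketch below is a dry run over sample pod names; in practice PODS would be populated from the `oc get pods ... | grep ngp-projects-api` command above, and the echo would be removed to actually delete (and thereby restart) the pods. The namespace default of `wkc` is an assumption.

```shell
#!/bin/sh
# Dry-run sketch: restart every ngp-projects-api pod by deleting it.
# Sample names stand in for the live output of:
#   oc get pods -n ${PROJECT_CPD_INST_OPERANDS} \
#     -o custom-columns=POD:.metadata.name | grep ngp-projects-api
PODS="ngp-projects-api-5f9c7-abcde
ngp-projects-api-5f9c7-fghij"

NAMESPACE="${PROJECT_CPD_INST_OPERANDS:-wkc}"   # assumption: default project name

for pod in $PODS; do
  # Remove 'echo' to actually delete the pods; the deployment recreates them.
  echo oc delete pod "$pod" -n "$NAMESPACE"
done
```

Deleting the pods one at a time (rather than scaling the deployment down) keeps at least one replica serving while the others restart.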
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scale configuration (scaleConfig) size.
 

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. For Java heap, run the following:
    1. Change the Java maximum heap size of the iis-services pod:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the maximum heap size to 16 GB for mid-size and large-size clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi
    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that the worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.
    If either CPU or memory usage exceeds 80 percent, continue with the following steps.

  8. Choose the node with the most free memory and CPU. In this example, worker3 has the most free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all other nodes using the following command, except for the worker node with the free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This schedules the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
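The node-steering sequence in steps 8 through 10 can be sketched as a short script. The version below is a dry run that only prints the commands; NODES and TARGET are example values from the worker3/worker4 scenario above, and the echoes would be removed to run against a live cluster.

```shell
#!/bin/sh
# Dry-run sketch of the cordon/uncordon sequence around the iis-services pod.
NODES="worker1 worker2 worker3 worker4"
TARGET="worker3"   # node with the most free CPU and memory (example value)

# Phase 1: cordon every node except the target, then delete the pod so the
# scheduler is forced to place it on the target.
for n in $NODES; do
  [ "$n" = "$TARGET" ] || echo "oc adm cordon $n"
done
echo "oc delete pod iis-services-xxxxx"

# Phase 2: once the pod is running on the target, cordon the target so no
# other pod lands there, and uncordon the rest.
echo "oc adm cordon $TARGET"
for n in $NODES; do
  [ "$n" = "$TARGET" ] || echo "oc adm uncordon $n"
done
```

Note that phase 2 must wait until the rescheduled iis-services pod is actually Running on the target node, so in a real script you would poll `oc get pods -o wide` between the two phases.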
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the portal-job-manager (PJM) resource limit through an RSI patch.
    1. Create a file named specpatch.json under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set with the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
 
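The specpatch.json used above is a JSON Patch (RFC 6902) array, and a malformed file can make create-rsi-patch fail in unobvious ways, so it can be worth validating the file locally first. The sketch below writes the small-cluster patch to a temporary path and checks that it parses; the /tmp/specpatch.json path and the use of python3 for validation are assumptions (jq would work equally well).

```shell
#!/bin/sh
# Write the small-cluster RSI spec patch and verify that it is valid JSON.
cat > /tmp/specpatch.json <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF

# json.tool exits non-zero on a parse error, so this catches typos early.
if python3 -m json.tool /tmp/specpatch.json > /dev/null; then
  echo "specpatch.json is valid JSON"
else
  echo "specpatch.json is NOT valid JSON"
fi
```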
For medium-sized clusters
  1. Increase the portal-job-manager (PJM) resource limit through an RSI patch.
    1. Create a file named specpatch.json under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set with the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Patch the WKC CR/Glossary CR to increase the resource limit for the glossary service:

    oc patch wkc wkc-cr --type merge --patch '{"spec": {
      "bg_resources":{
        "requests":{"cpu": "250m", "memory": "512Mi", "ephemeral_storage": "50Mi"},
        "limits":{"cpu": "2", "memory": "4Gi", "ephemeral_storage": "2Gi"}
      }
    }}'
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale the replicas down by first scaling down the portal-job-manager:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
Next steps
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.8 patches if you are running the migration.
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you will need to revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized for the cluster before the migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized for the cluster before the migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
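The two remove patches above share the same JSON Patch shape. A minimal sketch (the `build_remove_patch` helper is hypothetical, POSIX sh assumed) that assembles such a patch from a list of spec paths, so the field list can be audited before running `oc patch`:

```shell
#!/bin/sh
# Sketch: assemble a JSON Patch "remove" document from a list of spec
# paths, matching the shape of the oc patch commands in this section.
build_remove_patch() {
  # args: one or more JSON-pointer paths to remove
  patch="["
  sep=""
  for p in "$@"; do
    patch="${patch}${sep}{ \"op\": \"remove\", \"path\": \"${p}\" }"
    sep=","
  done
  printf '%s]' "$patch"
}

# Paths intentionally unquoted below so they split into separate arguments.
CCS_PATHS="/spec/catalog_api_jvm_args_extras /spec/couchdb_search_resources"
PATCH=$(build_remove_patch $CCS_PATHS)
echo "$PATCH"
# The result can then be passed to:
#   oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p "$PATCH"
```

This keeps the list of fields to remove in one place instead of inside a long one-line patch string.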
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. The effect of this command takes place after CCS is taken out of maintenance mode in step 6.
    Note: Double-check whether any of the following parameters were already customized for the cluster before the migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized for the cluster before the migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Remove the glossary service resource limit changes:

    oc patch wkc wkc-cr --type json -p '[{"op": "remove", "path": "/spec/bg_resources" }]'
  5. Scale the PJM replicas back up by scaling up the portal-job-manager deployment:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed by following the steps below. Note: The migration toolkit does not need to be reverted. In these steps, ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:f1e6f8c3d3cce79625b9981b3a10cf13940aaa0b9f842f8cf45e0d22f12b1f89
      tag_metadata: b80-migration-b57
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:84c3a660585e3a92656bfbbe01b16dfa4dc1a6633605aa807096ddd9078bc307
      tag_metadata: b80-migration-b57
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:adb73b46d54f5da15eae8713f6f812d712212e23a905633c23431e5b13fb6288
      tag_metadata: b80-migration-b57
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
Follow these steps to revert the migration toolkit support image patches:
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/portal_job_manager_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/asset_files_api_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api, portal-job-manager and asset-files pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original pod images.
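Rather than re-running `oc get` by hand, reconciliation status can be polled in a loop. A hedged sketch; the `wait_for_status` helper is hypothetical, and the `ccsStatus` jsonpath in the comment is an assumption that should be checked against your CR's actual status section first:

```shell
#!/bin/sh
# Sketch: poll a command until it prints the expected status value,
# up to a maximum number of attempts.
wait_for_status() {
  # $1 = max attempts, $2 = expected value, $3... = command printing the status
  tries="$1"; want="$2"; shift 2
  i=0
  while [ "$i" -lt "$tries" ]; do
    got=$("$@")
    [ "$got" = "$want" ] && return 0
    i=$((i + 1))
    sleep 1   # real clusters may need 30-60 seconds between polls
  done
  return 1
}
# Example cluster invocation (not run here; status field is an assumption):
#   wait_for_status 60 Completed \
#     oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} \
#       -o jsonpath='{.status.ccsStatus}'
```

The command to poll is passed as arguments, so the same loop works for the CCS, WKC, and IIS custom resources.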
Re-sync processes
After migrating all legacy assets into the upgraded 4.8.5 cluster, complete the following two steps to re-sync processes and ensure that imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalogs that the migrated assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalogs
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalogs
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the logs:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
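The completion check on the two job pods can be scripted. A small sketch that decides completion from `oc get pod` style output; the `all_resync_done` helper is hypothetical:

```shell
#!/bin/sh
# Sketch: succeed only when every wkc-search job pod listed on stdin is
# in the Completed state. Caveat: if no wkc-search pods exist at all,
# this also succeeds, so confirm the jobs were created first.
all_resync_done() {
  # stdin: `oc get pod` style lines
  pending=$(grep 'wkc-search' | grep -cv 'Completed')
  [ "$pending" -eq 0 ]
}
# Cluster usage (not run here):
#   oc get pod -n $WKC_NAMESPACE | all_resync_done && echo "re-sync finished"
```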
 
 
Cloud Pak for Data 4.6.x patches (upgrades to Cloud Pak for Data 4.8.5 version 3)
 
 
Patch nameLegacy migration toolkit and IIS patches
Released on3 July 2024
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.x
Install instructions
 Download patch legacy-migration-patch_508.

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command to copy the patch images from the IBM production registry to the local private registry. Using the appropriate auth.json file, copy the patch images from the IBM production registry to the Openshift cluster registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:e955b2258c9a98a33ba07d00603a6f3614c93a78d5a7011c0cf6763ae48b9916 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:e955b2258c9a98a33ba07d00603a6f3614c93a78d5a7011c0cf6763ae48b9916
To complete the installation, follow the steps in the next section.
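Each skopeo copy above uses the same digest on both sides of the transfer. A sketch that derives the destination reference from the source so digests are never retyped; the `mirror_ref` helper and the example registry hostname are hypothetical:

```shell
#!/bin/sh
# Sketch: derive the local-registry destination for a skopeo copy from
# the cp.icr.io source reference by swapping only the registry host.
mirror_ref() {
  # $1 = source image reference, $2 = private registry hostname
  echo "$1" | sed "s|^cp\.icr\.io|$2|"
}

SRC="cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59"
DST=$(mirror_ref "$SRC" registry.example.com:5000)  # hostname is illustrative
echo "$DST"
# skopeo copy --all --authfile "<folder path>/auth.json" \
#     --dest-tls-verify=false --src-tls-verify=false \
#     docker://"$SRC" docker://"$DST"
```

Looping this helper over the seven image references keeps each copy's source and destination digests guaranteed identical.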

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2","tag_metadata":"b71-migration-b54"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551","tag_metadata":"b71-migration-b54"},"iis_services_image":{"name":"is-services-image@sha256","tag":"a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59","tag_metadata":"b71-migration-b54"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
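To confirm that a pod actually picked up a patched image, its running digest can be compared against the expected one. A sketch with a hypothetical `digest_of` helper; the `RUNNING` value below is a sample, and on a cluster it would come from `oc get pod ... -o jsonpath` as shown in the comment:

```shell
#!/bin/sh
# Sketch: compare a pod's running image digest with the expected patch
# digest. Expected value is the is-services-image digest from this patch.
digest_of() {
  # $1 = full image reference; prints the portion after the "@"
  echo "$1" | sed 's/.*@//'
}

EXPECTED="sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59"
# On a cluster, RUNNING would be obtained with something like:
#   RUNNING=$(oc get pod <iis-services-pod> -n ${PROJECT_CPD_INST_OPERANDS} \
#     -o jsonpath='{.spec.containers[0].image}')
RUNNING="cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59"
if [ "$(digest_of "$RUNNING")" = "$EXPECTED" ]; then
  echo "patched image is running"
fi
```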
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.8.5 after you have confirmed that your system is ready for migration.
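Before invoking the installer, a quick pre-flight check can catch a missing file or a forgotten chmod. A minimal sketch; the `preflight` helper is hypothetical:

```shell
#!/bin/sh
# Sketch: verify the install script exists and is executable before
# running it, so failures surface with a clear message.
preflight() {
  # $1 = path to the install script
  [ -f "$1" ] || { echo "missing: $1"; return 1; }
  [ -x "$1" ] || { echo "not executable, run: chmod +x $1"; return 1; }
  echo "ok"
}
# In the extracted legacy-migration-patch folder:
preflight ./install-legacy-migration-config-spec.sh || true
```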
 
Upgrade the cluster to 4.8.x
 
Note: For the IIS custom resource, do not revert the IIS images prior to performing the upgrade.
 
After upgrading to 4.8.x, proceed with the following commands.
  1. If doing an air-gapped upgrade install, ensure that the legacy_migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c","catalog_api_aux_image":"sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7","catalog_api_image":"sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce"}}}'
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects,
    run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
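The two-step restart above (list the pods, then delete each) can be combined into one loop. A sketch with a hypothetical `restart_ngp_pods` helper; `OC` is overridable so the loop can be previewed without touching a cluster:

```shell
#!/bin/sh
# Sketch: restart every ngp-projects-api pod in a namespace in one loop.
# OC defaults to the real CLI but can be overridden for a dry run.
OC="${OC:-oc}"
restart_ngp_pods() {
  # $1 = namespace; lists pod names via $OC and deletes each match
  for pod in $("$OC" get pods -n "$1" -o custom-columns=POD:.metadata.name \
      | grep ngp-projects-api); do
    "$OC" delete pod -n "$1" "$pod"
  done
}
# Cluster usage: restart_ngp_pods ${PROJECT_CPD_INST_OPERANDS}
```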
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you will need to tune your cluster based on its scale configuration size.

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. For Java heap, run the following:
    1. Change the java heap max size of the iis-services pod by:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the heap size to 16 GB for mid-size and large clusters.
  4. Max objects in memory:
    1. Login to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are below 80 percent, leave everything as is.
    If either exceeds 80 percent, continue with the following steps.

  8. Choose one node which has more free memory and CPU. In this example, worker3 has more free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all other nodes using the following command, except for the worker node with the free memory and CPU:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This will schedule the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
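Steps 8 through 10 can be scripted end to end. A hedged sketch; the worker names and the `pin_pod_to_node` helper are illustrative, and running with OC=echo first previews the commands without changing the cluster:

```shell
#!/bin/sh
# Sketch of steps 8-10: cordon every worker except the target, evict the
# iis-services pod so the scheduler places it on the target, then swap
# the cordons back so only the target stays cordoned.
OC="${OC:-oc}"
pin_pod_to_node() {
  # $1 = target node, $2 = pod to evict, remaining args = all worker nodes
  target="$1"; pod="$2"; shift 2
  for node in "$@"; do
    [ "$node" = "$target" ] || "$OC" adm cordon "$node"
  done
  "$OC" delete pod "$pod"
  # In practice, wait until the pod is Running on the target before this:
  "$OC" adm cordon "$target"
  for node in "$@"; do
    [ "$node" = "$target" ] || "$OC" adm uncordon "$node"
  done
}
# Dry run:
#   OC=echo pin_pod_to_node worker3 iis-services-xxxxx worker1 worker2 worker3 worker4
```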
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the PJM resource limit through an RSI patch:
    1. Create a file named specpatch.json and save it in the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or in $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory via the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
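Before enabling the patch, it can be worth confirming that the file is valid JSON, since a malformed patch spec only fails later inside the container. A small offline sketch, assuming python3 is available on the workstation:

```shell
# Write the spec patch, then verify it parses and contains the three
# expected "replace" operations before handing it to cpd-cli.
cat > specpatch.json <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF
python3 - <<'EOF'
import json
ops = json.load(open("specpatch.json"))
assert len(ops) == 3 and all(o["op"] == "replace" for o in ops), ops
print("specpatch.json OK")
EOF
```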
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists from a previous run, because that container does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.
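To avoid the failure in the first place, you can check for a leftover container before running create-rsi-patch. A sketch; `stop_if_running` is a hypothetical helper, not a cpd-cli or podman feature:

```shell
#!/bin/sh
# Stop the named container only if podman currently lists it as running;
# otherwise do nothing, so the check is safe to run unconditionally.
stop_if_running() {
  name="$1"
  if podman ps --format '{{.Names}}' 2>/dev/null | grep -qx "$name"; then
    podman stop "$name"
  fi
}

stop_if_running olm-utils-play-v2
```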

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double check the portal-job-manager pod (not deployment), and make sure new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
                                                                                                                                                      
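Rather than eyeballing the describe output, the limits can be pulled straight from the pod JSON. An offline sketch of the check, assuming python3 is available; pjm.json is a placeholder file standing in for the output of `oc get pod <portal-job-manager pod> -o json` on a live cluster:

```shell
# Sample pod JSON standing in for `oc get pod <pjm-pod> -o json` output.
cat > pjm.json <<'EOF'
{"spec":{"containers":[{"resources":{"limits":{"cpu":"2","memory":"8Gi","ephemeral-storage":"8Gi"}}}]}}
EOF
# Assert the first container carries exactly the limits set by the RSI patch.
python3 - <<'EOF'
import json
limits = json.load(open("pjm.json"))["spec"]["containers"][0]["resources"]["limits"]
assert limits == {"cpu": "2", "memory": "8Gi", "ephemeral-storage": "8Gi"}, limits
print("PJM limits OK")
EOF
```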
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
 
For medium-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the olm-utils-workspace/work/rsi/ directory first if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
                                                                                                                                      
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
                                                                                                                                      
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
                                                                                                                                      
    4. The above call may fail if an olm-utils-play-v2 container already exists from a previous run, because that container does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
                                                                                                                    
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                    
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double check the portal-job-manager pod (not deployment), and make sure new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  5. Scale the PJM replicas down:
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
Next steps
 
Once you have completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for migration

Note: After the migration completes, or if the migration fails, you need to revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters had been customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters had been customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 5.
    Note: Double-check whether any of the following parameters had been customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters had been customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Scale the PJM replicas back up by scaling the portal-job-manager deployment:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  5. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
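The ordering of the revert steps matters (deactivate RSI and remove the CR patches first, scale up, and only then leave maintenance mode so CCS reconciles once). A sketch that captures that ordering as a reviewable script; `revert_cmds` is a hypothetical helper that only prints the commands (the longer CCS json-remove patch from step 2 is elided here), so you can inspect the output before piping it to sh:

```shell
#!/bin/sh
# Namespace where WKC is installed; defaults to "wkc" for this sketch.
NS="${WKC_NAMESPACE:-wkc}"

# Print the revert commands in the order the steps above prescribe.
revert_cmds() {
  echo "./cpd-cli manage create-rsi-patch --cpd_instance_ns=$NS --patch_name=pjm-scaling --state=inactive"
  echo "oc patch -n $NS wkc wkc-cr --type json -p '[{ \"op\": \"remove\", \"path\": \"/spec/wkc_data_rules_resources\" }]'"
  echo "oc scale -n $NS deployment portal-job-manager --replicas=3"
  echo "oc patch -n $NS ccs ccs-cr --type merge --patch '{\"spec\": {\"ignoreForMaintenance\": false}}'"
}

revert_cmds
```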
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed by following the steps below; the migration toolkit itself does not need to be reverted. Note that ${PROJECT_CPD_INST_OPERANDS} refers to the project where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      tag_metadata: b71-migration-b54
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
      tag_metadata: b71-migration-b54
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      tag_metadata: b71-migration-b54
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/portal_job_manager_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original pod images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, you need to run through the following two steps to re-sync the processes to ensure imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) that the migration assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the Catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the Catalog
    4. Let the two jobs run. The re-sync is done once the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
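Instead of polling the pod status manually, `oc wait` can block until both jobs finish. A sketch under the assumption that the jobs are standard Kubernetes Jobs created by the script above; `wait_for_resync` is a hypothetical helper and 24h is an arbitrary upper bound for large migrations:

```shell
#!/bin/sh
# Block until each re-sync job reports the Complete condition.
wait_for_resync() {
  for job in wkc-search-reindexing-job wkc-search-lineage-job; do
    oc wait --for=condition=complete "job/$job" -n "$WKC_NAMESPACE" --timeout=24h
  done
}

# Usage (requires cluster access): wait_for_resync
```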
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
 

Cloud Pak for Data 4.6.x (upgrades to Cloud Pak for Data 4.8.5 version 2)

Patch nameLegacy migration toolkit and IIS patches
Released onJune 2024
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.5
Install instructions
 Download the patch here.

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
    "auths": {
    "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
    "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
    }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
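From that Image line you can derive the repository path (everything before the digest) that needs to exist in your mirror. A small sketch using the sample line above; the awk/cut pipeline is illustrative, not part of the official procedure:

```shell
#!/bin/sh
# Sample line as captured from `oc describe pod ... | grep Image:`.
line='Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64'

# Strip the "Image:" label, then drop the @sha256 digest to get the repo path.
repo=$(echo "$line" | awk '{print $2}' | cut -d@ -f1)
echo "$repo"   # cp.icr.io/cp/cpd/is-services
```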
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command with the appropriate auth.json file to copy the patch images from the IBM production registry to the local private registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:c48ae7c6b342ec3279883e77238015d9b96263c91c5c0aafbdb148c9e78a2bf6 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:c48ae7c6b342ec3279883e77238015d9b96263c91c5c0aafbdb148c9e78a2bf6
To complete the installation, follow the steps in the next section.
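The seven skopeo copies above differ only in repository name and digest, so you may prefer a small wrapper that loops over them. The sketch below is a hypothetical helper of my own, not part of the official procedure: the copy_image function and the REGISTRY and AUTHFILE variables are assumptions you must adapt, and the echo makes each command a dry run (remove it to actually copy).

```shell
# Hypothetical dry-run wrapper around the skopeo copies above.
# REGISTRY and AUTHFILE are placeholders: set them for your environment.
REGISTRY="${REGISTRY:-<local private registry>}"
AUTHFILE="${AUTHFILE:-${HOME}/.airgap/auth.json}"

copy_image() {
  # $1 = repository under cp/cpd, $2 = sha256 digest
  src="docker://cp.icr.io/cp/cpd/${1}@sha256:${2}"
  dst="docker://${REGISTRY}/cp/cpd/${1}@sha256:${2}"
  # echo makes this a dry run; remove it to perform the copy
  echo skopeo copy --all --authfile "${AUTHFILE}" \
      --dest-tls-verify=false --src-tls-verify=false "$src" "$dst"
}

copy_image is-engine-image        4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
copy_image is-en-compute-image    983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
copy_image is-services-image      a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
copy_image portal-job-manager     3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c
copy_image catalog-api-aux_master d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7
copy_image catalog_master         59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce
copy_image legacy-migration       c48ae7c6b342ec3279883e77238015d9b96263c91c5c0aafbdb148c9e78a2bf6
```

Because the copies are pinned by digest, re-running the wrapper is safe: skopeo copies the same immutable image each time.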

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2","tag_metadata":"b71-migration-b54"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551","tag_metadata":"b71-migration-b54"},"iis_services_image":{"name":"is-services-image@sha256","tag":"a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59","tag_metadata":"b71-migration-b54"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
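To check readiness without repeatedly scanning the full pod list, a small filter over oc get pods can help. This is a sketch under stated assumptions: check_pods is a name of my own choosing, and it assumes an active oc login with PROJECT_CPD_INST_OPERANDS set.

```shell
# Hypothetical readiness filter for the three patched IIS workloads.
IIS_PATTERN='^(iis-services|is-en-conductor|is-engine-compute)'

check_pods() {
  # assumes an active oc login; prints only the patched workloads' pods
  oc get pods -n "${PROJECT_CPD_INST_OPERANDS}" --no-headers | grep -E "$IIS_PATTERN"
}
# usage: check_pods   (repeat until every listed pod reports Running)
```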
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.8.5 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.5
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.5, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c","catalog_api_aux_image":"sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7","catalog_api_image":"sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce"}}}'
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
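The get-and-delete sequence in step 5 can also be combined into one loop. This is a hypothetical sketch: filter_ngp and restart_ngp_pods are names of my own choosing, and the restart assumes an active oc login (the deployment re-creates each deleted pod).

```shell
# Hypothetical one-shot restart of all ngp-projects-api pods.
filter_ngp() {
  # keeps only ngp-projects-api pod names from a list of pod names
  grep '^ngp-projects-api'
}

restart_ngp_pods() {
  # assumes an active oc login with PROJECT_CPD_INST_OPERANDS set
  oc get pods -n "${PROJECT_CPD_INST_OPERANDS}" \
      -o custom-columns=POD:.metadata.name --no-headers \
    | filter_ngp \
    | xargs -r -n1 oc delete pod -n "${PROJECT_CPD_INST_OPERANDS}"
}
# usage: restart_ngp_pods
```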
     
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you will need to tune your cluster based on the scaleconfig size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the PJM resource limit through an RSI patch.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the directory if it does not exist yet, then copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. This call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
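Because create-rsi-patch consumes specpatch.json as raw JSON, a malformed file is an easy way for step 1 to fail. As an optional extra check of my own (not part of the official procedure; it assumes python3 is on the PATH and validate_patch is a hypothetical name), you can validate the file before running the patch:

```shell
# Hypothetical JSON sanity check for the RSI spec patch file.
validate_patch() {
  # prints "valid JSON" only if the file parses cleanly
  python3 -m json.tool "$1" >/dev/null && echo "valid JSON"
}
# usage:
# validate_patch cpd-cli-workspace/olm-utils-workspace/work/rsi/specpatch.json
```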
 
For medium-sized clusters
  1. Increase the PJM resource limit through an RSI patch.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the directory if it does not exist yet, then copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. This call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  5. Scale down the PJM replicas:
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment to a single replica:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
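To confirm that both changes took effect, you can query the two resources directly. This is a sketch of my own: verify_scaledown is a hypothetical name, and it assumes an active oc login with WKC_NAMESPACE set.

```shell
# Hypothetical check that portal-job-manager now runs a single replica
# while CCS stays in maintenance mode; assumes an active oc login.
verify_scaledown() {
  oc get deployment portal-job-manager -n "${WKC_NAMESPACE}" \
      -o jsonpath='{.spec.replicas}{"\n"}'              # expect: 1
  oc get ccs ccs-cr -n "${WKC_NAMESPACE}" \
      -o jsonpath='{.spec.ignoreForMaintenance}{"\n"}'  # expect: true
}
# usage: verify_scaledown
```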
Next steps
 
Once you have completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, perform the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Check first whether any of the following parameters were already customized on your cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Check first whether any of the following parameters were already customized on your cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
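After removing the overrides, you can confirm that the fields are actually gone from the custom resources. A minimal sketch using jsonpath queries (the helper name is illustrative, not part of any IBM tooling); empty output means the override was removed:

```shell
# Print a spec field from a CR; empty output means the override has been removed.
check_removed() {
  local kind="$1" name="$2" field="$3"
  oc get -n "${WKC_NAMESPACE}" "$kind" "$name" -o jsonpath="{.spec.${field}}"
}
# Usage (each should print nothing after the revert):
#   check_removed wkc wkc-cr wkc_data_rules_resources
#   check_removed ccs ccs-cr couchdb_search_resources
```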
Reverting changes for medium-scale clusters
After the migration, complete the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. The change takes effect after CCS is taken out of maintenance mode in step 5.
    Note: Check first whether any of the following parameters were already customized on your cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Check first whether any of the following parameters were already customized on your cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Scale the PJM replicas back up by scaling up the portal-job-manager deployment:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  5. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed as follows. Note: The migration toolkit does not need to be reverted.
 
To revert the IIS image overrides, proceed with the following steps. ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      tag_metadata: b71-migration-b54
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
      tag_metadata: b71-migration-b54
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      tag_metadata: b71-migration-b54
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/portal_job_manager_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original pod images.
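Rather than re-running the `get` command by hand, a small polling loop can wait until reconciliation settles. A minimal sketch; the status field name (`ccsStatus`) and the "Completed" value are assumptions to verify against your cluster:

```shell
# Poll the CCS custom resource until reconciliation reports Completed.
# The status field name (ccsStatus) is an assumption; verify it on your cluster.
wait_for_ccs() {
  local ns="$1" timeout="${2:-1800}" waited=0 status
  while [ "$waited" -lt "$timeout" ]; do
    status="$(oc get ccs ccs-cr -n "$ns" -o jsonpath='{.status.ccsStatus}')"
    if [ "$status" = "Completed" ]; then
      echo "CCS reconciliation completed"
      return 0
    fi
    sleep 30
    waited=$((waited + 30))
  done
  echo "Timed out waiting for CCS reconciliation" >&2
  return 1
}
# Usage: wait_for_ccs "${PROJECT_CPD_INST_OPERANDS}"
```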
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, complete the following two steps to re-sync processes and ensure that imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete, depending on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) that you imported migration assets into, or press Enter to select all catalogs for re-sync.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is done when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
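For the two jobs created in step 1, `oc wait` can block until they finish instead of polling pod status by hand. A minimal sketch (job names as created by the script above; the helper name and the 4-hour timeout are arbitrary choices, not part of the script):

```shell
# Wait for both re-sync jobs to reach the Complete condition.
wait_for_resync_jobs() {
  local ns="$1" job
  for job in wkc-search-reindexing-job wkc-search-lineage-job; do
    oc wait -n "$ns" --for=condition=complete "job/${job}" --timeout=4h || return 1
  done
  echo "Re-sync jobs completed"
}
# Usage: wait_for_resync_jobs "$WKC_NAMESPACE"
```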
 
 
Cloud Pak for Data 4.6.x (upgrades to Cloud Pak for Data 4.8.5)
Patch nameLegacy migration toolkit and IIS patches
Released onApril 2024
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.5
Install instructions
 Download the patch here.

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> is the pod name for any of the images that will be patched with this hotfix.
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Using the appropriate auth.json file, use skopeo to copy the patch images from the IBM production registry to the local private registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:67e035d353eb2cb1ed37dd2634fe8ea06f74ba7a5675c2f17dd9a6a690096edd \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:67e035d353eb2cb1ed37dd2634fe8ea06f74ba7a5675c2f17dd9a6a690096edd
To complete the installation, follow the steps in the next section.
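The seven `skopeo copy` invocations in step 7 differ only in the image reference, so they can be driven from a single helper. A sketch; the registry paths and auth.json location are placeholders to substitute with your values, and the helper name is illustrative:

```shell
# Copy one patch image from the IBM production registry to the local mirror.
SRC_REGISTRY="cp.icr.io/cp/cpd"
DEST_REGISTRY="<local private registry>/cp/cpd"   # substitute your mirror path
AUTHFILE="<folder path>/auth.json"                # substitute your auth.json path

copy_patch_image() {
  skopeo copy --all --authfile "$AUTHFILE" \
    --dest-tls-verify=false --src-tls-verify=false \
    "docker://${SRC_REGISTRY}/$1" \
    "docker://${DEST_REGISTRY}/$1"
}
# Usage, once per image@digest listed in step 7, for example:
#   copy_patch_image "is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2"
```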

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2","tag_metadata":"b71-migration-b54"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551","tag_metadata":"b71-migration-b54"},"iis_services_image":{"name":"is-services-image@sha256","tag":"a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59","tag_metadata":"b71-migration-b54"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.8.5 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.5
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.5, proceed with the following commands.
  1. If doing an air-gapped upgrade install, ensure that the legacy-migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c","catalog_api_aux_image":"sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7","catalog_api_image":"sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce"}}}'
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200,
    run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
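The two commands above (list the pods, then delete each one) can be combined into a single pass. A minimal sketch; the helper name is illustrative, not part of any IBM tooling:

```shell
# Delete every ngp-projects-api pod; the deployment recreates them automatically.
restart_ngp_pods() {
  local ns="$1" pod
  oc get pods -n "$ns" -o custom-columns=POD:.metadata.name --no-headers \
    | grep ngp-projects-api \
    | while read -r pod; do
        oc delete pod -n "$ns" "$pod"
      done
}
# Usage: restart_ngp_pods "${PROJECT_CPD_INST_OPERANDS}"
```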
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scale configuration size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the PJM resource limit through an RSI patch.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. This call may fail if an olm-utils-play-v2 container already exists from before the call and does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, you can stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double check the portal-job-manager pod (not deployment), and make sure new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
 
For medium-sized clusters
  1. Increase the PJM resource limit through an RSI patch, as in the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set with the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [
        {"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
        {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
        {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}
      ]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is a spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already existed before the call. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.
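Before running the create-rsi-patch step with the spec file, you can optionally sanity-check that specpatch.json parses as valid JSON. This is a convenience sketch, not part of the official procedure; it assumes python3 is available on the workstation:

```shell
# Sketch: verify that a JSON patch file parses before handing it to cpd-cli.
# check_json prints "valid JSON" on success and fails otherwise.
check_json() {
  python3 -m json.tool "${1:?path to JSON file}" > /dev/null && echo "valid JSON"
}

# Example (default workspace path from the step above):
# check_json cpd-cli-workspace/olm-utils-workspace/work/rsi/specpatch.json
```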

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
                                                                                                            
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                            
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limitations are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  5. Scale the PJM replicas down:
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale the replicas down by first scaling down the portal-job-manager using:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
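The verification in the steps above (the new PJM limits in substep 4, and the replica count after the scale-down in step 5) can be scripted. A minimal sketch, assuming the portal-job-manager pod carries the app=portal-job-manager label used by the RSI patch and that oc is logged in to the cluster:

```shell
# Sketch: helper checks for the PJM tuning steps above (assumes cluster access).

# pjm_limits prints the resource limits of the first pod matching the
# app=portal-job-manager label (the label targeted by the RSI patch).
pjm_limits() {
  oc get pod -n "${1:?namespace}" -l app=portal-job-manager \
    -o jsonpath='{.items[0].spec.containers[0].resources.limits}'
}

# pjm_replicas prints the desired replica count of the portal-job-manager
# deployment; expect 1 after the scale-down in step 5.
pjm_replicas() {
  oc get deployment portal-job-manager -n "${1:?namespace}" \
    -o jsonpath='{.spec.replicas}'
}

# Example:
# pjm_limits "${WKC_NAMESPACE}"
# pjm_replicas "${WKC_NAMESPACE}"
```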
 
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration.  
Or continue with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for migration

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, perform the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, perform the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. The change takes effect after CCS is taken out of maintenance mode in step 5.
    Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Scale the PJM replicas back up by first scaling up the portal-job-manager by running:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  5. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
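To confirm the revert took effect, you can read back the maintenance flag from the CCS CR. A small sketch (assumes oc is logged in; the value of spec.ignoreForMaintenance should be false after step 5):

```shell
# Sketch: print the current CCS maintenance-mode flag
# (expect "false" after the revert above).
ccs_maintenance() {
  oc get ccs ccs-cr -n "${1:?namespace}" -o jsonpath='{.spec.ignoreForMaintenance}'
}

# Example:
# ccs_maintenance "${WKC_NAMESPACE}"
```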
     
 
Reverting the IIS image changes  

If needed, follow these steps to remove the IIS image overrides. Note: The migration toolkit does not need to be reverted.
 
Note that ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      tag_metadata: b71-migration-b54
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
      tag_metadata: b71-migration-b54
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      tag_metadata: b71-migration-b54
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/portal_job_manager_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original pod images.
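Rather than re-running the get command by hand, you can poll until the CCS CR reports a completed reconcile. A sketch under a stated assumption: the .status.ccsStatus field name is an assumption, so verify it with oc get ccs ccs-cr -o yaml on your cluster before relying on it.

```shell
# Sketch: poll until the CCS CR reports a completed reconcile.
# ASSUMPTION: the status field is .status.ccsStatus with value "Completed";
# confirm with `oc get ccs ccs-cr -o yaml` on your cluster.
wait_for_ccs() {
  ns="${1:?namespace}"
  until [ "$(oc get ccs ccs-cr -n "$ns" -o jsonpath='{.status.ccsStatus}')" = "Completed" ]; do
    sleep 60
  done
  echo "CCS reconciliation complete"
}

# Example:
# wait_for_ccs "${PROJECT_CPD_INST_OPERANDS}"
```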
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.5 cluster, you need to complete the following two steps to ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. You can specify the catalog(s) that the migrated assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
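Instead of polling pod status by hand, you can wait on the two jobs directly. A sketch assuming the job names created by the script in step 1; adjust the timeout for large catalogs:

```shell
# Sketch: block until both re-sync jobs created by the script complete.
# Job names are taken from the step above (an assumption if the script changes).
wait_for_resync() {
  ns="${1:?namespace}"
  for job in wkc-search-reindexing-job wkc-search-lineage-job; do
    oc wait --for=condition=complete "job/${job}" -n "$ns" --timeout=6h || return 1
  done
  echo "re-sync complete"
}

# Example:
# wait_for_resync "$WKC_NAMESPACE"
```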
 
 

Cloud Pak for Data 4.6.x (upgrades to Cloud Pak for Data 4.8.4)

Patch name: Legacy migration toolkit and IIS patches
Released on: April 2024
Service assembly: wkc
Applies to service version: Watson Knowledge Catalog 4.6.x
Applies to platform version: Cloud Pak for Data 4.6.x
Description: This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.4.
Install instructions
Download the patch here.

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
        "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command to copy the patch images from the IBM production registry to the local private registry. Using the appropriate auth.json file, copy the patch images from the IBM production registry to the Openshift cluster registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:73cd4ea8978517734417213f8ce8e28ff57c6109ba3f48225392f05a5a21c70b \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:73cd4ea8978517734417213f8ce8e28ff57c6109ba3f48225392f05a5a21c70b
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:91d4456b986f8fded18a4fa91c7f1b2727fed9b37028e862e577e11c3f163f05 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:91d4456b986f8fded18a4fa91c7f1b2727fed9b37028e862e577e11c3f163f05
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:d59caf7a2f67ff0c21d966e1503c7d1d3b2fb93aedde091dc4c7c30b67395d14 \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:d59caf7a2f67ff0c21d966e1503c7d1d3b2fb93aedde091dc4c7c30b67395d14
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:a37a79c72697123d84ae5cdc0458d0a23578fa0ec6f7f95d081d61597c6baa4c \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:a37a79c72697123d84ae5cdc0458d0a23578fa0ec6f7f95d081d61597c6baa4c
To complete the installation, follow the steps in the next section.
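After the copies finish, you can spot-check that each digest resolves in the local registry. A hedged sketch using skopeo inspect, with TLS verification disabled to match the copy commands above:

```shell
# Sketch: confirm an image digest is present in the local private registry.
verify_digest() {
  registry="${1:?local registry host}"
  ref="${2:?image@sha256:digest}"
  skopeo inspect --tls-verify=false "docker://${registry}/cp/cpd/${ref}" > /dev/null \
    && echo "OK ${ref}"
}

# Example:
# verify_digest "<local private registry>" "legacy-migration@sha256:a37a79c72697123d84ae5cdc0458d0a23578fa0ec6f7f95d081d61597c6baa4c"
```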

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2","tag_metadata":"b71-migration-b54"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551","tag_metadata":"b71-migration-b54"},"iis_services_image":{"name":"is-services-image@sha256","tag":"a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59","tag_metadata":"b71-migration-b54"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
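As a spot check on the rollout, the image currently set on each workload can be read back directly. The following helper is a sketch (the function name is mine; it assumes `oc` access, the ${PROJECT_CPD_INST_OPERANDS} variable used throughout, and that the patched container is the first one in each pod spec):

```shell
# Sketch: print the image reference currently set on each patched IIS workload,
# so the sha256 digests can be compared against the ones applied above.
check_iis_images() {
  local ns="$1"
  oc get deployment/iis-services -n "$ns" \
    -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
  oc get sts/is-en-conductor -n "$ns" \
    -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
  oc get sts/is-engine-compute -n "$ns" \
    -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
}
```

Running check_iis_images "${PROJECT_CPD_INST_OPERANDS}" prints one image reference per workload; each should end with the digest applied earlier.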
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.8.4 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.4
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.4, proceed with the following commands.
  1. If doing an air-gapped upgrade install, ensure that the legacy_migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:73cd4ea8978517734417213f8ce8e28ff57c6109ba3f48225392f05a5a21c70b","catalog_api_aux_image":"sha256:91d4456b986f8fded18a4fa91c7f1b2727fed9b37028e862e577e11c3f163f05","catalog_api_image":"sha256:d59caf7a2f67ff0c21d966e1503c7d1d3b2fb93aedde091dc4c7c30b67395d14"}}}'
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
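The two commands above can be combined into a single loop that restarts every matching pod in one pass. This is a sketch (the helper name is mine; it assumes the same namespace variable):

```shell
# Sketch: delete every ngp-projects-api pod; the deployment recreates each one.
restart_ngp_pods() {
  local ns="$1" pod
  for pod in $(oc get pods -n "$ns" \
        -o custom-columns=POD:.metadata.name --no-headers | grep ngp-projects-api); do
    oc delete pod -n "$ns" "$pod"
  done
}
```

Run it as restart_ngp_pods "${PROJECT_CPD_INST_OPERANDS}".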
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you will need to tune your cluster based on the scaleconfig size.
 
Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the olm-utils-workspace/work/rsi/ directory first if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists from an earlier run and therefore does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
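Instead of scanning the full YAML output, individual spec fields can be queried with jsonpath. This sketch (the helper name is mine) reads back three of the values set by the patches above:

```shell
# Sketch: read back a few fields set by the CCS and WKC patches above.
# For a small-sized cluster, this should print "false", "3Gi", and "2Gi".
verify_migration_patches() {
  local ns="$1"
  oc get ccs ccs-cr -n "$ns" \
    -o jsonpath='{.spec.catalog_api_properties_global_call_logs}{"\n"}'
  oc get ccs ccs-cr -n "$ns" \
    -o jsonpath='{.spec.couchdb_search_resources.limits.memory}{"\n"}'
  oc get wkc wkc-cr -n "$ns" \
    -o jsonpath='{.spec.wkc_data_rules_resources.limits.ephemeral-storage}{"\n"}'
}
```

Run it as verify_migration_patches "${WKC_NAMESPACE}".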
 
For medium-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the olm-utils-workspace/work/rsi/ directory first if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists from an earlier run and therefore does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
                                                                                                                    
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                    
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  5. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment to a single replica:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
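To confirm that both changes in step 5 took effect, the maintenance flag and the replica count can be read back. This is a sketch (the helper name is mine):

```shell
# Sketch: expect "true" (CCS in maintenance mode) and "1" (single PJM replica).
check_pjm_scaledown() {
  local ns="$1"
  oc get ccs ccs-cr -n "$ns" -o jsonpath='{.spec.ignoreForMaintenance}{"\n"}'
  oc get deployment portal-job-manager -n "$ns" -o jsonpath='{.spec.replicas}{"\n"}'
}
```

Run it as check_pjm_scaledown "${WKC_NAMESPACE}".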
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration.  
Or continue with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration, or if the migration fails, you will need to revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset those parameters to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset those parameters to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect only after CCS is taken out of maintenance mode in step 5.  
    Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset those parameters to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset those parameters to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Scale the portal-job-manager deployment back up to three replicas:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  5. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

Follow these steps to revert the IIS image patches if needed. Note: The migration toolkit does not need to be reverted.
 
In the following commands, ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      tag_metadata: b71-migration-b54
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
      tag_metadata: b71-migration-b54
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      tag_metadata: b71-migration-b54
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/portal_job_manager_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original pod images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.4 cluster, run the following two steps to re-sync processes and ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take a while to complete, depending on how many assets need to be re-synced.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) that the migration assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
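If you prefer to poll the jobs rather than watch pod status, the completion check can be wrapped in a helper. This is a sketch; the jsonpath query is standard oc usage, but the exact job names in your cluster may carry suffixes:

```shell
# job_done: interprets the .status.succeeded count of a Kubernetes Job, as
# returned by `oc get job <name> -o jsonpath='{.status.succeeded}'`.
# An empty value (field not set yet) or 0 means the job has not completed.
job_done() {
  [ -n "$1" ] && [ "$1" -ge 1 ] 2>/dev/null
}

# Example polling loop (hypothetical):
# until job_done "$(oc get job wkc-search-reindexing-job -n "$WKC_NAMESPACE" -o jsonpath='{.status.succeeded}')" &&
#       job_done "$(oc get job wkc-search-lineage-job -n "$WKC_NAMESPACE" -o jsonpath='{.status.succeeded}')"; do
#   sleep 60
# done
```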
 
 

Cloud Pak for Data 4.6.x (upgrades to Cloud Pak for Data 4.8.3)

Patch name: Legacy migration toolkit and IIS patches
Released on: February 2024
Service assembly: wkc
Applies to service version: Watson Knowledge Catalog 4.6.x
Applies to platform version: Cloud Pak for Data 4.6.x
Description: This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.3.
Install instructions
Download the patch here.

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email":"unused","auth":"<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email":"unused","auth":"<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> is the pod name for any of the pods whose images will be patched with this hotfix.
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
        - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command to copy the patch images from the IBM production registry to the local private registry. Using the appropriate auth.json file, copy the patch images from the IBM production registry to the Openshift cluster registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:564b7bba26d3272924b698207c4101856957251fe84cf94845c41d8211ee1612 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:564b7bba26d3272924b698207c4101856957251fe84cf94845c41d8211ee1612
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:007ef84d3f24c3eec1a01d34a41ad8193db89062d8d3785a4b2074183c983124 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:007ef84d3f24c3eec1a01d34a41ad8193db89062d8d3785a4b2074183c983124
To complete the installation, follow the steps in the next section.
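Because the five copies differ only in the image reference, they can also be generated from a list. This is a sketch only: AUTHFILE and the registry values are placeholders you must set, and the commands are printed rather than executed (pipe to `sh` once you have reviewed them):

```shell
# Registry locations mirroring the commands above; replace the destination
# placeholder with your local private registry host and path.
SRC_REG="cp.icr.io/cp/cpd"
DEST_REG="<local private registry>/cp/cpd"

# mirror_cmd: prints one `skopeo copy` command for a given image@digest reference.
mirror_cmd() {
  printf 'skopeo copy --all --authfile "%s" --dest-tls-verify=false --src-tls-verify=false docker://%s/%s docker://%s/%s\n' \
    "${AUTHFILE:-auth.json}" "$SRC_REG" "$1" "$DEST_REG" "$1"
}

for img in \
  is-engine-image@sha256:0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b \
  is-en-compute-image@sha256:b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43 \
  is-services-image@sha256:9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3 \
  catalog-api-aux_master@sha256:564b7bba26d3272924b698207c4101856957251fe84cf94845c41d8211ee1612 \
  legacy-migration@sha256:007ef84d3f24c3eec1a01d34a41ad8193db89062d8d3785a4b2074183c983124
do
  mirror_cmd "$img"   # review output, then pipe to `sh` to run the copies
done
```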

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps apply the patches using the online IBM entitled registry, or after downloading the images for an air-gapped environment.

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b","tag_metadata":"b68-migration-b52"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43","tag_metadata":"b68-migration-b52"},"iis_services_image":{"name":"is-services-image@sha256","tag":"9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3","tag_metadata":"b68-migration-b52"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
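To confirm that a pod actually picked up a patched image, you can compare its running digest against the one you applied. The helper below is a hypothetical sketch; the pod name and container index in the commented example are assumptions:

```shell
# image_digest: extracts the sha256 digest from an image reference such as
# cp.icr.io/cp/cpd/is-engine-image@sha256:<digest>.
image_digest() {
  printf '%s\n' "${1##*@sha256:}"
}

# Example (hypothetical pod name; compare with the digest from the patch):
# running=$(oc get pod is-en-conductor-0 -n "${PROJECT_CPD_INST_OPERANDS}" \
#           -o jsonpath='{.spec.containers[0].image}')
# [ "$(image_digest "$running")" = "0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b" ]
```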
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Only proceed to the section Upgrade the cluster to 4.8.3 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.3
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.3, proceed with the following commands.
  1. If doing an air-gapped upgrade install, ensure that the legacy-migration image is downloaded to the local registry. See the section Downloading the legacy migration and IIS patch images in an air-gapped environment.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"catalog_api_aux_image":"sha256:564b7bba26d3272924b698207c4101856957251fe84cf94845c41d8211ee1612"}}}'
  3. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  4. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  5. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
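The lookup-and-delete steps above can be combined into one pass. The filter is factored into a hypothetical helper so it can be checked separately; the delete loop is left commented:

```shell
# match_pods: reads pod names on stdin and keeps only ngp-projects-api pods.
match_pods() {
  grep '^ngp-projects-api-' || true
}

# Example (assumes you are logged in to the cluster):
# for pod in $(oc get pods -n "${PROJECT_CPD_INST_OPERANDS}" \
#                -o custom-columns=POD:.metadata.name --no-headers | match_pods); do
#   oc delete pod "$pod" -n "${PROJECT_CPD_INST_OPERANDS}"
# done
```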
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scaleConfig size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the portal-job-manager (PJM) resource limit through an RSI patch.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set with the environment variable $CPD_CLI_MANAGE_WORKSPACE. Create the directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) and make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
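The limit checks above can be scripted instead of eyeballed. The helper below is a hypothetical sketch that pulls one value out of `oc describe pod` output; it returns the first match, which falls in the Limits section because Limits precedes Requests in describe output:

```shell
# limit_of: reads `oc describe pod` output on stdin and prints the first value
# for the given resource key (e.g. "ephemeral-storage" -> "8Gi").
limit_of() {
  awk -v k="$1:" '$1 == k {print $2; exit}'
}

# Example (hypothetical pod name; expect 8Gi on a small-sized cluster):
# oc describe pod portal-job-manager-xxxxxxxxx-xxxxx -n "${WKC_NAMESPACE}" \
#   | limit_of ephemeral-storage
```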
 
For medium-sized clusters
  1. Increase the portal-job-manager (PJM) resource limit through an RSI patch.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set with the environment variable $CPD_CLI_MANAGE_WORKSPACE. Create the directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) and make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  5. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration.  
Or continue with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
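A JSON-type remove patch fails if the target path does not exist, so it helps to know which of these fields were set before cluster tuning. The following sketch checks the field names against the ccs.bak.yaml backup saved during the cluster tuning steps; it is illustrated here against a hypothetical sample backup file (ccs.bak.sample.yaml), so substitute your real backup file in practice:

```shell
# Hypothetical stand-in for the ccs.bak.yaml saved before cluster tuning;
# in practice, run the checks against your real backup file.
cat > ccs.bak.sample.yaml <<'EOF'
spec:
  couchdb_search_resources:
    limits:
      cpu: "1"
      memory: 2Gi
EOF

# Report which of the fields targeted by the remove patch were set before
# tuning; those should be restored to their saved values, not removed.
for f in catalog_api_jvm_args_extras \
         catalog_api_properties_enable_activity_tracker_publishing \
         catalog_api_properties_enable_global_search_publishing \
         catalog_api_properties_enable_global_search_rabbitmq_publishing \
         catalog_api_properties_global_call_logs \
         couchdb_search_resources; do
  if grep -q "^  $f:" ccs.bak.sample.yaml; then
    echo "$f: set before tuning -- restore the saved value"
  else
    echo "$f: not set before tuning -- safe to remove"
  fi
done
```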
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. The change takes effect after CCS is taken out of maintenance mode in step 5.  
    Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Scale the portal-job-manager deployment back up:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  5. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, follow these steps to revert the IIS image overrides. Note: The migration toolkit does not need to be reverted.
 
In the steps below, ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b
      tag_metadata: b68-migration-b52
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43
      tag_metadata: b68-migration-b52
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3
      tag_metadata: b68-migration-b52
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original pod images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.3 cluster, complete the following two steps to re-sync processes and ensure that imported assets are synchronized within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take a long time to complete, depending on how many assets were migrated.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) into which you imported migration assets, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress through the related pod status or logs:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
 

Cloud Pak for Data 4.6.x (upgrades to Cloud Pak for Data 4.8.2)

Patch name: Legacy migration toolkit and IIS patches
Released on: January 2024
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.2
Install instructions
Download the patch here.

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.
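The auth value is the base64 encoding of the id:apikey pair. For the IBM entitled registry, the id is cp. A quick way to generate the value (MY_API_KEY is a placeholder for your entitlement key):

```shell
# Encode "cp:<apikey>" for the cp.icr.io entry in auth.json.
# -w0 disables GNU base64 line wrapping so long keys stay on one line.
APIKEY="MY_API_KEY"
printf '%s' "cp:${APIKEY}" | base64 -w0
# → Y3A6TVlfQVBJX0tFWQ==
```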

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use skopeo with the appropriate auth.json file to copy the patch images from the IBM production registry to the local private (OpenShift cluster) registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:e82c5823f552ab01ab4f9cce4b505235379bf94bb3a8349252cd6e98c874e05f \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:e82c5823f552ab01ab4f9cce4b505235379bf94bb3a8349252cd6e98c874e05f
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:4ca678cb931e360c0964c61e6052a497740cd1f61d846eb4f38f8ab3366181d8 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:4ca678cb931e360c0964c61e6052a497740cd1f61d846eb4f38f8ab3366181d8
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:82a9e206703ccecbbb0ed40ebcd85c880c2f9e763b23c672802b21e40cd8d06b \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:82a9e206703ccecbbb0ed40ebcd85c880c2f9e763b23c672802b21e40cd8d06b
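Rather than running the six copies by hand, the commands can be generated in a loop. A sketch, using the same image digests as above; AUTHFILE and DEST_REGISTRY are placeholders to fill in, and the generated commands can be reviewed and then piped to sh to execute:

```shell
# Placeholders: point these at your auth.json and local registry host.
AUTHFILE="<folder path>/auth.json"
DEST_REGISTRY="<local private registry>"

# The six patch image digests from the steps above.
IMAGES="is-engine-image@sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e
is-en-compute-image@sha256:3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b
is-services-image@sha256:714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60
catalog_master@sha256:e82c5823f552ab01ab4f9cce4b505235379bf94bb3a8349252cd6e98c874e05f
catalog-api-aux_master@sha256:4ca678cb931e360c0964c61e6052a497740cd1f61d846eb4f38f8ab3366181d8
legacy-migration@sha256:82a9e206703ccecbbb0ed40ebcd85c880c2f9e763b23c672802b21e40cd8d06b"

# Generate one skopeo copy command per image; review, then pipe to sh.
CMDS=$(for img in $IMAGES; do
  echo "skopeo copy --all --authfile \"$AUTHFILE\" --dest-tls-verify=false --src-tls-verify=false docker://cp.icr.io/cp/cpd/$img docker://$DEST_REGISTRY/cp/cpd/$img"
done)
echo "$CMDS"
```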
To complete the installation, follow the steps in the next section.

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e","tag_metadata":"b61-migration-b48"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b","tag_metadata":"b61-migration-b48"},"iis_services_image":{"name":"is-services-image@sha256","tag":"714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60","tag_metadata":"b61-migration-b48"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.8.2 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.2
 
Note: For the IIS custom resource you should not revert the IIS images prior to performing the upgrade.
 
After upgrading to 4.8.2, proceed with the following commands.
  1. If doing an air-gapped upgrade install, ensure that the legacy_migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"catalog_api_image":"sha256:e82c5823f552ab01ab4f9cce4b505235379bf94bb3a8349252cd6e98c874e05f","catalog_api_aux_image":"sha256:4ca678cb931e360c0964c61e6052a497740cd1f61d846eb4f38f8ab3366181d8"}}}'
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:82a9e206703ccecbbb0ed40ebcd85c880c2f9e763b23c672802b21e40cd8d06b"}}}'
  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the wkc operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration pod in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
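The two commands above can be combined into a single loop. The following sketch runs against a hypothetical captured pod list so the filtering logic is visible; substitute live oc get pods output to generate the actual delete commands:

```shell
# Hypothetical pod list standing in for live `oc get pods` output.
PODS="ngp-projects-api-5c9f7-abcde
ngp-projects-api-5c9f7-fghij
catalog-api-7d4b8-klmno"

# Emit one delete command per ngp-projects-api pod; review the output,
# then pipe it to `sh` (with PROJECT_CPD_INST_OPERANDS exported) to run.
printf '%s\n' "$PODS" | grep '^ngp-projects-api' | while read -r pod; do
  echo "oc delete pod $pod -n \${PROJECT_CPD_INST_OPERANDS}"
done
```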
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scaleConfig size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it does, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.
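Before enabling the RSI patch in step 1.2, you can sanity-check that specpatch.json is valid JSON containing the three expected replace operations. A minimal check, assuming python3 is available; it recreates the file content from step 1.1 in the current directory for illustration (in practice, point the check at the file in your rsi/ workspace directory):

```shell
# Recreate specpatch.json with the content from step 1.1.
cat > specpatch.json <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF

# Fail loudly if the file is not valid JSON or is missing an operation.
python3 - <<'EOF'
import json

with open("specpatch.json") as f:
    patch = json.load(f)

assert len(patch) == 3, "expected 3 operations, got %d" % len(patch)
assert all(op["op"] == "replace" for op in patch), "all operations must be replace"
print("specpatch.json OK:", [op["path"].rsplit("/", 1)[-1] for op in patch])
EOF
```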

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) and make sure that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
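The inline JSON bodies passed to oc patch in the steps above are easy to mistype, and a malformed patch fails only at apply time. As a minimal local sanity check (a sketch, assuming python3 is available on your workstation), you can validate the JSON before running oc patch:

```shell
# Validate a merge-patch body locally before handing it to `oc patch`.
# This is the WKC CR patch body from the steps above; substitute any patch body.
PATCH='{"spec": {"wkc_data_rules_resources": {"limits": {"ephemeral-storage": "2Gi"}}}}'
if printf '%s' "$PATCH" | python3 -m json.tool > /dev/null; then
  echo "patch JSON is valid"
else
  echo "patch JSON is invalid" >&2
  exit 1
fi
```

If the JSON is malformed, json.tool prints a parse error to stderr and the script exits non-zero, so nothing is applied to the cluster.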
 
For medium-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it in the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or in the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Then copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. This call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) and make sure that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  5. Scale down the PJM replicas:
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment by running:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
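The two commands in step 5 can be combined into a small helper. The sketch below only builds and prints the commands (a dry run), since executing them assumes a live cluster and a ${WKC_NAMESPACE} value:

```shell
# Dry-run sketch of step 5: print the maintenance-mode patch and the
# scale-down command instead of executing them against a live cluster.
WKC_NAMESPACE="${WKC_NAMESPACE:-wkc}"   # assumed; set to your WKC project

MAINT_CMD="oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{\"spec\": {\"ignoreForMaintenance\": true}}'"
SCALE_CMD="oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1"

echo "$MAINT_CMD"
echo "$SCALE_CMD"
# To execute for real on the cluster: eval "$MAINT_CMD" && eval "$SCALE_CMD"
```

Keeping the maintenance-mode patch first matters: it stops the CCS operator from reconciling the replica count back up immediately after the scale-down.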
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you need to revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need any specific settings afterward, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need any specific settings afterward, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. The effect of this command takes place after CCS is taken out of maintenance mode in step 5.
    Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need any specific settings afterward, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before migration. If you need any specific settings afterward, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Scale the portal-job-manager deployment back up by running:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  5. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed by following the steps below. Note: The migration toolkit does not need to be reverted.
 
In the following commands, ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e
      tag_metadata: b61-migration-b48
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b
      tag_metadata: b61-migration-b48
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60
      tag_metadata: b61-migration-b48
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{"op":"remove","path":"/spec/image_digests/catalog_api_image"},{"op":"remove","path":"/spec/image_digests/catalog_api_aux_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original pod images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.2 cluster, you need to run through the following two steps to re-sync the processes to ensure imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. You can specify the catalog(s) that the migrated assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the logs by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search

      or:

      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
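Rather than re-running the status commands by hand, you can poll until the job pods reach the Completed state. The helper below is a generic sketch: the status command is passed in as a string so the logic can be exercised anywhere; the oc query in the trailing comment shows how you would call it on the cluster.

```shell
# wait_until CMD EXPECTED TIMEOUT_SECS
# Re-runs CMD every 5 seconds until its output contains EXPECTED or the
# timeout expires. Returns 0 on success, 1 on timeout.
wait_until() {
  cmd=$1; expected=$2; timeout=${3:-600}
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    case "$(eval "$cmd" 2>/dev/null)" in
      *"$expected"*) return 0 ;;
    esac
    sleep 5
    elapsed=$((elapsed + 5))
  done
  return 1
}

# On a live cluster (illustrative, using $WKC_NAMESPACE as in the steps above):
#   wait_until "oc get pod -n $WKC_NAMESPACE | grep wkc-search-reindexing-job" Completed 3600
```

Because some re-sync jobs can run for a long time, pick a timeout that reflects the number of assets being re-synced.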
 
 
Cloud Pak for Data 4.6.x (upgrades to Cloud Pak for Data 4.8.1)
Patch name: Legacy migration toolkit and IIS patches
Released on: December 2023
Service assembly: wkc
Applies to service version: Watson Knowledge Catalog 4.6.x
Applies to platform version: Cloud Pak for Data 4.6.x
Description: This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.1
Install instructions
Download the patch here.  

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
        "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.
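The <base64 encoded id:apikey> value is the registry user ID and key joined with a colon and then base64-encoded. For cp.icr.io the ID is typically cp and the key is your IBM entitlement key. A minimal sketch, using a placeholder key:

```shell
# Build the auth value for auth.json from a registry id and key.
# ENTITLEMENT_KEY is a placeholder here; substitute your real key.
REGISTRY_USER="cp"
ENTITLEMENT_KEY="example-api-key"
AUTH=$(printf '%s' "${REGISTRY_USER}:${ENTITLEMENT_KEY}" | base64 | tr -d '\n')
echo "$AUTH"
```

Paste the printed value into the "auth" field for the matching registry entry; the same pattern applies to the private-registry id:password pair.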

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images that will be patched with this hotfix.
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command with the appropriate auth.json file to copy the patch images from the IBM production registry to the local private registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/is-engine-image@sha256:8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1 \
      docker://<local private registry>/cp/cpd/is-engine-image@sha256:8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0 \
      docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/is-services-image@sha256:00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc \
      docker://<local private registry>/cp/cpd/is-services-image@sha256:00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:7dc22420778eebc75ad9c43e48e55d21c600c47a204b6efbce9eb5fd0944c7ee \
      docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:7dc22420778eebc75ad9c43e48e55d21c600c47a204b6efbce9eb5fd0944c7ee
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/legacy-migration@sha256:836da02b8e7c68baa8fc373d4b0ddceb0808d68b0f48433013de091964ad3144 \
                                                                                                                                <local private registry>/cp/cpd/legacy-migration@sha256:836da02b8e7c68baa8fc373d4b0ddceb0808d68b0f48433013de091964ad3144
                                                                                                                            
To complete the installation, follow the steps in the next section.
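The five copies can also be scripted as a loop. The following sketch is a dry run: it only prints the skopeo commands so you can review them before executing. AUTHFILE and LOCAL_REGISTRY are placeholder values that you must set for your environment; pipe the output to a shell only when you are satisfied with it.

```shell
#!/bin/sh
# Dry-run generator for the skopeo copy commands. Set AUTHFILE and
# LOCAL_REGISTRY for your environment; the defaults below are placeholders.
AUTHFILE="${AUTHFILE:-$HOME/.airgap/auth.json}"
LOCAL_REGISTRY="${LOCAL_REGISTRY:-registry.example.com:5000}"   # hypothetical

# image@digest references from this patch, one per line
IMAGES="
is-engine-image@sha256:8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1
is-en-compute-image@sha256:b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0
is-services-image@sha256:00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc
catalog-api-aux_master@sha256:7dc22420778eebc75ad9c43e48e55d21c600c47a204b6efbce9eb5fd0944c7ee
legacy-migration@sha256:836da02b8e7c68baa8fc373d4b0ddceb0808d68b0f48433013de091964ad3144
"

# Print one skopeo copy command per image; review, then pipe to sh to run.
gen_skopeo_cmds() {
  for image in $IMAGES; do
    printf 'skopeo copy --all --authfile "%s" --dest-tls-verify=false --src-tls-verify=false docker://cp.icr.io/cp/cpd/%s docker://%s/cp/cpd/%s\n' \
      "$AUTHFILE" "$image" "$LOCAL_REGISTRY" "$image"
  done
}

gen_skopeo_cmds
```

When the printed commands look right, run `gen_skopeo_cmds | sh` to perform the actual copies.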

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1","tag_metadata":"b60-migration-b47"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0","tag_metadata":"b60-migration-b47"},"iis_services_image":{"name":"is-services-image@sha256","tag":"00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc","tag_metadata":"b60-migration-b47"}}}'
    • To apply IIS services-related patches on a 4.7.x or later cluster, run the following commands:

       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -n ${PROJECT_CPD_INST_OPERANDS} -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
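The "wait for reconciliation" steps above can be automated with a small polling helper. This is an illustrative sketch, not part of the toolkit; the jsonpath status field shown in the comment is an assumption, so check your CR's status section for the real field name. The demonstration at the end substitutes echo for the real oc command.

```shell
#!/bin/sh
# Sketch: poll a status command until it reports the expected value, with a
# timeout. wait_for_status is a hypothetical helper, not part of the toolkit.
# In practice you might call it as (the jsonpath is an assumption -- verify
# the actual status field on your cluster):
#   wait_for_status "oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} -o jsonpath={.status.iisStatus}" Completed
wait_for_status() {
  cmd="$1"; want="${2:-Completed}"; tries="${3:-60}"; interval="${4:-30}"
  i=0; out=""
  while [ "$i" -lt "$tries" ]; do
    out=$(eval "$cmd" 2>/dev/null)
    if [ "$out" = "$want" ]; then
      echo "status is $want"
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo "timed out waiting for $want (last: $out)" >&2
  return 1
}

# Demonstration with a stand-in command instead of oc:
wait_for_status "echo Completed" Completed 3 0
```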
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the install script, specifying the project where WKC is installed (${PROJECT_CPD_INST_OPERANDS}).
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Proceed to the section Upgrade the cluster to 4.8.1 only after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.1
 
Note: For the IIS custom resource you should not revert the IIS images prior to performing the upgrade.
 
After upgrading to 4.8.1, proceed with the following commands.
  1. If you are doing an air-gapped upgrade, ensure that the legacy-migration image has been copied to the local registry, as described in the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"catalog_api_aux_image":"sha256:7dc22420778eebc75ad9c43e48e55d21c600c47a204b6efbce9eb5fd0944c7ee"}}}'
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:836da02b8e7c68baa8fc373d4b0ddceb0808d68b0f48433013de091964ad3144"}}}'
  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects,
    run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the wkc operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration pod in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
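The pod restarts in step 7 can be done in one loop. The sketch below is a dry run by default: OC defaults to "echo oc", so the delete commands are only printed; set OC=oc to run them against the cluster. The pod names in the demonstration are made up.

```shell
#!/bin/sh
# Sketch: restart every ngp-projects-api pod in one loop. By default OC is
# "echo oc" (dry run); export OC=oc to execute the deletions for real.
OC="${OC:-echo oc}"
PROJECT_CPD_INST_OPERANDS="${PROJECT_CPD_INST_OPERANDS:-cpd-instance}"   # placeholder

# Takes a whitespace-separated list of pod names, e.g. the output of:
#   oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api
restart_ngp_pods() {
  for p in $1; do
    $OC delete pod "$p" -n "${PROJECT_CPD_INST_OPERANDS}"
  done
}

# Dry-run demonstration with made-up pod names:
restart_ngp_pods "ngp-projects-api-aaa ngp-projects-api-bbb"
```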
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, tune your cluster based on its scaleConfig size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the PJM (portal-job-manager) resource limit through an RSI patch.
    1. Create a file named specpatch.json under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or under $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory set by the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it does, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
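The specpatch.json file used above is a JSON Patch (RFC 6902) document. Before enabling the RSI patch, you can sanity-check the file locally. The sketch below only validates the JSON and lists each operation; it writes a local copy of specpatch.json in the current directory and assumes python3 is available on the workstation.

```shell
#!/bin/sh
# Sketch: validate specpatch.json and list its operations, without touching
# the cluster. Writes a local copy of the file in the current directory.
cat > specpatch.json <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF

# json.load fails loudly if the file is not valid JSON.
python3 - <<'EOF'
import json
ops = json.load(open("specpatch.json"))
for op in ops:
    print(op["op"], op["path"], "->", op["value"])
EOF
```

If the file is malformed, the python step exits with a traceback instead of the list of operations, which is cheaper to discover here than inside the olm-utils container.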
 
For medium-sized clusters
  1. Increase the PJM (portal-job-manager) resource limit through an RSI patch.
    1. Create a file named specpatch.json under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or under $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory set by the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the CCS-CR from the oc get ccs ccs-cr -o yaml results, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  5. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
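The oc patch --type merge commands used throughout this section merge the supplied fragment into the existing CR spec, leaving sibling keys untouched. The sketch below illustrates that behavior locally with a small recursive merge in python3 (an approximation for the simple maps used here, not a full RFC 7386 implementation); no cluster access is needed, and the sample spec keys are made up.

```shell
#!/bin/sh
# Sketch: how a merge patch combines with an existing spec. The "size" key
# survives while "ignoreForMaintenance" is overwritten, which is why these
# patches can be applied without exporting and editing the whole CR.
merged=$(python3 - <<'EOF'
import json

def merge(base, patch):
    # Recursively merge patch into base: dict values merge, scalars overwrite.
    for k, v in patch.items():
        if isinstance(v, dict) and isinstance(base.get(k), dict):
            merge(base[k], v)
        else:
            base[k] = v
    return base

spec = {"spec": {"size": "small", "ignoreForMaintenance": False}}   # made-up sample
patch = {"spec": {"ignoreForMaintenance": True}}
print(json.dumps(merge(spec, patch)))
EOF
)
echo "$merged"
```

This is also why the revert section removes keys with a JSON patch (`--type json` with "remove" ops) rather than a merge patch: merging cannot delete a key, only overwrite it.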
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if it fails, you will need to revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized for your cluster before migration. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized for your cluster before migration. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 5.
    Note: Double-check whether any of the following parameters were already set on the cluster before the migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already set on the cluster before the migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Scale the portal-job-manager (PJM) deployment back up by running:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  5. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

Follow these steps to revert the IIS image patches. If needed, the IIS image overrides can be removed as described below. Note: The migration toolkit does not need to be reverted.
 
In the following steps, ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1
      tag_metadata: b60-migration-b47
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0
      tag_metadata: b60-migration-b47
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc
      tag_metadata: b60-migration-b47
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove","path": "/spec/image_digests/catalog_api_aux_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api pod in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original catalog-api-aux_master image.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.1 cluster, you need to run through the following two steps to re-sync the processes to ensure imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalogs that the migration assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
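As an alternative to polling pod status for step 1, "oc wait" can block until a job's Complete condition is true. The helper below is a sketch of ours, not from the official doc: it only prints the oc wait commands (a dry run) so it can be reviewed offline; pipe its output to sh to run them against the cluster.

```shell
# Print one "oc wait" command per re-sync job; the job names are the ones
# created by the cpd_gs_graph_resync.sh script above.
wait_cmds_for_jobs() {
  local ns="$1"; shift
  local job
  for job in "$@"; do
    printf 'oc wait --for=condition=complete job/%s -n %s --timeout=24h\n' "$job" "$ns"
  done
}
wait_cmds_for_jobs "${WKC_NAMESPACE:-wkc}" wkc-search-reindexing-job wkc-search-lineage-job
```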
 
 
Cloud Pak for Data 4.6.x (upgrades to Cloud Pak for Data 4.8.0)
 
Patch name: Legacy migration toolkit and IIS patches
Released on: 4 December 2023
Service assembly: wkc
Applies to service version: Watson Knowledge Catalog 4.6.x
Applies to platform version: Cloud Pak for Data 4.6.x
Description: This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.6.x to 4.8.0.
Install instructions
Download the patch here.  

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.
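The "auth" value in auth.json is the base64 encoding of the registry id and secret joined with a colon; for the IBM entitled registry the id is cp and the secret is your entitlement API key. A quick way to generate and round-trip-check the value (MY_API_KEY is a placeholder):

```shell
# Build the base64 "auth" value for auth.json from "<id>:<apikey>".
# "cp" is the id for cp.icr.io; MY_API_KEY is a placeholder for your key.
AUTH_VALUE="$(printf '%s' 'cp:MY_API_KEY' | base64)"
echo "$AUTH_VALUE"
# Sanity check: decoding must give back the original id:apikey pair.
printf '%s' "$AUTH_VALUE" | base64 -d
echo
```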

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command to copy the patch images from the IBM production registry to the local private registry. Using the appropriate auth.json file, copy the patch images from the IBM production registry to the Openshift cluster registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:dde2aef130c15f9b594d99ae0d06961be459a78631f5f6f7e13902bd09418e81 \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:dde2aef130c15f9b594d99ae0d06961be459a78631f5f6f7e13902bd09418e81
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:0f6ad93e8bd667eb2812c8387e7b1091a012b0ad3b328bdffe33a07ef1f42ecc \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:0f6ad93e8bd667eb2812c8387e7b1091a012b0ad3b328bdffe33a07ef1f42ecc
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:766718cb148fa986fa2bea03e4649e5c8f5a32e9479484972e346fbfa8bdd4a9 \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:766718cb148fa986fa2bea03e4649e5c8f5a32e9479484972e346fbfa8bdd4a9
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:27731a4eb07c6e52eaf64da50cf1e7b773a1ecca1906d1096e0ffcea71790781 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:27731a4eb07c6e52eaf64da50cf1e7b773a1ecca1906d1096e0ffcea71790781
To complete the installation, follow the steps in the next section.
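Because the skopeo commands above differ only in the image reference, they can be generated from a list instead of typed by hand. The sketch below is ours, not from the official doc: it only prints the commands (a dry run) so the output can be reviewed before piping it to sh; PRIVATE_REGISTRY and AUTH_FILE are placeholders, and the digests are the ones listed above.

```shell
# Dry-run generator for the skopeo copy commands. Replace PRIVATE_REGISTRY
# and AUTH_FILE with your local registry and auth.json path before running
# the generated commands.
PRIVATE_REGISTRY="registry.example.com:5000"   # placeholder
AUTH_FILE="${HOME}/.airgap/auth.json"
IMAGES="is-engine-image@sha256:8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1
is-en-compute-image@sha256:b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0
is-services-image@sha256:00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc
catalog_master@sha256:dde2aef130c15f9b594d99ae0d06961be459a78631f5f6f7e13902bd09418e81
catalog-api-aux_master@sha256:0f6ad93e8bd667eb2812c8387e7b1091a012b0ad3b328bdffe33a07ef1f42ecc
portal-job-manager@sha256:766718cb148fa986fa2bea03e4649e5c8f5a32e9479484972e346fbfa8bdd4a9
legacy-migration@sha256:27731a4eb07c6e52eaf64da50cf1e7b773a1ecca1906d1096e0ffcea71790781"
for img in $IMAGES; do
  printf 'skopeo copy --all --authfile "%s" --dest-tls-verify=false --src-tls-verify=false docker://cp.icr.io/cp/cpd/%s docker://%s/cp/cpd/%s\n' \
    "$AUTH_FILE" "$img" "$PRIVATE_REGISTRY" "$img"
done
```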

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.  

To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1","tag_metadata":"b60-migration-b47"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0","tag_metadata":"b60-migration-b47"},"iis_services_image":{"name":"is-services-image@sha256","tag":"00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc","tag_metadata":"b60-migration-b47"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
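The "wait for reconciliation" step can be scripted as a simple poll instead of rerunning oc get by hand. This is a sketch of ours: the helper takes the status command as an argument so it can be tested offline, and the .status.iisStatus jsonpath in the usage comment is an assumption; verify the exact field name on your CR with oc get iis iis-cr -o yaml.

```shell
# Poll a status command until it prints "Completed", or give up after a timeout.
wait_for_completed() {
  local cmd="$1" timeout="${2:-1800}" interval=15 elapsed=0 status
  while [ "$elapsed" -le "$timeout" ]; do
    status="$(eval "$cmd" 2>/dev/null)"
    if [ "$status" = "Completed" ]; then
      echo "reconciliation Completed"
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out waiting for reconciliation" >&2
  return 1
}
# Usage against a live cluster (status field name is an assumption):
# wait_for_completed 'oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} -o jsonpath={.status.iisStatus}'
```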
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Only proceed to the section Upgrade the cluster to 4.8.0 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.8.0
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.8.0, proceed with the following commands.
  1. If doing an air-gapped upgrade install, ensure that the legacy_migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"catalog_api_image":"sha256:dde2aef130c15f9b594d99ae0d06961be459a78631f5f6f7e13902bd09418e81","catalog_api_aux_image":"sha256:0f6ad93e8bd667eb2812c8387e7b1091a012b0ad3b328bdffe33a07ef1f42ecc","portal_job_manager_image":"sha256:766718cb148fa986fa2bea03e4649e5c8f5a32e9479484972e346fbfa8bdd4a9"}}}'
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:27731a4eb07c6e52eaf64da50cf1e7b773a1ecca1906d1096e0ffcea71790781"}}}'
  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the wkc operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration pod in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
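The two-step restart above can be collapsed into one pass over the pod list. The helper below is a sketch of ours: it reads pod names on stdin (so it can be checked offline) and only prints the oc delete commands; pipe the generated commands to sh to actually restart the pods.

```shell
# Print one "oc delete pod" command per ngp-projects-api pod found on stdin.
restart_cmds() {
  local ns="$1" pod
  grep ngp-projects-api | while read -r pod; do
    printf 'oc delete pod %s -n %s\n' "$pod" "$ns"
  done
}
# Usage against a live cluster:
# oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name --no-headers \
#   | restart_cmds "${PROJECT_CPD_INST_OPERANDS}" | sh
```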
Configuration changes
Applying configuration changes for migration
 
Before continuing with the migration process, you will need to tune your cluster based on its scaleConfig size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the PJM resource limit through an RSI patch:
    1. Create a file named specpatch.json under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or under $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If so, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limitations are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section in the output of oc get ccs ccs-cr -o yaml:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
 
For medium-sized clusters
  1. Increase the PJM resource limits through an RSI patch, as in the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
                                                                                                                                                      
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}

    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json

    4. The above call may fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and recreate the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
                                                                                                                                    
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                                    
  4. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --patch_name=rsi-pjm-scaling

      The following is an example of the output:

      PATCH_NAME       PATCH_ID  PATCH_INJECTION_STATUS  NAMESPACE  POD_NAME                             READY  POD_STATUS
      rsi-pjm-scaling  206       True                    wkc        portal-job-manager-7f857765c9-qpqh9  1/1    Running


      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
                                                                                                                                                                              
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
  5. Scale the PJM replicas down:
    1. Put CCS into maintenance mode to prevent CCS reconciliation:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
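The specpatch.json from step 1 can be generated and sanity-checked from the shell before you run create-rsi-patch. The following is a minimal sketch: the workspace path is assumed from step 1 above, and python3 is used only as a convenient local JSON validator.

```shell
# Write the RSI spec patch from step 1 and verify that it parses as valid JSON.
# The default workspace path is an assumption; adjust it to your environment.
RSI_DIR="${CPD_CLI_MANAGE_WORKSPACE:-cpd-cli-workspace/olm-utils-workspace}/work/rsi"
mkdir -p "${RSI_DIR}"

cat > "${RSI_DIR}/specpatch.json" <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
EOF

# json.tool exits non-zero on malformed JSON, so this catches copy/paste errors early.
python3 -m json.tool "${RSI_DIR}/specpatch.json" >/dev/null && echo "specpatch.json is valid JSON"
```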
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.8 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for migration

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized on the cluster before the migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before the migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 5.
    Note: Double-check whether any of the following parameters were already customized on the cluster before the migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized on the cluster before the migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Scale the PJM replicas back up by scaling the portal-job-manager deployment:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  5. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed as described below. Note: The migration toolkit does not need to be reverted.
 
In the following steps, ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:8ac4af1de3a2901675f96085668e4436dab7841f92a26eac124dc418e726c3b1
      tag_metadata: b60-migration-b47
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:b8edee960339bb9ffae5da1bd8a26b3f5119772f85b470772a23523006ee4be0
      tag_metadata: b60-migration-b47
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:00a9cbac3b6bf58787cf31026590f2ae75265fea905b8f0240d973456449f3bc
      tag_metadata: b60-migration-b47
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
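Instead of re-running the get command by hand, the reconciliation status can be polled in a loop. This is a minimal sketch: the status field name (iisStatus) and the Completed value are assumptions based on the usual Cloud Pak for Data CR layout, so check `oc get iis iis-cr -o yaml` first and adjust the jsonpath if your release differs.

```shell
# Poll the IIS custom resource until the operator reports a completed reconcile.
# NS default and the .status.iisStatus field are assumptions; adjust as needed.
NS="${PROJECT_CPD_INST_OPERANDS:-wkc}"
while :; do
  if ! STATUS=$(oc get iis iis-cr -n "$NS" -o jsonpath='{.status.iisStatus}' 2>/dev/null); then
    # oc is missing or not logged in to a cluster; stop instead of looping forever.
    echo "could not query iis-cr; check that you are logged in and NS=${NS} is correct"
    break
  fi
  if [ "$STATUS" = "Completed" ]; then
    echo "IIS reconciliation completed"
    break
  fi
  sleep 30
done
```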
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove", "path": "/spec/image_digests/catalog_api_image"},{ "op": "remove","path": "/spec/image_digests/catalog_api_aux_image"},{ "op": "remove","path": "/spec/image_digests/portal_job_manager_image"}]'
  2. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.8.0 cluster, complete the following two steps to re-sync processes and ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take a long time to complete, depending on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. You can specify the catalogs that the migration assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is done when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or logs:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search

      or:

      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
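Instead of polling pod status by hand, `oc wait` can block until each Job reports its Complete condition. The following is a minimal sketch: the job names come from step 3 above, but the namespace default and the 6-hour timeout are illustrative assumptions.

```shell
WKC_NAMESPACE="${WKC_NAMESPACE:-wkc}"   # assumed default; set to your WKC install namespace

if command -v oc >/dev/null 2>&1; then
  # Jobs expose a "Complete" condition once all their pods finish successfully.
  for job in wkc-search-reindexing-job wkc-search-lineage-job; do
    oc wait --for=condition=complete "job/${job}" \
      -n "${WKC_NAMESPACE}" --timeout=6h
  done
else
  echo "oc CLI not found; run these commands from a cluster workstation"
fi
```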
 
 
Cloud Pak for Data 4.6.x and 4.5.x patches (upgrades to Cloud Pak for Data 4.7.x)
 
 
Patch name: Legacy migration toolkit and IIS patches
Released on: 3 July 2024
Service assembly: wkc
Applies to service version
Watson Knowledge Catalog 4.5.x
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.5.x
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.5.x or 4.6.x to 4.7.x
Install instructions
Download the patch here.  

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
        "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.
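The auth values in this file are the base64 encoding of the literal string `<id>:<secret>`; for cp.icr.io the id is `cp` and the secret is your entitlement API key. A minimal sketch of producing the value, where `cp:key` is a dummy placeholder credential used only for illustration:

```shell
# base64-encode an "<id>:<secret>" pair for the auth.json "auth" field.
# "cp:key" is a placeholder, not a real credential; printf avoids a trailing newline.
AUTH=$(printf '%s' 'cp:key' | base64)
echo "$AUTH"   # → Y3A6a2V5
```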

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images that will be patched with this hotfix.
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]

    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Use the skopeo command, with the appropriate auth.json file, to copy the patch images from the IBM production registry to the local private (OpenShift cluster) registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2 \
      docker://<local private registry>/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551 \
      docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59 \
      docker://<local private registry>/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c \
      docker://<local private registry>/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7 \
      docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce \
      docker://<local private registry>/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/legacy-migration@sha256:e955b2258c9a98a33ba07d00603a6f3614c93a78d5a7011c0cf6763ae48b9916 \
      docker://<local private registry>/cp/cpd/legacy-migration@sha256:e955b2258c9a98a33ba07d00603a6f3614c93a78d5a7011c0cf6763ae48b9916
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49 \
      docker://<local private registry>/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4 \
      docker://<local private registry>/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4
    skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8 \
                                                                                                                                docker://<local private registry>/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8
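Because the same copy operation is repeated for every image, the per-image commands can also be generated with a small loop. The sketch below only prints the commands unless RUN=1 is set; AUTHFILE and LOCAL_REGISTRY are placeholder variables you must set for your environment, and the image list is a subset of the digests listed above.

```shell
#!/bin/sh
# Sketch: generate (or run) one skopeo copy command per patch image.
# AUTHFILE and LOCAL_REGISTRY are placeholders for your environment.
AUTHFILE="${AUTHFILE:-$HOME/.airgap/auth.json}"
LOCAL_REGISTRY="${LOCAL_REGISTRY:-registry.example.com:5000}"

IMAGES="
cp/cpd/legacy-migration@sha256:e955b2258c9a98a33ba07d00603a6f3614c93a78d5a7011c0cf6763ae48b9916
cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49
cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4
cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8
"

for img in $IMAGES; do
  cmd="skopeo copy --all --authfile $AUTHFILE --dest-tls-verify=false --src-tls-verify=false docker://cp.icr.io/$img docker://$LOCAL_REGISTRY/$img"
  if [ "${RUN:-0}" = "1" ]; then
    $cmd || exit 1   # stop on the first failed copy
  else
    echo "$cmd"      # dry run: just print the command
  fi
done
```

Run once without RUN to review the generated commands, then run with RUN=1 to execute the copies.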
To complete the installation, follow the steps in the next section.

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.
 
To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project (namespace) where Watson Knowledge Catalog is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.5.x or 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2","tag_metadata":"b71-migration-b54"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551","tag_metadata":"b71-migration-b54"},"iis_services_image":{"name":"is-services-image@sha256","tag":"a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59","tag_metadata":"b71-migration-b54"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
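Rather than rerunning the status command by hand, the wait can be scripted. A minimal sketch, assuming the IIS custom resource eventually reports a Completed status; the exact status field name can vary by version, so the oc query is left as a comment and replaced by a stub here to keep the sketch self-contained:

```shell
#!/bin/sh
# Sketch: poll until IIS reconciliation reports "Completed".
check_status() {
  # Real query (status field name may differ on your version):
  # oc get iis iis-cr -n "${PROJECT_CPD_INST_OPERANDS}" -o jsonpath='{.status.iisStatus}'
  echo "Completed"   # stub so this sketch runs without a cluster
}

until [ "$(check_status)" = "Completed" ]; do
  echo "Waiting for IIS reconciliation..."
  sleep 60
done
echo "IIS reconciliation completed"
```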
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.5.x or 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.5.x or 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Proceed to the section Upgrade the cluster to 4.7.x only after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.7.x
 
Note: Do not revert the IIS images on the IIS custom resource before performing the upgrade.
 
After upgrading to 4.7.x, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy_migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c","catalog_api_aux_image":"sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7","catalog_api_image":"sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce"}}}'
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:e955b2258c9a98a33ba07d00603a6f3614c93a78d5a7011c0cf6763ae48b9916","wdp_profiling_image":"sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49","wkc_mde_service_manager_image":"sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4","wkc_data_rules_image":"sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8"}}}'
  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects,
    run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the wkc operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration, wdp-profiling, wkc-mde-service-manager, and wkc-data-rules pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
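The two commands above can also be combined into a single loop. The pod names below are stubbed placeholders so the sketch is self-contained; in practice, replace the stub with the output of the commented oc command:

```shell
#!/bin/sh
# Sketch: restart every ngp-projects-api pod in one pass.
# Stubbed pod names; in practice use the commented oc command instead.
pods="ngp-projects-api-abc123 ngp-projects-api-def456"
# pods=$(oc get pods -n "${PROJECT_CPD_INST_OPERANDS}" \
#   -o custom-columns=POD:.metadata.name --no-headers | grep ngp-projects-api)

for p in $pods; do
  # Real call: oc delete pod -n "${PROJECT_CPD_INST_OPERANDS}" "$p"
  echo "would delete pod: $p"
done
```

Deleting the pods is safe here because they are managed by a deployment, which recreates them immediately.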
Applying a hotfix

Applying the ZEN hotfix  

Note: Applying this hotfix is only meant for users upgrading to Version 4.7.x.
 
The following section details how to apply the RSI patch that is needed later in the migration toolkit and patch process.
 
Applying the ZEN hotfix with the online IBM registry

For clusters configured to use the online IBM production registry, follow the steps below:
 
  1. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  2. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  3. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
        --namespace ${PROJECT_CPD_INST_OPERATORS} \
        --type='json' \
        --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
 
Applying the ZEN hotfix with a private container registry
 
For air-gapped clusters that need to download the amd64 image to the local private registry, follow the steps below:
 
  1. Export the following variables, setting values that contain the credentials to access icr.io and your local private registry. For example:

    export IBM_ENTITLEMENT_KEY=<IBM Entitlement API Key>
    export PRIVATE_REGISTRY_LOCATION=<Local private registry hostname>
    export PRIVATE_REGISTRY_PUSH_USER=<Private Registry login username>
    export PRIVATE_REGISTRY_PUSH_PASSWORD=<Private Registry login password>
  2. Using the environment variables exported above, copy the patch image from the IBM production registry to your local private registry:

    skopeo login cp.icr.io -u cp -p ${IBM_ENTITLEMENT_KEY}
    skopeo login ${PRIVATE_REGISTRY_LOCATION} -u ${PRIVATE_REGISTRY_PUSH_USER} -p ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    skopeo copy --all --dest-tls-verify=false --src-tls-verify=false \
        docker://icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50 \
        docker://${PRIVATE_REGISTRY_LOCATION}/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50
  3. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  4. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  5. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
        --namespace ${PROJECT_CPD_INST_OPERATORS} \
        --type='json' \
        --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
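After patching, it is worth confirming that the CSV now points at the hotfix image. The sketch below reads back the container image and compares it to the hotfix digest; the oc call is commented out and replaced with a stub so the sketch runs without a cluster, and the jsonpath assumes the same deployment/container index 0 that the patch command targets.

```shell
#!/bin/sh
# Sketch: verify that the ZEN operator CSV carries the hotfix image.
EXPECTED="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
# ACTUAL=$(oc get csv ibm-zen-operator.v5.0.2 -n "${PROJECT_CPD_INST_OPERATORS}" \
#   -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].image}')
ACTUAL="$EXPECTED"   # stub; replace with the commented oc call above
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "hotfix image applied"
else
  echo "image mismatch: $ACTUAL"
fi
```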
 
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on the scaleconfig size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the PJM (portal-job-manager) resource limit through the RSI patch from the previous section.
    1. Create a file named specpatch.json and save it in the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or in the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory first if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is a spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call can fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Apply a fix for a router issue:
    1. Apply the router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
              "extension_point_id": "zen_front_door",
              "extension_name": "wkc-routes-extn-5588",
              "details": {
                "location_conf": "wkc-routes-extn.conf"
              }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace $WKC_NAMESPACE with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This is a spec patch for scaling PJM",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "8Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section in the output of oc get ccs ccs-cr -o yaml:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section in the output of oc get wkc wkc-cr -o yaml:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588

      Then check whether the Status has changed from Inprogress to Completed.
 
For medium-sized clusters
  1. Increase the PJM resource limits through an RSI patch, similar to the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
                                                                                                                                                      
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists from an earlier call and does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.
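Before running create-rsi-patch in step 3, you can optionally sanity-check that specpatch.json is valid JSON and that every entry is a complete replace operation. This is a minimal sketch, not part of the official tooling; the helper name is illustrative:

```python
import json

def validate_rsi_patch(path):
    """Sanity-check that an RSI spec patch file is a non-empty JSON list of
    complete JSON Patch 'replace' operations on the first container's resources."""
    with open(path) as f:
        ops = json.load(f)
    if not isinstance(ops, list) or not ops:
        raise ValueError("patch must be a non-empty JSON list")
    for op in ops:
        if op.get("op") != "replace":
            raise ValueError(f"unexpected op: {op.get('op')}")
        if not op.get("path", "").startswith("/spec/containers/0/resources/"):
            raise ValueError(f"unexpected path: {op.get('path')}")
        if "value" not in op:
            raise ValueError(f"missing value for {op['path']}")
    return ops

# Example (path as documented above):
# validate_rsi_patch("cpd-cli-workspace/olm-utils-workspace/work/rsi/specpatch.json")
```

Catching a malformed file here is cheaper than diagnosing a failed create-rsi-patch call afterwards.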

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
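The inline JSON in the oc patch command above is easy to mistype. As an optional sketch (not part of the documented procedure), you can build the same merge-patch payload in a short script and pass its output to oc patch --patch "$(python3 gen_ccs_patch.py)" — the script name is hypothetical:

```python
import json

# Merge patch mirroring the oc patch command above: temporarily disable
# publishing features and raise resource limits during migration.
ccs_patch = {
    "spec": {
        "catalog_api_jvm_args_extras": (
            "-Dfeature.disable_lineage_publishing=true "
            "-Dfeature.disable_rabbitmq_publishing=true "
            "-Dfeature.fetch_stale_data_from_couch_db=true"
        ),
        "catalog_api_properties_enable_activity_tracker_publishing": "false",
        "catalog_api_properties_enable_global_search_publishing": "false",
        "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false",
        "catalog_api_properties_global_call_logs": "false",
        "couchdb_search_resources": {
            "requests": {"cpu": "250m", "memory": "256Mi"},
            "limits": {"cpu": "2", "memory": "3Gi"},
        },
        "dap_base_asset_files_resources": {
            "limits": {"cpu": "4", "memory": "12Gi"},
        },
    }
}

print(json.dumps(ccs_patch))
```

Generating the payload this way guarantees the JSON is well-formed and keeps the quoted "false" strings (which the CR expects as strings, not booleans) intact.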
                                                                                                                                    
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                                    
  4. Apply a router issue fix 
    1. Apply a router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace $WKC_NAMESPACE with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
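Since the YAML template contains a $WKC_NAMESPACE placeholder, the substitution can be scripted instead of done by hand. A minimal sketch using Python's string.Template; the file handling is illustrative, and safe_substitute deliberately leaves the nginx variables ($host, $nsdomain) in the conf section untouched:

```python
import os
from string import Template

def render_namespace(template_text, namespace):
    """Fill in $WKC_NAMESPACE; safe_substitute leaves unmatched
    placeholders such as $host and $nsdomain as-is."""
    if not namespace:
        raise ValueError("WKC_NAMESPACE is not set")
    return Template(template_text).safe_substitute(WKC_NAMESPACE=namespace)

# Example usage (then apply with: oc apply -f ./zenextension_wkc-routes-change.yaml):
# text = open("zenextension_wkc-routes-change.yaml").read()
# rendered = render_namespace(text, os.environ.get("WKC_NAMESPACE", ""))
```

This avoids accidentally clobbering the nginx variables, which a blanket search-and-replace on "$" could do.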
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "12Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section in the output of oc get ccs ccs-cr -o yaml:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section in the output of oc get wkc wkc-cr -o yaml:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check that the Status has changed from Inprogress to Completed.
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment by running:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.5 or 4.6 patches if you are still preparing and testing the migration.
Or continue with the steps outlined in Applying Version 4.7 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, complete the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized for the cluster. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized for the cluster. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 zenextension must be removed before the next update.
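The router rollback step above does not include a command. One plausible way to remove the zenextension, sketched here as a dry run that prints the command for review before you run it (wkc is only a placeholder namespace; substitute your own WKC_NAMESPACE):

```shell
# Hedged sketch: print the removal command for the wkc-routes-5588
# zenextension rather than executing it directly. Review the printed
# command, then run it (or pipe it to sh) once the namespace is correct.
cmd="oc delete zenextension wkc-routes-5588 -n ${WKC_NAMESPACE:-wkc}"
echo "$cmd"
```

Printing the command first keeps the rollback reviewable, which matters because removing the wrong zenextension would disturb routing for other services.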
Reverting changes for medium-scale clusters
After the migration, complete the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.
    Note: Double-check whether any of the following parameters were already customized for the cluster. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized for the cluster. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 zenextension must be removed before the next update.
  5. Scale the PJM replicas back up by scaling up the portal-job-manager deployment:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes

If needed, the IIS image overrides can be removed by following the steps below. Note: The migration toolkit does not need to be reverted.
 
To revert the IIS image overrides, proceed with the following steps. Note that ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      tag_metadata: b71-migration-b54
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
      tag_metadata: b71-migration-b54
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      tag_metadata: b71-migration-b54
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --namespace ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove", "path": "/spec/image_digests/catalog_api_image"},{ "op": "remove","path": "/spec/image_digests/catalog_api_aux_image"},{ "op": "remove","path": "/spec/image_digests/portal_job_manager_image"}]'
  2. Run the following command to remove image updates from the WKC custom resource:

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --namespace ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove","path": "/spec/image_digests/wkc_data_rules_image"},{ "op": "remove","path": "/spec/image_digests/wdp_profiling_image"},{ "op": "remove","path": "/spec/image_digests/wkc_mde_service_manager_image"}]'
  3. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
  4. Wait for the WKC operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the wkc-data-rules, wdp-profiling, and wkc-mde-service-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.7.x clusters, complete the following two steps to re-sync processes so that the imported assets are synchronized within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete, depending on how many assets were re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog or catalogs that the migrated assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
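The pod-status check in step 1 can be reduced to a single pending-pod count. A minimal sketch against canned oc get pod output (on a live cluster, pipe in the output of the oc get command above instead; the pod names and columns below are illustrative):

```shell
# Hedged sketch: count re-sync job pods that have not yet reached the
# Completed state. The canned sample stands in for the output of
# `oc get pod -n $WKC_NAMESPACE | grep wkc-search`.
sample='wkc-search-reindexing-job-abcde   0/1   Completed   0   10m
wkc-search-lineage-job-fghij      1/1   Running     0   10m'
pending=$(printf '%s\n' "$sample" | grep -cv 'Completed')
echo "$pending re-sync pod(s) still running"
```

When the count reaches zero, both jobs have finished and the re-sync is complete.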
 
 
Cloud Pak for Data 4.6.x and 4.5.x patches (upgrades to Cloud Pak for Data 4.7.3/4.7.4), version 8
 
Patch name: Legacy migration toolkit and IIS patches
Released on: 3 July 2024
Service assembly: wkc
Applies to service version:
Watson Knowledge Catalog 4.5.x
Watson Knowledge Catalog 4.6.x
Applies to platform version:
Cloud Pak for Data 4.5.x
Cloud Pak for Data 4.6.x
Description:
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.5.x or 4.6.x to 4.7.3/4.7.4.
Install instructions
Download patch legacy-migration-patch_508.  

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> is the pod name for any of the images that will be patched with this hotfix.
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
        - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Using the appropriate auth.json file, use the skopeo command to copy the patch images from the IBM production registry to the local private registry on the OpenShift cluster:

    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2 \
      docker://<local private registry>/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551 \
      docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59 \
      docker://<local private registry>/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c \
      docker://<local private registry>/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7 \
      docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce \
      docker://<local private registry>/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/legacy-migration@sha256:e955b2258c9a98a33ba07d00603a6f3614c93a78d5a7011c0cf6763ae48b9916 \
      docker://<local private registry>/cp/cpd/legacy-migration@sha256:e955b2258c9a98a33ba07d00603a6f3614c93a78d5a7011c0cf6763ae48b9916
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49 \
      docker://<local private registry>/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4 \
      docker://<local private registry>/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4
    skopeo copy --all --authfile "<folder path>/auth.json" \
      --dest-tls-verify=false --src-tls-verify=false \
      docker://cp.icr.io/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8 \
      docker://<local private registry>/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8
To complete the installation, follow the steps in the next section.
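Two mechanical details in the steps above can be sanity-checked locally before touching the cluster: the auth values in auth.json are base64-encoded id:secret pairs, and each skopeo copy rewrites only the registry host while keeping the repository path and digest. A minimal sketch, where 'cp:myapikey' and registry.example.com:5000 are placeholders rather than real credentials or hosts:

```shell
# Hedged sketch with placeholder values only.

# An auth.json "auth" value is the base64 encoding of "id:secret".
auth_value=$(printf '%s' 'cp:myapikey' | base64)
echo "$auth_value"

# A skopeo copy destination swaps only the cp.icr.io host for the
# private registry; the repository path and sha256 digest are unchanged.
PRIVATE_REGISTRY_LOCATION='registry.example.com:5000'
src='cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59'
dest="${PRIVATE_REGISTRY_LOCATION}${src#cp.icr.io}"
echo "$dest"
```

Because the digest is preserved, the mirrored image is bit-for-bit the image the patch commands later reference.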

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.
 
To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.5.x or 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2","tag_metadata":"b71-migration-b54"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551","tag_metadata":"b71-migration-b54"},"iis_services_image":{"name":"is-services-image@sha256","tag":"a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59","tag_metadata":"b71-migration-b54"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
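To confirm that the pods came up with the patched images, you can compare the digest portion of a pod's image reference against the expected patch digest. A minimal sketch against a canned image string; on a live cluster the string would come from something like oc get pod <pod> -o jsonpath='{.spec.containers[0].image}':

```shell
# Hedged sketch: extract the digest after the '@' and compare it with
# the is-services patch digest used in step 1. The image string below is
# canned for illustration.
image='cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59'
expected='sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59'
actual="${image##*@}"   # strip everything up to the last '@'
if [ "$actual" = "$expected" ]; then echo "patched"; else echo "not patched"; fi
```

The same comparison works for the conductor and compute images by substituting their digests from step 1.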
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.5.x or 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.5.x or 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.7.x after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.7.x
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.7.x, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy_migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c","catalog_api_aux_image":"sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7","catalog_api_image":"sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce"}}}'
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:e955b2258c9a98a33ba07d00603a6f3614c93a78d5a7011c0cf6763ae48b9916","wdp_profiling_image":"sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49","wkc_mde_service_manager_image":"sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4","wkc_data_rules_image":"sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8"}}}'
                                                                                                                            
  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the WKC operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration, wdp-profiling, wkc-mde-service-manager, and wkc-data-rules pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
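The get-then-delete sequence in step 7 can be scripted as a dry run that prints one delete command per pod. A minimal sketch using canned pod names in place of the real oc get output (add the -n ${PROJECT_CPD_INST_OPERANDS} flag as in step 7 when running for real):

```shell
# Hedged sketch: build delete commands from a pod-name list. On a live
# cluster, replace the canned list with the output of the oc get command
# in step 7; review the printed commands, then pipe them to sh.
pods='ngp-projects-api-abc123
ngp-projects-api-def456'
cmds=$(printf '%s\n' "$pods" | sed 's/^/oc delete pod /')
printf '%s\n' "$cmds"
```

Deleting the pods is safe here because the deployment recreates them, which is exactly the restart step 7 asks for.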
Applying a hotfix

Applying the ZEN hotfix  

Note: Applying this hotfix is only meant for users upgrading to Version 4.7.x.
 
The following section details how to apply the RSI patch needed later on in the migration toolkit and patch process.
 
Applying the ZEN hotfix with the online IBM registry

For clusters configured to use the online IBM production registry, follow the steps below:
 
  1. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  2. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  3. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
      --namespace ${PROJECT_CPD_INST_OPERATORS} \
      --type='json' \
      --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
 
Applying the ZEN hotfix with a private container registry
 
For air-gapped clusters that need to mirror the amd64 hotfix image to a local private registry, follow the steps below:
 
  1. Export the following environment variables, setting the values to the credentials used to access cp.icr.io and your local private registry. For example:

    export IBM_ENTITLEMENT_KEY=<IBM Entitlement API Key>
    export PRIVATE_REGISTRY_LOCATION=<Local private registry hostname>
    export PRIVATE_REGISTRY_PUSH_USER=<Private registry login username>
    export PRIVATE_REGISTRY_PUSH_PASSWORD=<Private registry login password>
  2. Using the environment variables exported above, copy the hotfix image from the IBM production registry to your local private registry:

    skopeo login cp.icr.io -u cp -p ${IBM_ENTITLEMENT_KEY}
    skopeo login ${PRIVATE_REGISTRY_LOCATION} -u ${PRIVATE_REGISTRY_PUSH_USER} -p ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    skopeo copy --all --dest-tls-verify=false --src-tls-verify=false \
      docker://icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50 \
      docker://${PRIVATE_REGISTRY_LOCATION}/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50
  3. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  4. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  5. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
      --namespace ${PROJECT_CPD_INST_OPERATORS} \
      --type='json' \
      --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
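After patching (with either the online or the private registry procedure), it is worth confirming that the CSV now references the hotfix image. This check is not part of the documented procedure; `verify_zen_hotfix` is a hypothetical helper, and it assumes `PROJECT_CPD_INST_OPERATORS` and `ZEN_OPERATOR_HOTFIX_IMAGE_VALUE` are exported as in the steps above:

```shell
# Hypothetical check: read back the operator image from the CSV and compare it
# to the hotfix image that was just patched in.
verify_zen_hotfix() {
  current=$(oc get csv ibm-zen-operator.v5.0.2 \
    -n "${PROJECT_CPD_INST_OPERATORS}" \
    -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].image}')
  if [ "$current" = "${ZEN_OPERATOR_HOTFIX_IMAGE_VALUE}" ]; then
    echo "hotfix image is in place"
  else
    echo "unexpected image: $current" >&2
    return 1
  fi
}
```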
 
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, tune your cluster based on its scaleConfig size.
 

Tuning for export

Tuning medium and large clusters for export
  1. Edit the iis-cr:

    oc edit iis iis-cr
  2. Search for the ignoreForMaintenance flag and change it to true:

    ignoreForMaintenance: true
  3. Increase the Java heap size:
    1. Edit the iis-server config map to change the maximum Java heap size of the iis-services pod:

      oc edit cm iis-server
    2. Search for -Xmx.
    3. Change the default value from -Xmx8192m to -Xmx16384m. This sets the maximum heap size to 16 GB for mid-size and large clusters.
  4. Max objects in memory:
    1. Log in to the iis-services pod.
    2. Increase the max objects in memory for mid-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 5000000
    3. Increase the max objects in memory for large-size clusters:

      /opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.gov.vr.setting.maxObjectsInMemory -value 10000000
  5. Change the limits in the iis-services deployment:
    1. For mid-size clusters:
      1. Run the following:

        oc edit deploy iis-services

        Search for "limits" and change:

        limits:
          cpu: "4"
          memory: 8Gi

        To:

        limits:
          cpu: "8"
          memory: 16Gi

    2. For large-size clusters:

      Run the following:

      oc edit deploy iis-services

      Search for "limits" and change:

      limits:
        cpu: "4"
        memory: 8Gi

      To:

      limits:
        cpu: "16"
        memory: 32Gi
  6. Check which worker node the iis-services pod is scheduled on:

    oc get pods -o wide | grep iis-services
  7. Make sure that the worker node has sufficient resources:

    oc adm top nodes


    If CPU and memory usage are less than 80 percent, leave everything as is.
    If CPU or memory usage is more than 80 percent, continue with the following steps.
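The 80-percent check can be scripted by parsing the `oc adm top nodes` output. The sketch below assumes the standard column layout (NAME, CPU(cores), CPU%, MEMORY(bytes), MEMORY%); `busy_nodes` is a hypothetical helper, not part of the documented procedure:

```shell
# Hypothetical helper: read `oc adm top nodes` output on stdin and print the
# names of nodes whose CPU% or MEMORY% is at or above 80 percent.
busy_nodes() {
  awk 'NR > 1 {
    cpu = $3; mem = $5
    sub(/%/, "", cpu); sub(/%/, "", mem)   # strip the percent signs
    if (cpu + 0 >= 80 || mem + 0 >= 80) print $1
  }'
}

# Usage: oc adm top nodes | busy_nodes
```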

  8. Choose one node that has more free memory and CPU. In this example, worker3 has more free CPU and memory, and the iis-services pod is on worker4:
    1. Cordon all of the other nodes, except for the worker node with the free memory and CPU, by using the following commands:

      oc adm cordon worker1
      oc adm cordon worker2
      oc adm cordon worker4
  9. Delete the iis-services pod to push this pod to worker3:

    oc delete pod iis-services-xxxxx


    This schedules the iis-services pod onto worker3.

  10. Once the iis-services pod is on worker3, cordon worker3 and uncordon all the other worker nodes to make sure that no other pod is scheduled on worker3:

    oc adm cordon worker3
    oc adm uncordon worker1
    oc adm uncordon worker2
    oc adm uncordon worker4
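With more than a few workers, cordoning every node except the target by hand is error-prone. The steps above can be sketched as a loop; `cordon_all_except` is a hypothetical helper, not part of the documented procedure:

```shell
# Hypothetical helper: cordon every node except the target, so that the
# rescheduled iis-services pod can only land on the node with free capacity.
cordon_all_except() {
  target="$1"
  for node in $(oc get nodes -o name | sed 's#^node/##'); do
    [ "$node" = "$target" ] || oc adm cordon "$node"
  done
}

# Usage: cordon_all_except worker3
```

Remember to uncordon the other nodes again once the pod is running on the target node.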
 
 

Tuning for import

Run the following commands to save your current cluster CCS and WKC settings:

oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the PJM resource limit through the RSI patch from the previous section.
    1. Create a file named specpatch.json and save it in the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or in the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if the olm-utils-play-v2 container already existed before the call, because that container does not have the /tmp/work/rsi/specpatch.json file mounted. If it does fail, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.
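A malformed specpatch.json tends to surface only later, as an opaque create-rsi-patch failure, so it is worth validating the file as soon as it is written. The sketch below writes the patch from step 1 into the default workspace path and checks that it parses as JSON; it assumes python3 is available and uses $CPD_CLI_MANAGE_WORKSPACE only if you configured a custom workspace directory:

```shell
# Sketch: write specpatch.json into the RSI workspace directory and confirm
# that it parses as JSON before running create-rsi-patch.
RSI_DIR="${CPD_CLI_MANAGE_WORKSPACE:-cpd-cli-workspace/olm-utils-workspace}/work/rsi"
mkdir -p "${RSI_DIR}"
cat > "${RSI_DIR}/specpatch.json" <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
EOF
# json.tool exits non-zero on a parse error, catching typos early.
python3 -m json.tool "${RSI_DIR}/specpatch.json" > /dev/null \
  && echo "specpatch.json is valid JSON"
```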

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Apply a router issue fix 
    1. Apply the router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace $WKC_NAMESPACE with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "8Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

      Limits:
        cpu:                2
        ephemeral-storage:  8Gi
        memory:             8Gi
      Requests:
        cpu:                30m
        ephemeral-storage:  10Mi
        memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

      catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        -Dfeature.fetch_stale_data_from_couch_db=true
      catalog_api_properties_enable_activity_tracker_publishing: "false"
      catalog_api_properties_enable_global_search_publishing: "false"
      catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
      catalog_api_properties_global_call_logs: "false"
      couchdb_search_resources:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: 250m
          memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

      wkc_data_rules_resources:
        limits:
          ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588

      Then check that the Status has changed from Inprogress to Completed.
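The oc patch commands used throughout these steps pass --type merge, which follows JSON merge patch semantics (RFC 7386): keys in the patch overwrite or recurse into the matching keys of the CR spec, and keys the patch does not mention are left untouched. The following Python sketch is illustrative only (it is not how oc is implemented, and the sample spec values are hypothetical); it shows why patching wkc_data_rules_resources adds the new ephemeral-storage limit without disturbing other spec fields:

```python
def merge_patch(target, patch):
    # RFC 7386: recurse into nested dicts, overwrite everything else;
    # a None value in the patch deletes the corresponding key.
    if not isinstance(patch, dict):
        return patch
    if not isinstance(target, dict):
        target = {}
    for key, value in patch.items():
        if value is None:
            target.pop(key, None)
        else:
            target[key] = merge_patch(target.get(key), value)
    return target

# Hypothetical existing CR spec fragment before patching.
spec = {"wkc_data_rules_resources": {"limits": {"cpu": "1"}},
        "other_setting": "unchanged"}
patch = {"wkc_data_rules_resources": {"limits": {"ephemeral-storage": "2Gi"}}}
result = merge_patch(spec, patch)
# The new limit is merged in; the existing cpu limit and other settings remain.
print(result)
```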
 
For medium-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [
        {"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
        {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
        {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}
      ]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call can fail if an olm-utils-play-v2 container was already running before the specpatch.json file was created, because that container does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
                                                                                                                                                                
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                                                                
  4. Apply a router issue fix
    1. Apply a router fix by creating a file named zenextension_wkc-routes-change.yaml with the following content:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }

      Replace $WKC_NAMESPACE with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "12Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

      catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true
      catalog_api_properties_enable_activity_tracker_publishing: "false"
      catalog_api_properties_enable_global_search_publishing: "false"
      catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
      catalog_api_properties_global_call_logs: "false"
      couchdb_search_resources:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: 250m
          memory: 256Mi
      dap_base_asset_files_resources:
        limits:
          cpu: "4"
          memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

      wkc_data_rules_resources:
        limits:
          ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588

      Then check that the Status has changed from Inprogress to Completed.
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager replicas:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
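The specpatch.json used for the RSI patch above is a JSON Patch (RFC 6902) document: each replace operation walks a path into the pod spec and swaps the value at the final key. The following Python sketch is illustrative only; the sample pod spec and its starting limits are hypothetical, and RSI applies the real patch inside the cluster:

```python
def apply_replace_ops(doc, ops):
    # Walk each path segment (dict keys or list indices) down to the
    # parent of the target, then replace the value at the final key.
    for op in ops:
        assert op["op"] == "replace"
        parts = op["path"].strip("/").split("/")
        node = doc
        for part in parts[:-1]:
            node = node[int(part)] if isinstance(node, list) else node[part]
        node[parts[-1]] = op["value"]
    return doc

# Hypothetical portal-job-manager pod spec before patching.
pod = {"spec": {"containers": [{"name": "portal-job-manager",
                                "resources": {"limits": {"cpu": "1",
                                                         "memory": "4Gi",
                                                         "ephemeral-storage": "2Gi"}}}]}}
ops = [
    {"op": "replace", "path": "/spec/containers/0/resources/limits/cpu", "value": "2"},
    {"op": "replace", "path": "/spec/containers/0/resources/limits/memory", "value": "8Gi"},
    {"op": "replace", "path": "/spec/containers/0/resources/limits/ephemeral-storage", "value": "12Gi"},
]
print(apply_replace_ops(pod, ops)["spec"]["containers"][0]["resources"]["limits"])
# → {'cpu': '2', 'memory': '8Gi', 'ephemeral-storage': '12Gi'}
```

This is why the confirmation step checks the pod rather than the deployment: the RSI patch rewrites the pod spec directly through those three paths.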
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.5 or 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.7 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for migration

Note: After the migration completes, or if the migration fails, you need to revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized for the cluster before migration. If you need any specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already set on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 needs to be removed before the next update.
Reverting changes for medium-scale clusters
After the migration, you will need to run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. The effect of this command will take place after CCS has been taken out of maintenance mode in step 6.  
    Note: Double-check whether any of the following parameters were already set on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already set on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 needs to be removed before the next update.
  5. Scale the replicas back up, starting with the portal-job-manager deployment:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
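The maintenance-mode commands used above differ only in the boolean value, so as an optional convenience they can be wrapped in a small shell helper. This is a sketch, not part of the patch procedure: ccs_maintenance is a hypothetical name, the wkc namespace default is an assumption, and the echo prefix prints the command instead of running it (remove echo to apply it against the cluster).

```shell
# Hypothetical helper to toggle CCS maintenance mode. The 'echo' prefix
# prints the command instead of running it; drop 'echo' to apply for real.
ccs_maintenance() {
  # $1 must be 'true' or 'false'
  echo oc patch -n "${WKC_NAMESPACE:-wkc}" ccs ccs-cr --type merge \
    --patch "{\"spec\": {\"ignoreForMaintenance\": $1}}"
}

ccs_maintenance false   # leave maintenance mode, as in step 6 above
```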
     
 
Reverting the IIS image changes  

Follow these steps to revert the IIS image patches. Note: The migration toolkit does not need to be reverted.
 
To revert the IIS image overrides, proceed with the following steps. Note that ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      tag_metadata: b71-migration-b54
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
      tag_metadata: b71-migration-b54
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      tag_metadata: b71-migration-b54
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove", "path": "/spec/image_digests/catalog_api_image"},{ "op": "remove","path": "/spec/image_digests/catalog_api_aux_image"},{ "op": "remove","path": "/spec/image_digests/portal_job_manager_image"}]'
  2. Run the following command to remove image updates from the WKC custom resource:

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove","path": "/spec/image_digests/wkc_data_rules_image"},{ "op": "remove","path": "/spec/image_digests/wdp_profiling_image"},{ "op": "remove","path": "/spec/image_digests/wkc_mde_service_manager_image"}]'
  3. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
  4. Wait for the WKC operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the wkc-data-rules, wdp-profiling, and wkc-mde-service-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
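The "wait for the operator reconciliation" checks above (and later in this document) can be scripted as a simple polling loop. The following is an optional sketch: wait_for_status is a hypothetical helper, and the assumption that the oc get output eventually contains a string such as Completed depends on your operator's status column.

```shell
# Hypothetical polling helper: run a command repeatedly until its output
# contains the wanted string, or give up after a number of tries.
wait_for_status() {
  cmd="$1"; want="$2"; tries="${3:-60}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if eval "$cmd" | grep -q "$want"; then
      return 0                      # wanted status reached
    fi
    i=$((i + 1))
    sleep 5                         # wait between polls
  done
  return 1                          # timed out
}

# Usage against a cluster (assumes the status column reports 'Completed'):
# wait_for_status 'oc get ccs ccs-cr -n "$PROJECT_CPD_INST_OPERANDS"' Completed
```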
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.7.x cluster, complete the following two steps to re-sync processes and ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take a long time to complete, depending on how many assets were re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalogs into which you imported the migrated assets, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress through the related pod status or logs by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
 
Cloud Pak for Data 4.6.x and 4.5.x patches (upgrades to Cloud Pak for Data 4.7.3/4.7.4 version 7)
 
 
Patch name: Legacy migration toolkit and IIS patches
Released on: June 2024
Service assembly: wkc
Applies to service version: Watson Knowledge Catalog 4.5.x, Watson Knowledge Catalog 4.6.x
Applies to platform version: Cloud Pak for Data 4.5.x, Cloud Pak for Data 4.6.x
Description: This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.5.x or 4.6.x to 4.7.3/4.7.4
Install instructions
Download the patch here.  

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
        "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the local private registry path to which the hotfix images should be copied, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Using the appropriate auth.json file, use skopeo to copy the patch images from the IBM production registry to the local private registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:c48ae7c6b342ec3279883e77238015d9b96263c91c5c0aafbdb148c9e78a2bf6 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:c48ae7c6b342ec3279883e77238015d9b96263c91c5c0aafbdb148c9e78a2bf6
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49 \
        docker://<local private registry>/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4 \
        docker://<local private registry>/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8 \
        docker://<local private registry>/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8
To complete the installation, follow the steps in the next section.
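The skopeo copy commands above all follow the same pattern, so they can optionally be driven from a list. The sketch below is illustrative only: it covers the first two image references from the list above (extend IMAGES with the remaining entries), the registry placeholders are the same ones used above, and the echo prefix prints each command instead of running it (drop echo to copy for real).

```shell
# Sketch: batch the repeated skopeo copies over a list of image references
# taken from the steps above. Placeholders must be filled in first.
AUTHFILE="<folder path>/auth.json"
DEST_REGISTRY="<local private registry>"
IMAGES="is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551"

copy_images() {
  for img in $IMAGES; do
    # Drop 'echo' to perform the copies for real.
    echo skopeo copy --all --authfile "$AUTHFILE" \
      --dest-tls-verify=false --src-tls-verify=false \
      "docker://cp.icr.io/cp/cpd/$img" \
      "docker://$DEST_REGISTRY/cp/cpd/$img"
  done
}

copy_images
```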

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.
 
To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch on a 4.5.x or 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2","tag_metadata":"b71-migration-b54"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551","tag_metadata":"b71-migration-b54"},"iis_services_image":{"name":"is-services-image@sha256","tag":"a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59","tag_metadata":"b71-migration-b54"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.5.x or 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.5.x or 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.7.3/4.7.4 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.7.3/4.7.4
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.7.3/4.7.4, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy_migration image is downloaded to the local registry. See the air-gapped download steps in one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c","catalog_api_aux_image":"sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7","catalog_api_image":"sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce"}}}'
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:c48ae7c6b342ec3279883e77238015d9b96263c91c5c0aafbdb148c9e78a2bf6","wdp_profiling_image":"sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49","wkc_mde_service_manager_image":"sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4","wkc_data_rules_image":"sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8"}}}'
                                                                                                                            
  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects,
    run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the WKC operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration, wdp-profiling, wkc-mde-service-manager, and wkc-data-rules pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
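The per-pod restart in step 7 can optionally be looped over all matching pods. The following is a sketch only: restart_pods is a hypothetical helper, the pod names shown are stand-ins for the output of the oc get command above, and the echo prefix prints each delete instead of running it (drop echo to delete for real).

```shell
# Sketch: restart each ngp-projects-api pod by deleting it; the deployment
# recreates the pods. Replace the stand-in names with real pod names.
restart_pods() {
  for pod in "$@"; do
    # Drop 'echo' to delete for real.
    echo oc delete pod -n "${PROJECT_CPD_INST_OPERANDS:-wkc}" "$pod"
  done
}

# Example with stand-in pod names:
restart_pods ngp-projects-api-abc12 ngp-projects-api-def34
```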
Applying a hotfix

Applying the ZEN hotfix  

Note: Applying this hotfix is only meant for users upgrading to Version 4.7.3/4.7.4.
 
The following section details how to apply the RSI patch needed later on in the migration toolkit and patch process.
 
Applying the ZEN hotfix with the online IBM registry

For clusters configured to use the online IBM production registry, follow the steps below:
 
  1. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
                                                                                                                            
  2. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
                                                                                                                            
  3. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
      --namespace ${PROJECT_CPD_INST_OPERATORS} \
      --type='json' \
      --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
 
Applying the ZEN hotfix with a private container registry
 
For air-gapped clusters that need to download the amd64 image to the local private registry, follow the steps below:
 
  1. Export the following variables, setting the values to your credentials for icr.io and your local private registry. For example:

    export IBM_ENTITLEMENT_KEY=<IBM Entitlement API Key>
    export PRIVATE_REGISTRY_LOCATION=<Local private registry hostname>
    export PRIVATE_REGISTRY_PUSH_USER=<Private Registry login username>
    export PRIVATE_REGISTRY_PUSH_PASSWORD=<Private Registry login password>
  2. Using the environment variables exported above, copy the hotfix image from the IBM production registry to the local private registry:

    skopeo login cp.icr.io -u cp -p ${IBM_ENTITLEMENT_KEY}
    skopeo login ${PRIVATE_REGISTRY_LOCATION} -u ${PRIVATE_REGISTRY_PUSH_USER} -p ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    skopeo copy --all --dest-tls-verify=false --src-tls-verify=false \
      docker://icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50 \
      docker://${PRIVATE_REGISTRY_LOCATION}/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50
  3. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  4. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  5. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
      --namespace ${PROJECT_CPD_INST_OPERATORS} \
      --type='json' \
      --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
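Both oc patch csv invocations above use --type='json', that is, RFC 6902 JSON Patch: each operation names a path into the resource, and the replace op swaps the value found at that path. As a rough local illustration only (the real patch is applied server-side by OpenShift, and the dictionary below is a hypothetical, trimmed-down ClusterServiceVersion), the replace semantics can be sketched in Python:

```python
def apply_replace_ops(doc, ops):
    """Apply RFC 6902 'replace' operations to a nested dict/list document."""
    for op in ops:
        assert op["op"] == "replace", "only 'replace' is sketched here"
        parts = op["path"].strip("/").split("/")
        target = doc
        for part in parts[:-1]:
            # Numeric path segments index lists; other segments index dicts
            target = target[int(part)] if isinstance(target, list) else target[part]
        last = parts[-1]
        if isinstance(target, list):
            target[int(last)] = op["value"]
        else:
            target[last] = op["value"]
    return doc

# Hypothetical, trimmed-down CSV structure (only the fields the patch touches)
csv = {"spec": {"install": {"spec": {"deployments": [
    {"spec": {"template": {"spec": {"containers": [
        {"image": "icr.io/cpopen/ibm-zen-operator:previous"}
    ]}}}}
]}}}}

ops = [{"op": "replace",
        "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/image",
        "value": "icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"}]
apply_replace_ops(csv, ops)
container = csv["spec"]["install"]["spec"]["deployments"][0]["spec"]["template"]["spec"]["containers"][0]
print(container["image"])  # prints the hotfix digest reference set by the patch
```

Because only the one container image field is named in the patch, every other field of the CSV is left untouched, which is why the operator deployment picks up the hotfix image without any other change.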
 
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scale configuration size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the PJM resource limit through the RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if the olm-utils-play-v2 container already exists and does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Apply a router issue fix 
    1. Apply the router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace $WKC_NAMESPACE with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "8Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check that the Status has changed from Inprogress to Completed.
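The oc patch ... --type merge commands in this section use JSON merge-patch semantics: objects in the patch are merged recursively into the existing spec, scalar values named in the patch are replaced, and fields the patch does not mention are left untouched. A minimal local sketch of that merge behavior (illustration only, ignoring null-deletion, and using a hypothetical pre-migration spec fragment; oc performs the real merge server-side):

```python
def merge_patch(target, patch):
    """Recursively merge a JSON merge patch into target (RFC 7386, sans null-deletes)."""
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            merge_patch(target[key], value)  # recurse into nested objects
        else:
            target[key] = value              # replace scalars, lists, and new keys
    return target

# Hypothetical pre-migration CCS spec fragment
spec = {"couchdb_search_resources": {"requests": {"cpu": "100m", "memory": "128Mi"},
                                     "limits": {"cpu": "1", "memory": "2Gi"}}}
# The couchdb_search_resources portion of the CCS patch in this section
patch = {"couchdb_search_resources": {"limits": {"cpu": "2", "memory": "3Gi"}}}
merge_patch(spec, patch)
# The requests block is untouched; only the limits named in the patch change
print(spec["couchdb_search_resources"])
```

This is why the CCS and WKC patches above can safely name only the migration-related fields: everything else in the CRs, including the settings you backed up earlier, is preserved.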
 
For medium-sized clusters
  1. Increase the PJM resource limit through the RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if the olm-utils-play-v2 container already exists and does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Apply a router issue fix 
    1. Apply a router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
                                                                                                                                                        wkc-routes-extn.conf: |-
                                                                                                                                                          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
                                                                                                                                                          location /metadata_enrichment/v3 {
                                                                                                                                                            proxy_set_header Host $host;
                                                                                                                                                            proxy_pass https://wkc-mde-service-manager-upstream;
                                                                                                                                                            proxy_ssl_verify       on;
                                                                                                                                                            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
                                                                                                                                                            proxy_ssl_protocols    TLSv1.2;
                                                                                                                                                            proxy_ssl_server_name  on;
                                                                                                                                                            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
                                                                                                                                                          }
                                                                                                                                                      
      Replace $WKC_NAMESPACE with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "12Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check that the Status has changed from Inprogress to Completed.
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
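The verification checks from step 5 above can be collected into a small dry-run helper. This is a minimal sketch, not part of the official procedure: it only prints the commands it would run, and it assumes WKC_NAMESPACE is exported (the app=portal-job-manager label comes from the pod_selector in the RSI patch output shown earlier):

```shell
#!/bin/sh
# Dry-run sketch: print the verification commands for the applied patches.
# Assumes WKC_NAMESPACE is exported; "wkc" below is only a placeholder default.
WKC_NAMESPACE="${WKC_NAMESPACE:-wkc}"

# Pod-level resource limits on portal-job-manager (checks the pod, not the deployment)
CHECK_LIMITS="oc -n ${WKC_NAMESPACE} get pod -l app=portal-job-manager -o jsonpath={.items[0].spec.containers[0].resources.limits}"

# RSI patch status
CHECK_RSI="./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all"

# Router change status (should move from Inprogress to Completed)
CHECK_ROUTE="oc -n ${WKC_NAMESPACE} get zenextension wkc-routes-5588"

for cmd in "$CHECK_LIMITS" "$CHECK_RSI" "$CHECK_ROUTE"; do
  echo "$cmd"
done
```

Review each printed command before running it against your cluster.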
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.5 or 4.6 patches if you are still preparing and testing the migration.  
Or continue with the steps outlined in Applying Version 4.7 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 extension must be removed before the next update.
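The small-scale rollback above can be scripted. This is a dry-run sketch, not the official procedure: each command is printed rather than executed, the CCS patch is abbreviated to a single path, and the oc delete zenextension command in step 4 is an assumption about how wkc-routes-5588 is removed — verify it in your environment:

```shell
#!/bin/sh
# Dry-run sketch of the small-scale rollback; each command is printed, not run.
# Replace the echo in run() with real execution once reviewed.
WKC_NAMESPACE="${WKC_NAMESPACE:-wkc}"
run() { echo "+ $*"; }

# 1. Disable the RSI patch for portal-job-manager
run ./cpd-cli manage create-rsi-patch --cpd_instance_ns="${WKC_NAMESPACE}" \
    --patch_name=pjm-scaling --state=inactive

# 2. Remove the CCS overrides (abbreviated here to one path; use the full
#    list of remove operations from the step above)
run oc patch -n "${WKC_NAMESPACE}" ccs ccs-cr --type json \
    -p '[{"op":"remove","path":"/spec/couchdb_search_resources"}]'

# 3. Remove the WKC resource-limit override
run oc patch -n "${WKC_NAMESPACE}" wkc wkc-cr --type json \
    -p '[{"op":"remove","path":"/spec/wkc_data_rules_resources"}]'

# 4. Remove the temporary router extension (assumed removal command)
run oc delete -n "${WKC_NAMESPACE}" zenextension wkc-routes-5588
```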
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.  
    Note: Double check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double check whether any of the following parameters were already customized on the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 extension must be removed before the next update.
  5. Scale the portal-job-manager deployment back up:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
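After maintenance mode is disabled, the CCS operator re-reconciles the removed overrides. A minimal sketch for watching that reconciliation (the .status.ccsStatus field name is an assumption — confirm it with oc get ccs ccs-cr -o yaml on your cluster; the command is printed, not executed):

```shell
#!/bin/sh
# Sketch: build the poll command for CCS reconciliation after maintenance
# mode is turned off. The status field path is an assumption; verify it first.
WKC_NAMESPACE="${WKC_NAMESPACE:-wkc}"
POLL_CMD="oc -n ${WKC_NAMESPACE} get ccs ccs-cr -o jsonpath={.status.ccsStatus}"
echo "Poll until Completed: ${POLL_CMD}"
# Once verified, a wait loop could look like:
#   until [ "$(${POLL_CMD})" = "Completed" ]; do sleep 60; done
```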
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed by following the steps below. Note: the migration toolkit itself does not need to be reverted.
 
In these steps, ${PROJECT_CPD_INST_OPERANDS} refers to the project where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      tag_metadata: b71-migration-b54
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
      tag_metadata: b71-migration-b54
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      tag_metadata: b71-migration-b54
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
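To confirm the pods came back on the original images, you can inspect the image each pod is running. A dry-run sketch (commands are printed, not executed; the grep on pod-name prefixes is an assumption based on the pod names listed above):

```shell
#!/bin/sh
# Sketch: build one image-inspection command per reverted IIS component.
# Assumes PROJECT_CPD_INST_OPERANDS is exported; "wkc" is only a placeholder.
NS="${PROJECT_CPD_INST_OPERANDS:-wkc}"
IMAGE_CMDS=""
for pod in iis-services is-en-conductor is-engine-compute; do
  IMAGE_CMDS="${IMAGE_CMDS}oc -n ${NS} describe pod \$(oc -n ${NS} get pods -o name | grep ${pod}) | grep -i image:
"
done
printf '%s' "$IMAGE_CMDS"
```

Each printed command resolves the pod name at run time and filters the describe output down to the Image: lines, so you can compare the digests against the originals.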
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove", "path": "/spec/image_digests/catalog_api_image"},{ "op": "remove","path": "/spec/image_digests/catalog_api_aux_image"},{ "op": "remove","path": "/spec/image_digests/portal_job_manager_image"}]'
  2. Run the following command to remove image updates from the WKC custom resource:

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove","path": "/spec/image_digests/wkc_data_rules_image"},{ "op": "remove","path": "/spec/image_digests/wdp_profiling_image"},{ "op": "remove","path": "/spec/image_digests/wkc_mde_service_manager_image"}]'
  3. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
  4. Wait for the WKC operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the wkc-data-rules, wdp-profiling, and wkc-mde-service-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.7.3/4.7.4 clusters, run through the following two steps to ensure the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. You can specify the catalog(s) that the migrated assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete once the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search

      or:

      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
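Instead of polling the pod list manually, the two re-sync jobs can also be watched with oc wait. A dry-run sketch (commands are printed, not executed; the 6h timeout is an arbitrary example, size it to your asset volume):

```shell
#!/bin/sh
# Sketch: build an 'oc wait' command for each re-sync job created by the
# cpd_gs_graph_resync.sh script. Assumes WKC_NAMESPACE is exported.
WKC_NAMESPACE="${WKC_NAMESPACE:-wkc}"
WAIT_CMDS=""
for job in wkc-search-reindexing-job wkc-search-lineage-job; do
  WAIT_CMDS="${WAIT_CMDS}oc wait -n ${WKC_NAMESPACE} --for=condition=complete job/${job} --timeout=6h
"
done
printf '%s' "$WAIT_CMDS"
```

Each command blocks until the corresponding Job reports the complete condition, which matches the pods reaching the Completed state.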
 
 
Cloud Pak for Data 4.6.x and 4.5.x patches (upgrades to Cloud Pak for Data 4.7.3/4.7.4 version 6)
 
 
Patch nameLegacy migration toolkit and IIS patches
Released onApril 2024
Service assemblywkc
Applies to service version
Watson Knowledge Catalog 4.5.x
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.5.x
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.5.x or 4.6.x to 4.7.3/4.7.4
Install instructions
Download the patch here.  

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Using the appropriate auth.json file, use skopeo to copy the patch images from the IBM production registry to the local private registry of the OpenShift cluster:

    skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2 \
                                                                                                                                docker://<local private registry>/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
                                                                                                                            skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551 \
                                                                                                                                docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
                                                                                                                            skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59 \
                                                                                                                                docker://<local private registry>/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
                                                                                                                            skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c \
                                                                                                                                docker://<local private registry>/cp/cpd/portal-job-manager@sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c
                                                                                                                            skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7 \
                                                                                                                                docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7
                                                                                                                            skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce \
                                                                                                                                docker://<local private registry>/cp/cpd/catalog_master@sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce
                                                                                                                            skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/legacy-migration@sha256:67e035d353eb2cb1ed37dd2634fe8ea06f74ba7a5675c2f17dd9a6a690096edd \
                                                                                                                                docker://<local private registry>/cp/cpd/legacy-migration@sha256:67e035d353eb2cb1ed37dd2634fe8ea06f74ba7a5675c2f17dd9a6a690096edd
                                                                                                                            skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49 \
                                                                                                                                docker://<local private registry>/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49
                                                                                                                            skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4 \
                                                                                                                                docker://<local private registry>/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4
                                                                                                                            skopeo copy --all --authfile "<folder path>/auth.json" \
                                                                                                                                --dest-tls-verify=false --src-tls-verify=false \
                                                                                                                                docker://cp.icr.io/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8 \
                                                                                                                                docker://<local private registry>/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8
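The repeated copies above can also be driven by a small loop over the image@digest references. This is only a sketch: just two of the ten images are listed, AUTHFILE and REGISTRY are placeholders, and the echo prints each command for review instead of running it.

```shell
# Print (not run) one skopeo copy command per image@digest reference.
# Remove the echo and extend the list to perform the real copies.
AUTHFILE="<folder path>/auth.json"     # placeholder
REGISTRY="<local private registry>"    # placeholder
for img in \
  "is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2" \
  "wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8"
do
  echo skopeo copy --all --authfile "$AUTHFILE" \
    --dest-tls-verify=false --src-tls-verify=false \
    "docker://cp.icr.io/cp/cpd/$img" \
    "docker://$REGISTRY/cp/cpd/$img"
done
```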
To complete the installation, follow the steps in the next section.

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.
 
To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.5.x or 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2","tag_metadata":"b71-migration-b54"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551","tag_metadata":"b71-migration-b54"},"iis_services_image":{"name":"is-services-image@sha256","tag":"a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59","tag_metadata":"b71-migration-b54"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.5.x or 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.5.x or 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.7.3/4.7.4 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.7.3/4.7.4
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.7.3/4.7.4, proceed with the following commands.
  1. If you are doing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry, as described in the download section above.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:3ade0c5babbb3fd58d60797540be53a34117d6c80c8fe7739ee9247e0f3f700c","catalog_api_aux_image":"sha256:d23d33a6b38ecc27f24302ebf6639240474e2e29cf33d9bbe6fdd645e83fdbc7","catalog_api_image":"sha256:59691feeedd54d5e0898f00a15fde65576aa578b0b53de400ff7969ea7560fce"}}}'
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:67e035d353eb2cb1ed37dd2634fe8ea06f74ba7a5675c2f17dd9a6a690096edd","wdp_profiling_image":"sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49","wkc_mde_service_manager_image":"sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4","wkc_data_rules_image":"sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8"}}}'

  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
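To make the 200-project rule concrete, here is a minimal arithmetic sketch; the counts are hypothetical, not taken from any real environment:

```shell
# Hypothetical counts; raise projects_created_per_user_limit only when
# the sum exceeds the default limit of 200.
DQ_PROJECTS_ON_SOURCE=150
PROJECTS_OWNED_BY_IMPORT_USER=120
NEEDED=$((DQ_PROJECTS_ON_SOURCE + PROJECTS_OWNED_BY_IMPORT_USER))
if [ "$NEEDED" -gt 200 ]; then
  echo "Set projects_created_per_user_limit to at least $NEEDED"
fi
```

With these sample counts, 150 + 120 = 270 exceeds 200, so the limit would need to be raised.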
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the wkc operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration, wdp-profiling, wkc-mde-service-manager, and wkc-data-rules pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
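The list-and-delete steps above can be combined into one pipeline. A sketch against sample input: the printf stands in for the oc get pods command, the pod names are made up, and the echo previews the delete command instead of running it.

```shell
# Feed pod names matching ngp-projects-api into a delete command.
# printf simulates the 'oc get pods' output; the pod names are made up.
NS="wkc"   # placeholder instance namespace
printf '%s\n' ngp-projects-api-abc12 ngp-projects-api-def34 wdp-profiling-0 \
  | grep ngp-projects-api \
  | xargs echo oc delete pod -n "$NS"
```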
Applying a hotfix

Applying the ZEN hotfix  

Note: Applying this hotfix is only meant for users upgrading to Version 4.7.3/4.7.4.
 
The following section details how to apply the RSI patch needed later on in the migration toolkit and patch process.
 
Applying the ZEN hotfix with the online IBM registry

For clusters configured to use the online IBM production registry, follow the steps below:
 
  1. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  2. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
                                                                                                                            
  3. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
      --namespace ${PROJECT_CPD_INST_OPERATORS} \
      --type='json' \
      --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
 
Applying the ZEN hotfix with a private container registry
 
For air-gapped clusters that need to download the amd64 image to the local private registry, follow the steps below:
 
  1. Export the following variables after setting the correct value that contains credentials to access icr.io and your local private registry. For example:

    export IBM_ENTITLEMENT_KEY=<IBM Entitlement API Key>
    export PRIVATE_REGISTRY_LOCATION=<Local private registry hostname>
    export PRIVATE_REGISTRY_PUSH_USER=<Private Registry login username>
    export PRIVATE_REGISTRY_PUSH_PASSWORD=<Private Registry login password>
  2. Using the environment variables exported above, copy the patch image from the IBM production registry to the OpenShift cluster registry:

    skopeo login cp.icr.io -u cp -p ${IBM_ENTITLEMENT_KEY}
    skopeo login ${PRIVATE_REGISTRY_LOCATION} -u ${PRIVATE_REGISTRY_PUSH_USER} -p ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    skopeo copy --all --dest-tls-verify=false --src-tls-verify=false \
      docker://icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50 \
      docker://${PRIVATE_REGISTRY_LOCATION}/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50
  3. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  4. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  5. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
      --namespace ${PROJECT_CPD_INST_OPERATORS} \
      --type='json' \
      --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
 
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scaleConfig size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the portal-job-manager (PJM) resource limits through the RSI patch from the previous section.
    1. Create a file named specpatch.json in the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or in $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists from a previous run, because that container does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Apply a router issue fix 
    1. Apply the router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace $WKC_NAMESPACE with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "8Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
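As a quick local sketch of what that check looks for, the following extracts the cpu limit from `oc describe`-style output. A sample Limits block is inlined so the snippet runs anywhere; in practice, pipe the output of `oc describe pod portal-job-manager-xxxxxxxxx-xxxxx` into the same filter:

```shell
# Sample of the Limits section printed by `oc describe pod` (inlined here
# so the sketch is self-contained; replace with the real command output).
sample_output=$(cat <<'EOF'
    Limits:
      cpu:                2
      ephemeral-storage:  8Gi
      memory:             8Gi
    Requests:
      cpu:                30m
EOF
)
# Print the first cpu value that appears after the Limits: heading.
cpu_limit=$(printf '%s\n' "$sample_output" | awk '/Limits:/{f=1} f && /cpu:/{print $2; exit}')
echo "cpu limit: $cpu_limit"
```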
    3. Confirm that the CCS CR patch is applied by checking the spec section in the output of oc get ccs ccs-cr -o yaml:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section in the output of oc get wkc wkc-cr -o yaml:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check that the Status has changed from In Progress to Completed.
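Rather than rerunning the status check by hand, you can poll until the extension reports Completed. A minimal sketch: the loop logic is generic, but the real check command and the exact status field name are assumptions to verify against your cluster; a stub that succeeds on the third call keeps the sketch runnable anywhere:

```shell
attempts=0
tries=0
status=""
check_status() {
  # Stub that flips to Completed on the third call. Against a real cluster,
  # replace the body with something like:
  #   status=$(oc get zenextension wkc-routes-5588 -o jsonpath='{.status.zenStatus}')
  # (the jsonpath is an assumption; inspect `oc get zenextension ... -o yaml` first).
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then status="Completed"; else status="InProgress"; fi
}
while [ "$status" != "Completed" ] && [ "$tries" -lt 10 ]; do
  check_status
  tries=$((tries + 1))
  # sleep 30   # uncomment when polling a real cluster
done
echo "final status: $status after $tries checks"
```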
 
For medium-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or under $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
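The step above can be scripted so that a malformed patch file is caught before cpd-cli ever sees it. A minimal sketch: the temporary directory stands in for the real workspace path, so point WORKDIR at cpd-cli-workspace/olm-utils-workspace/work/rsi/ (or $CPD_CLI_MANAGE_WORKSPACE/work/rsi) on a real system:

```shell
# WORKDIR is a stand-in for the cpd-cli workspace rsi directory.
WORKDIR=$(mktemp -d)
cat > "$WORKDIR/specpatch.json" <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
EOF
# Fail fast on malformed JSON before handing the file to cpd-cli.
python3 -m json.tool "$WORKDIR/specpatch.json" > /dev/null && echo "specpatch.json is valid JSON"
```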
                                                                                                                                                      
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. This call might fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then repeat the process from step 1 to log in again and restart the olm-utils-play-v2 container.
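The stop-and-retry logic above can be sketched as a small guard that only stops the container if one is actually running. Guarded with a `command -v` check so the snippet is a harmless no-op on hosts without podman:

```shell
# Stop a stale olm-utils-play-v2 container if one is running.
if command -v podman >/dev/null 2>&1; then
  if podman ps --format '{{.Names}}' | grep -qx olm-utils-play-v2; then
    podman stop olm-utils-play-v2
    result="stopped stale olm-utils-play-v2 container"
  else
    result="no running olm-utils-play-v2 container"
  fi
else
  result="podman not available on this host"
fi
echo "$result"
```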

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
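Because the merge patch above is a long single-quoted JSON string, a stray quote or comma silently breaks it. One option is to stage the payload in a variable and validate it before running oc patch. A sketch, using an illustrative subset of the full payload (the oc patch line is commented out; run it against the cluster with the complete payload):

```shell
# Illustrative subset of the CCS merge patch; use the full payload in practice.
ccs_patch='{"spec": {"catalog_api_properties_enable_activity_tracker_publishing": "false", "couchdb_search_resources": {"limits": {"cpu": "2", "memory": "3Gi"}}}}'
# Sanity-check the payload as JSON before touching the cluster.
printf '%s' "$ccs_patch" | python3 -m json.tool > /dev/null && echo "patch payload is valid JSON"
# oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch "$ccs_patch"
```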
                                                                                                                                    
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                                    
  4. Apply the router fix:
    1. Create a file named zenextension_wkc-routes-change.yaml with the following content:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace $WKC_NAMESPACE with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "12Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) and make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the output of oc get wkc wkc-cr -o yaml, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check whether the Status has changed from Inprogress to Completed.
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
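The limit verification in the steps above can be scripted. The following sketch is an illustration only, not part of the official patch: it simulates the `oc describe pod` output with a here-doc, so on a live cluster replace the here-doc with the real `oc describe pod portal-job-manager-xxxxxxxxx-xxxxx` call.

```shell
# Hypothetical verification helper -- the here-doc simulates `oc describe pod`
# output; on a real cluster, replace it with:
#   describe_output=$(oc describe pod portal-job-manager-xxxxxxxxx-xxxxx)
describe_output=$(cat <<'EOF'
    Limits:
      cpu:                2
      ephemeral-storage:  12Gi
      memory:             8Gi
    Requests:
      cpu:                30m
EOF
)
ok=1
echo "$describe_output" | grep -Eq 'cpu: +2$'                 || { echo "cpu limit wrong"; ok=0; }
echo "$describe_output" | grep -Eq 'ephemeral-storage: +12Gi' || { echo "ephemeral-storage limit wrong"; ok=0; }
echo "$describe_output" | grep -Eq 'memory: +8Gi'             || { echo "memory limit wrong"; ok=0; }
[ "$ok" -eq 1 ] && echo "limits OK"
```

The same three grep checks work unchanged against real `oc describe pod` output, because the expected values come from the limits listed above.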
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.5 or 4.6 patches if you are still preparing and testing the migration.  
Or continue with the steps outlined in Applying Version 4.7 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters had already been customized on the cluster. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters had already been customized on the cluster. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 zenextension must be removed before the next update.
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.
    Note: Double-check whether any of the following parameters had already been customized on the cluster. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters had already been customized on the cluster. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 zenextension must be removed before the next update.
  5. Scale the portal-job-manager deployment back up:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
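For medium-scale clusters, the revert steps above can be wrapped in a small ordered script. This is an illustrative sketch, not an official IBM script: the CCS removal list is abbreviated here (use the full removal list from step 2), the manual wkc-routes-5588 removal from step 4 is not automated, and DRY_RUN defaults to 1 so the commands are only printed. Set DRY_RUN=0 on a real cluster with WKC_NAMESPACE exported.

```shell
#!/bin/sh
# Illustrative wrapper for the medium-scale revert sequence (not an official
# script). DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN="${DRY_RUN:-1}"
WKC_NAMESPACE="${WKC_NAMESPACE:-wkc}"   # hypothetical default namespace
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1. Disable RSI for PJM
run ./cpd-cli manage create-rsi-patch --cpd_instance_ns="${WKC_NAMESPACE}" --patch_name=pjm-scaling --state=inactive
# 2. Remove CCS changes (abbreviated -- use the full path list from step 2)
run oc patch -n "${WKC_NAMESPACE}" ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
# 3. Remove WKC resource limit changes
run oc patch -n "${WKC_NAMESPACE}" wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
# 5. Scale the portal-job-manager deployment back up
run oc scale -n "${WKC_NAMESPACE}" deployment portal-job-manager --replicas=3
# 6. Disable CCS maintenance mode
run oc patch -n "${WKC_NAMESPACE}" ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
```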
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed by following the steps below. Note: The migration toolkit does not need to be reverted. ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      tag_metadata: b71-migration-b54
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
      tag_metadata: b71-migration-b54
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      tag_metadata: b71-migration-b54
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --namespace ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove", "path": "/spec/image_digests/catalog_api_image"},{ "op": "remove","path": "/spec/image_digests/catalog_api_aux_image"},{ "op": "remove","path": "/spec/image_digests/portal_job_manager_image"}]'
  2. Run the following command to remove image updates from the WKC custom resource:

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --namespace ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove","path": "/spec/image_digests/wkc_data_rules_image"},{ "op": "remove","path": "/spec/image_digests/wdp_profiling_image"},{ "op": "remove","path": "/spec/image_digests/wkc_mde_service_manager_image"}]'
  3. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
  4. Wait for the WKC operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the wkc-data-rules, wdp-profiling, and wkc-mde-service-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.7.3/4.7.4 cluster, run the following two steps to re-sync the processes and ensure that the imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) that the migrated assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete once the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
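To check both re-sync job pods at once, a small filter over the `oc get pod` output can help. This sketch simulates the command output with a here-doc (the pod names and statuses are hypothetical); on a live cluster, replace the here-doc with the real `oc get pod -n $WKC_NAMESPACE | grep wkc-search` call.

```shell
# Simulated `oc get pod -n $WKC_NAMESPACE | grep wkc-search` output;
# the pod names and statuses below are hypothetical examples.
pods=$(cat <<'EOF'
wkc-search-reindexing-job-x7k2p   0/1   Completed   0   42m
wkc-search-lineage-job-m9q4z      1/1   Running     0   42m
EOF
)
# Count how many re-sync job pods have finished; both must reach
# Completed before the re-sync can be considered done.
completed=$(echo "$pods" | awk '$3 == "Completed" { n++ } END { print n+0 }')
total=$(echo "$pods" | grep -c .)
echo "$completed of $total re-sync job pods completed"
```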
 
 
Cloud Pak for Data 4.6.x and 4.5.x patches (upgrades to Cloud Pak for Data 4.7.3/4.7.4 version 5)
 
 
Patch name: Legacy migration toolkit and IIS patches
Released on: April 2024
Service assembly: wkc
Applies to service version: Watson Knowledge Catalog 4.5.x, 4.6.x
Applies to platform version: Cloud Pak for Data 4.5.x, 4.6.x
Description: This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.5.x or 4.6.x to 4.7.3/4.7.4.
Install instructions
Download the patch here.  

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the local private registry path that the hotfix images should be copied to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
        - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Using the appropriate auth.json file, use the skopeo command to copy the patch images from the IBM production registry to the local private (OpenShift cluster) registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2 \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:73cd4ea8978517734417213f8ce8e28ff57c6109ba3f48225392f05a5a21c70b \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:73cd4ea8978517734417213f8ce8e28ff57c6109ba3f48225392f05a5a21c70b
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:91d4456b986f8fded18a4fa91c7f1b2727fed9b37028e862e577e11c3f163f05 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:91d4456b986f8fded18a4fa91c7f1b2727fed9b37028e862e577e11c3f163f05
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:d59caf7a2f67ff0c21d966e1503c7d1d3b2fb93aedde091dc4c7c30b67395d14 \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:d59caf7a2f67ff0c21d966e1503c7d1d3b2fb93aedde091dc4c7c30b67395d14
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:a37a79c72697123d84ae5cdc0458d0a23578fa0ec6f7f95d081d61597c6baa4c \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:a37a79c72697123d84ae5cdc0458d0a23578fa0ec6f7f95d081d61597c6baa4c
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49 \
        docker://<local private registry>/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4 \
        docker://<local private registry>/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8 \
        docker://<local private registry>/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8
To complete the installation, follow the steps in the next section.
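The repeated skopeo invocations in step 7 can also be driven by a loop over the image digests. This is an illustrative sketch, not part of the official patch: DEST_REG and AUTHFILE are placeholders you must set for your environment, and DRY_RUN defaults to 1 so the commands are only printed (set DRY_RUN=0 to actually copy). The digests are the same ones listed in step 7.

```shell
#!/bin/sh
# Illustrative loop over the patch image digests from step 7.
# DRY_RUN=1 (default) prints the commands; DRY_RUN=0 executes them.
DRY_RUN="${DRY_RUN:-1}"
AUTHFILE="${AUTHFILE:-${HOME}/.airgap/auth.json}"
DEST_REG="${DEST_REG:-registry.example.com:5000}"   # placeholder registry

images="
is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
portal-job-manager@sha256:73cd4ea8978517734417213f8ce8e28ff57c6109ba3f48225392f05a5a21c70b
catalog-api-aux_master@sha256:91d4456b986f8fded18a4fa91c7f1b2727fed9b37028e862e577e11c3f163f05
catalog_master@sha256:d59caf7a2f67ff0c21d966e1503c7d1d3b2fb93aedde091dc4c7c30b67395d14
legacy-migration@sha256:a37a79c72697123d84ae5cdc0458d0a23578fa0ec6f7f95d081d61597c6baa4c
wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49
wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4
wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8
"

count=0
for img in $images; do
  cmd="skopeo copy --all --authfile $AUTHFILE --dest-tls-verify=false --src-tls-verify=false docker://cp.icr.io/cp/cpd/$img docker://$DEST_REG/cp/cpd/$img"
  if [ "$DRY_RUN" = "1" ]; then echo "+ $cmd"; else $cmd; fi
  count=$((count + 1))
done
echo "$count image(s) processed"
```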

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.
 
To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.5.x or 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2","tag_metadata":"b71-migration-b54"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551","tag_metadata":"b71-migration-b54"},"iis_services_image":{"name":"is-services-image@sha256","tag":"a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59","tag_metadata":"b71-migration-b54"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
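 
The reconciliation wait in steps 4 and 5 can be scripted as a small polling loop. This is a sketch only: the jsonpath to the reconciliation status field is an assumption and can differ between releases, so confirm the actual field name with `oc get iis iis-cr -o yaml` on your cluster before relying on it.

```shell
# Hedged sketch: poll a custom resource until reconciliation reports "Completed".
# The jsonpath for the status field is an ASSUMPTION; verify it on your cluster
# with `oc get iis iis-cr -o yaml` before relying on this.
wait_for_reconcile() {
  cr_kind=$1; cr_name=$2; ns=$3; jsonpath=$4; timeout_s=${5:-1800}
  elapsed=0
  while [ "$elapsed" -lt "$timeout_s" ]; do
    state=$(oc get "$cr_kind" "$cr_name" -n "$ns" -o jsonpath="$jsonpath" 2>/dev/null)
    if [ "$state" = "Completed" ]; then
      echo "reconciled"
      return 0
    fi
    sleep 30
    elapsed=$((elapsed + 30))
  done
  echo "timed out waiting for $cr_kind/$cr_name" >&2
  return 1
}

# Example call (status field name assumed):
# wait_for_reconcile iis iis-cr "${PROJECT_CPD_INST_OPERANDS}" '{.status.iisStatus}'
```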
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.5.x or 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.5.x or 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, stop at this point. Only proceed to the section Upgrade the cluster to 4.7.3/4.7.4 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.7.3/4.7.4
 
Note: For the IIS custom resource you should not revert the IIS images prior to performing the upgrade.
 
After upgrading to 4.7.3/4.7.4, proceed with the following commands.
  1. If you are doing an air-gapped upgrade, ensure that the legacy_migration image is downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"portal_job_manager_image":"sha256:73cd4ea8978517734417213f8ce8e28ff57c6109ba3f48225392f05a5a21c70b","catalog_api_aux_image":"sha256:91d4456b986f8fded18a4fa91c7f1b2727fed9b37028e862e577e11c3f163f05","catalog_api_image":"sha256:d59caf7a2f67ff0c21d966e1503c7d1d3b2fb93aedde091dc4c7c30b67395d14"}}}'
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:a37a79c72697123d84ae5cdc0458d0a23578fa0ec6f7f95d081d61597c6baa4c","wdp_profiling_image":"sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49","wkc_mde_service_manager_image":"sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4","wkc_data_rules_image":"sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8"}}}'
  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the wkc operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration, wdp-profiling, wkc-mde-service-manager, and wkc-data-rules pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
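
The two commands in step 7 can be combined into one pass. A minimal sketch, assuming pod names start with the `ngp-projects-api` prefix as shown above (the helper name is illustrative):

```shell
# Hedged sketch: restart every ngp-projects-api pod in one pass instead of
# deleting each by hand. Assumes pod names begin with "ngp-projects-api".
restart_ngp_pods() {
  ns=$1
  for pod in $(oc get pods -n "$ns" -o custom-columns=POD:.metadata.name --no-headers | grep '^ngp-projects-api'); do
    echo "restarting $pod"
    oc delete pod "$pod" -n "$ns"
  done
}

# Example call:
# restart_ngp_pods "${PROJECT_CPD_INST_OPERANDS}"
```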
Applying a hotfix

Applying the ZEN hotfix  

Note: Applying this hotfix is only meant for users upgrading to Version 4.7.3/4.7.4.
 
The following section details how to apply the RSI patch needed later on in the migration toolkit and patch process.
 
Applying the ZEN hotfix with the online IBM registry

For clusters configured to use the online IBM production registry, follow the steps below:
 
  1. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  2. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  3. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
      --namespace ${PROJECT_CPD_INST_OPERATORS} \
      --type='json' \
      --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
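
After patching, the CSV can be checked to confirm it now points at the hotfix image. This is a sketch (the helper name is illustrative); the jsonpath mirrors the same `deployments/0` / `containers/0` path used in the patch above:

```shell
# Hedged sketch: read back the operator image from the CSV and compare it
# against the expected hotfix image. The jsonpath mirrors the patch path.
check_zen_hotfix() {
  ns=$1; expected=$2
  applied=$(oc get csv ibm-zen-operator.v5.0.2 --namespace "$ns" \
    -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].image}')
  if [ "$applied" = "$expected" ]; then
    echo "hotfix image applied"
  else
    echo "unexpected image: $applied" >&2
    return 1
  fi
}

# Example call:
# check_zen_hotfix "${PROJECT_CPD_INST_OPERATORS}" "$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE"
```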
 
Applying the ZEN hotfix with a private container registry
 
For air-gapped clusters that need to download the amd64 image to the local private registry, follow the steps below:
 
  1. Export the following variables after setting the correct value that contains credentials to access icr.io and your local private registry. For example:

    export IBM_ENTITLEMENT_KEY=<IBM Entitlement API Key>
    export PRIVATE_REGISTRY_LOCATION=<Local private registry hostname>
    export PRIVATE_REGISTRY_PUSH_USER=<Private registry login username>
    export PRIVATE_REGISTRY_PUSH_PASSWORD=<Private registry login password>
  2. Using the environment variables exported above, copy the patch image from the IBM production registry to the OpenShift cluster registry:

    skopeo login cp.icr.io -u cp -p ${IBM_ENTITLEMENT_KEY}
    skopeo login ${PRIVATE_REGISTRY_LOCATION} -u ${PRIVATE_REGISTRY_PUSH_USER} -p ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    skopeo copy --all --dest-tls-verify=false --src-tls-verify=false \
      docker://icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50 \
      docker://${PRIVATE_REGISTRY_LOCATION}/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50
  3. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  4. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  5. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
      --namespace ${PROJECT_CPD_INST_OPERATORS} \
      --type='json' \
      --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
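 
After the mirror step, it is worth confirming that the hotfix image is actually resolvable in the private registry before patching the CSV. A minimal sketch using `skopeo inspect` (the helper name is illustrative; adjust `--tls-verify` to match your registry's TLS setup):

```shell
# Hedged sketch: skopeo inspect exits non-zero when the manifest is missing,
# so a simple probe confirms the mirrored digest is present.
verify_mirrored_image() {
  registry=$1; digest=$2
  if skopeo inspect --tls-verify=false \
      "docker://${registry}/ibm-zen-operator@${digest}" >/dev/null 2>&1; then
    echo "image present"
  else
    echo "image missing from ${registry}" >&2
    return 1
  fi
}

# Example call:
# verify_mirrored_image "${PRIVATE_REGISTRY_LOCATION}" sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50
```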
 
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you will need to tune your cluster based on the scaleconfig size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
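
Later in the process, these backups also make it easy to see exactly which fields the migration patches changed. A minimal sketch (the helper name and the /tmp output path are illustrative):

```shell
# Hedged sketch: dump the live custom resource and diff it against the backup
# taken before the migration patches were applied.
diff_cr_backup() {
  kind=$1; name=$2; backup=$3
  oc get "$kind" "$name" -o yaml > "/tmp/${name}.current.yaml"
  diff "$backup" "/tmp/${name}.current.yaml"
}

# Example call:
# diff_cr_backup ccs ccs-cr ccs.bak.yaml
```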
For small-sized clusters
  1. Increase the PJM resource limit through the RSI patch from the previous section.
    1. Create a file named specpatch.json and save it in the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or in the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call may fail if an olm-utils-play-v2 container already exists from before the call, because that container does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Apply a router issue fix 
    1. Apply the router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }

      Replace $WKC_NAMESPACE with your cluster's namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "8Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) and make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check that the Status has changed from Inprogress to Completed.
 
For medium-sized clusters
  1. Increase the PJM resource limit through an RSI patch, as described in the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or under the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace set through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
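      If you create the file from the shell, you can sanity-check it in the same pass. This local check is an optional convenience, not part of the official procedure; it assumes python3 is on your PATH and uses the default workspace path from step 1:

```shell
# Create the patch file under the default cpd-cli workspace path
mkdir -p cpd-cli-workspace/olm-utils-workspace/work/rsi
cat > cpd-cli-workspace/olm-utils-workspace/work/rsi/specpatch.json <<'EOF'
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
EOF
# Fail fast if the file is not well-formed JSON (optional sanity check)
python3 -m json.tool cpd-cli-workspace/olm-utils-workspace/work/rsi/specpatch.json > /dev/null \
  && echo "specpatch.json is valid JSON"
```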
                                                                                                                                                      
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}

    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json

    4. The above call can fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
                                                                                                                                    
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
                                                                                                                                    
  4. Apply a router issue fix.
    1. Create a file named zenextension_wkc-routes-change.yaml with the following content:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }

      Replace $WKC_NAMESPACE with your cluster's namespace.
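      If you prefer not to edit the file by hand, the placeholder can be substituted with sed. The snippet below demonstrates the substitution on a single stand-in line; the namespace value cpd-instance is purely an example:

```shell
# Example only: "cpd-instance" is a hypothetical namespace value.
WKC_NAMESPACE=cpd-instance
# Demonstrate the substitution on a stand-in line:
echo 'namespace: $WKC_NAMESPACE' | sed "s/\$WKC_NAMESPACE/${WKC_NAMESPACE}/"
# -> namespace: cpd-instance
# On the real file, the equivalent is:
#   sed -i "s/\$WKC_NAMESPACE/${WKC_NAMESPACE}/" zenextension_wkc-routes-change.yaml
```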
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "12Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
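      As an alternative to scanning the describe output, the limits can be read directly with a JSONPath query. This is a sketch using standard oc output options; the app=portal-job-manager label is the same selector used by the RSI patch above:

```shell
# Sketch: print only the resource limits of the portal-job-manager container
oc get pod -n ${WKC_NAMESPACE} -l app=portal-job-manager \
  -o jsonpath='{.items[0].spec.containers[0].resources.limits}'
```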
                                                                                                                                                                              
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check that the Status has changed from Inprogress to Completed.
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment by running:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
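      Optionally, you can wait for the scale-down to settle before continuing; oc rollout status blocks until the deployment converges on the new replica count:

```shell
# Optional: block until the deployment reports the new replica count
oc rollout status -n ${WKC_NAMESPACE} deployment/portal-job-manager
```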
Next steps
 
Once you've completed patching and downloaded the migration toolkit: if you are still preparing and testing the migration, continue with the steps outlined in Applying required Version 4.5 or 4.6 patches. If you are running the migration, continue with the steps outlined in Applying Version 4.7 patches.
 
Reverting changes
Reverting configuration changes for migration

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized for the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized for the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 ZenExtension must be removed before the next update.
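    The removal command is not spelled out above. Assuming the ZenExtension was created exactly as in the patching steps, a sketch of the removal is:

```shell
# Sketch: delete the router ZenExtension created during migration patching
oc delete zenextension wkc-routes-5588 -n ${WKC_NAMESPACE}
```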
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.
    Note: Double-check whether any of the following parameters were already customized for the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized for the cluster before migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 ZenExtension must be removed before the next update.
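    As in the small-cluster rollback, the removal command is not spelled out above. Assuming the ZenExtension was created exactly as in the patching steps, a sketch of the removal is:

```shell
# Sketch: delete the router ZenExtension created during migration patching
oc delete zenextension wkc-routes-5588 -n ${WKC_NAMESPACE}
```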
  5. Scale the portal-job-manager deployment back up by running:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

If needed, the IIS image overrides can be removed by following the steps below. Note: The migration toolkit does not need to be reverted.
 
In the following steps, ${PROJECT_CPD_INST_OPERANDS} refers to the project (namespace) where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:4d4e09a644248d6e274432e32fe58a37641eae7fde9f0111ffbb468fc9b5d4f2
      tag_metadata: b71-migration-b54
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:983aab55b8b97803f8d24617714fb91cdc28104cac8fbd36d16d3dac9fd42551
      tag_metadata: b71-migration-b54
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:a6b552248e7eb7f5e75199ffd9b1b1084089d39b3397ee484fcf9cd61ae5ad59
      tag_metadata: b71-migration-b54
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove", "path": "/spec/image_digests/catalog_api_image"},{ "op": "remove","path": "/spec/image_digests/catalog_api_aux_image"},{ "op": "remove","path": "/spec/image_digests/portal_job_manager_image"}]'
  2. Run the following command to remove image updates from the WKC custom resource:

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove","path": "/spec/image_digests/wkc_data_rules_image"},{ "op": "remove","path": "/spec/image_digests/wdp_profiling_image"},{ "op": "remove","path": "/spec/image_digests/wkc_mde_service_manager_image"}]'
  3. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
  4. Wait for the WKC operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the wkc-data-rules, wdp-profiling, and wkc-mde-service-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.7.3/4.7.4 clusters, you need to run through the following two steps to re-sync the processes to ensure imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) into which the migrated assets were imported, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete once the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or the log by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
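As an optional helper, the pod status check from step 1.5 can be wrapped in a small filter that succeeds only once every wkc-search job pod shows Completed. This is a sketch against hypothetical sample output (pod names and columns are made up); on a live cluster you would pipe the output of oc get pod -n $WKC_NAMESPACE into it instead:

```shell
# Sketch: succeed only when no wkc-search pod line lacks "Completed".
all_completed() {
  ! grep '^wkc-search' | grep -v 'Completed' | grep -q .
}

# Hypothetical sample standing in for real `oc get pod` output:
sample='wkc-search-reindexing-job-abc   0/1   Completed   0   10m
wkc-search-lineage-job-def      1/1   Running     0   10m'

printf '%s\n' "$sample" | all_completed && echo "all done" || echo "still running"
```

For the sample above the lineage pod is still Running, so the check reports "still running"; once both pods show Completed it reports "all done".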
 
 
Cloud Pak for Data 4.6.x and 4.5.x patches (upgrades to Cloud Pak for Data 4.7.3/4.7.4 version 4)
 
 
Patch name: Legacy migration toolkit and IIS patches
Released on: February 2024
Service assembly: wkc
Applies to service version
Watson Knowledge Catalog 4.5.x
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.5.x
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.5.x or 4.6.x to 4.7.3/4.7.4
Install instructions
Download the patch here.  

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io": {"email": "unused", "auth": "<base64 encoded id:apikey>"},
        "<private registry hostname>": {"email": "unused", "auth": "<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.
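For illustration, the auth value in each entry is just the base64 encoding of id:apikey (for cp.icr.io the id is cp, matching the skopeo login command used later in this document). A minimal sketch with a made-up placeholder key, writing to /tmp rather than your real auth.json path:

```shell
# Sketch: build an auth.json entry from a placeholder key (not a real credential).
MY_API_KEY="myApiKey"                                  # placeholder only
AUTH_VALUE=$(printf '%s' "cp:${MY_API_KEY}" | base64)

cat > /tmp/auth.json <<EOF
{
  "auths": {
    "cp.icr.io": {"email": "unused", "auth": "${AUTH_VALUE}"}
  }
}
EOF

# confirm the generated file parses as JSON (assumes python3 is available)
python3 -m json.tool /tmp/auth.json > /dev/null && echo "auth.json is valid JSON"
```

Substitute your real entitlement key and add the private registry entry the same way, base64-encoding id:password for that registry.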

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the path for the local private registry to copy the hotfix images to, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> can be the pod name for any of the images which will be patched with this hotfix.  
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
      - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.

  7. Using the appropriate auth.json file, use skopeo to copy the patch images from the IBM production registry to the local private registry:

    
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43 \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:f74740effa4b90f9990470372fb9462482ec10d2000c4679a0e2ad17c382880c \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:f74740effa4b90f9990470372fb9462482ec10d2000c4679a0e2ad17c382880c
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:564b7bba26d3272924b698207c4101856957251fe84cf94845c41d8211ee1612 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:564b7bba26d3272924b698207c4101856957251fe84cf94845c41d8211ee1612
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:0e1a1520713012f668256802475cbed2020b42ba4946a4715b068d3c8b6f6479 \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:0e1a1520713012f668256802475cbed2020b42ba4946a4715b068d3c8b6f6479
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:007ef84d3f24c3eec1a01d34a41ad8193db89062d8d3785a4b2074183c983124 \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:007ef84d3f24c3eec1a01d34a41ad8193db89062d8d3785a4b2074183c983124
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49 \
        docker://<local private registry>/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4 \
        docker://<local private registry>/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8 \
        docker://<local private registry>/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8
To complete the installation, follow the steps in the next section.
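Rather than pasting ten near-identical commands, the copies can be driven by a small loop over the image@digest pairs. This is a dry-run sketch: it only prints the skopeo commands; remove the echo to execute them. PRIVATE_REGISTRY and AUTHFILE are placeholders you must set, and the list is abbreviated to two of the ten images above.

```shell
# Dry-run sketch: generate one skopeo copy command per patch image.
# PRIVATE_REGISTRY and AUTHFILE are placeholders for your environment.
PRIVATE_REGISTRY="registry.example.com:5000"
AUTHFILE="${HOME}/.airgap/auth.json"

# image@digest pairs from the list above (abbreviated here to two entries)
IMAGES="
is-engine-image@sha256:0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b
wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8
"

for ref in $IMAGES; do
  echo skopeo copy --all --authfile "$AUTHFILE" \
      --dest-tls-verify=false --src-tls-verify=false \
      "docker://cp.icr.io/cp/cpd/${ref}" \
      "docker://${PRIVATE_REGISTRY}/cp/cpd/${ref}"
done
```

Because the source and destination differ only in the registry host, a loop like this also makes it harder to mismatch an image name and digest between the two references.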

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.
 
To patch the 4.5.x or 4.6.x cluster, proceed with the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project name where WKC is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.5.x or 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b","tag_metadata":"b68-migration-b52"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43","tag_metadata":"b68-migration-b52"},"iis_services_image":{"name":"is-services-image@sha256","tag":"9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3","tag_metadata":"b68-migration-b52"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -n ${PROJECT_CPD_INST_OPERANDS} -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.5.x or 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.5.x or 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in the inspect mode or perform a test export, you should stop at this point. Only proceed to the section Upgrade to 4.7.3/4.7.4 after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.7.3/4.7.4
 
Note: Do not revert the IIS images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.7.3/4.7.4, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy_migration image has been downloaded to the local registry. See one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"catalog_api_image":"sha256:f74740effa4b90f9990470372fb9462482ec10d2000c4679a0e2ad17c382880c","catalog_api_aux_image":"sha256:564b7bba26d3272924b698207c4101856957251fe84cf94845c41d8211ee1612","portal_job_manager_image":"sha256:0e1a1520713012f668256802475cbed2020b42ba4946a4715b068d3c8b6f6479"}}}'
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:007ef84d3f24c3eec1a01d34a41ad8193db89062d8d3785a4b2074183c983124","wdp_profiling_image":"sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49","wkc_mde_service_manager_image":"sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4","wkc_data_rules_image":"sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8"}}}'
  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects,
    run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the wkc operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration, wdp-profiling, wkc-mde-service-manager, and wkc-data-rules pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
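The two steps above can also be combined into one pipeline that generates a delete command per matching pod. This is a sketch shown as a dry run against hypothetical sample pod names; on a live cluster you would feed it from the oc get pods command above instead of the printf:

```shell
# Sketch: emit one delete command per ngp-projects-api pod found on stdin.
restart_cmds() {
  grep '^ngp-projects-api' | while read -r pod; do
    echo "oc delete pod -n \${PROJECT_CPD_INST_OPERANDS} $pod"
  done
}

# Hypothetical pod listing standing in for the real `oc get pods` output:
printf 'catalog-api-7f9\nngp-projects-api-abc12\nportal-job-manager-x1\n' | restart_cmds
```

Piping the printed commands through sh (or replacing the echo with the command itself) would perform the actual restarts.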
Applying a hotfix

Applying the ZEN hotfix  

Note: Applying this hotfix is only meant for users upgrading to Version 4.7.3/4.7.4.
 
The following section details how to apply the RSI patch needed later on in the migration toolkit and patch process.
 
Applying the ZEN hotfix with the online IBM registry

For clusters configured to use the online IBM production registry, follow the steps below:
 
  1. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
                                                                                                                            
  2. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
                                                                                                                            
  3. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
        --namespace ${PROJECT_CPD_INST_OPERATORS} \
        --type='json' \
        --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
 
Applying the ZEN hotfix with a private container registry
 
For air-gapped clusters that need to copy the amd64 image to the local private registry, follow the steps below:
 
  1. Export the following variables after setting the correct value that contains credentials to access icr.io and your local private registry. For example:

    export IBM_ENTITLEMENT_KEY=<IBM Entitlement API Key>
    export PRIVATE_REGISTRY_LOCATION=<Local private registry hostname>
    export PRIVATE_REGISTRY_PUSH_USER=<Private Registry login username>
    export PRIVATE_REGISTRY_PUSH_PASSWORD=<Private Registry login password>
  2. Using the environment variables exported above, copy the patch image from the IBM production registry to the OpenShift cluster registry:

    skopeo login cp.icr.io -u cp -p ${IBM_ENTITLEMENT_KEY}
    skopeo login ${PRIVATE_REGISTRY_LOCATION} -u ${PRIVATE_REGISTRY_PUSH_USER} -p ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    skopeo copy --all --dest-tls-verify=false --src-tls-verify=false \
        docker://icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50 \
        docker://${PRIVATE_REGISTRY_LOCATION}/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50
  3. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  4. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  5. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
        --namespace ${PROJECT_CPD_INST_OPERATORS} \
        --type='json' \
        --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
 
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you will need to tune your cluster based on the scaleconfig size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or under $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the rsi directory first if it does not exist yet. Copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"8Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The call might fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
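    Because a malformed merge patch makes oc patch fail with an unhelpful error, it can help to validate the patch document locally before running the command. This is a minimal sketch of that check for the wkc_data_rules_resources patch body above; python3 is assumed to be available:

```shell
# The merge-patch body from the oc patch command above, validated locally before use.
WKC_PATCH='{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'

# json.tool pretty-prints the document, or exits nonzero if the JSON is malformed.
printf '%s' "$WKC_PATCH" | python3 -m json.tool
```

    The same check applies to the longer CCS CR patch in the previous step.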
  4. Apply a router issue fix 
    1. Apply the router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
              "extension_point_id": "zen_front_door",
              "extension_name": "wkc-routes-extn-5588",
              "details": {
                "location_conf": "wkc-routes-extn.conf"
              }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols TLSv1.2;
            proxy_ssl_server_name on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace $WKC_NAMESPACE with your cluster's namespace.
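      The replacement can also be done with sed before applying the file. This is a minimal sketch that substitutes the placeholder; the namespace value cpd-instance is an example only, and the heredoc stands in for the metadata section of the full zenextension_wkc-routes-change.yaml file from the step above:

```shell
# Assumption: WKC_NAMESPACE holds your Cloud Pak for Data instance namespace.
WKC_NAMESPACE=cpd-instance

# Stand-in for the metadata section of the full ZenExtension file written in the step above.
cat > zenextension_wkc-routes-change.yaml <<'EOF'
metadata:
  name: wkc-routes-5588
  namespace: $WKC_NAMESPACE
EOF

# Replace the placeholder in place, then show the resolved namespace line.
sed -i "s/\$WKC_NAMESPACE/${WKC_NAMESPACE}/" zenextension_wkc-routes-change.yaml
grep "namespace:" zenextension_wkc-routes-change.yaml
```

      The single-quoted heredoc delimiter ('EOF') keeps the placeholder literal so that sed, not the shell, performs the substitution.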
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "8Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
          -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the output of oc get wkc wkc-cr -o yaml, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check that the Status has changed from Inprogress to Completed.
 
For medium-sized clusters
  1. Increase the PJM resource limit through an RSI patch, as described in the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or under $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory set through the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Then copy the following content into the specpatch.json file:

      [{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},{"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},{"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The call might fail if an olm-utils-play-v2 container that does not have the /tmp/work/rsi/specpatch.json file mounted already exists. If it fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Apply a router issue fix 
    1. Apply a router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
              "extension_point_id": "zen_front_door",
              "extension_name": "wkc-routes-extn-5588",
              "details": {
                "location_conf": "wkc-routes-extn.conf"
              }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify on;
            proxy_ssl_trusted_certificate /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols TLSv1.2;
            proxy_ssl_server_name on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace <$WKC_NAMESPACE> with the namespace where WKC is installed.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "12Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
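      If you prefer a one-line check over reading the describe output, the limits can also be queried with jsonpath. This is a sketch; it assumes the pod carries the app=portal-job-manager label shown in the RSI patch selector above:

      ```shell
      # Sketch: print the patched container limits directly.
      # Assumes the app=portal-job-manager label from the RSI patch selector.
      command -v oc >/dev/null 2>&1 || { echo "oc CLI not found"; exit 0; }
      oc get pod -n "${WKC_NAMESPACE}" -l app=portal-job-manager \
        -o jsonpath='{.items[0].spec.containers[0].resources.limits}'
      ```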
    3. Confirm that the CCS CR patch is applied by checking the spec section of the oc get ccs ccs-cr -o yaml output:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
    4. Confirm that the WKC CR patch is applied by checking the spec section of the oc get wkc wkc-cr -o yaml output:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check to see if the Status has changed from Inprogress to Completed.
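      If you want to wait for the route change rather than re-checking manually, a small polling loop works. This is a sketch; the .status.zenExtensionStatus jsonpath is an assumption, so confirm the exact status field on your cluster with oc get zenextension wkc-routes-5588 -o yaml before relying on it:

      ```shell
      # Sketch: poll the zenextension until its status reads Completed.
      # The .status.zenExtensionStatus path is an assumption; verify it first.
      command -v oc >/dev/null 2>&1 || { echo "oc CLI not found"; exit 0; }
      while true; do
        status=$(oc get zenextension wkc-routes-5588 \
          -o jsonpath='{.status.zenExtensionStatus}')
        echo "wkc-routes-5588 status: ${status}"
        [ "${status}" = "Completed" ] && break
        sleep 10
      done
      ```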
  6. Scale the PJM replicas down:
    1. Put CCS into maintenance mode to prevent the CCS operator from reconciling:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
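      To confirm the scale-down took effect, you can check the deployment's ready replica count (a minimal sketch):

      ```shell
      # Sketch: expect "1" once the scale-down has settled.
      command -v oc >/dev/null 2>&1 || { echo "oc CLI not found"; exit 0; }
      oc get deployment portal-job-manager -n "${WKC_NAMESPACE}" \
        -o jsonpath='{.status.readyReplicas}'
      ```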
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.5 or 4.6 patches if you are still preparing and testing the migration, or with the steps outlined in Applying Version 4.7 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Check whether any of the following parameters were already customized on the cluster before the migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Check whether any of the following parameters were already customized on the cluster before the migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 zenextension must be removed before the next update.
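    One way to remove the extension is a plain delete (a sketch; confirm the resource name with oc get zenextension first):

    ```shell
    # Sketch: remove the route extension created during cluster tuning.
    command -v oc >/dev/null 2>&1 || { echo "oc CLI not found"; exit 0; }
    oc delete zenextension wkc-routes-5588 -n "${WKC_NAMESPACE}"
    ```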
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. This command takes effect after CCS is taken out of maintenance mode in step 6.
    Note: Check whether any of the following parameters were already customized on the cluster before the migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Check whether any of the following parameters were already customized on the cluster before the migration. If you still need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 zenextension must be removed before the next update.
  5. Scale the portal-job-manager deployment back up:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

Follow these steps to revert the IIS image overrides. Note: The migration toolkit does not need to be reverted.
 
In these steps, ${PROJECT_CPD_INST_OPERANDS} refers to the project (namespace) where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:0438b8ea0758899bed2b967525d21f6346b089cb32aeab013e72693e2552382b
      tag_metadata: b68-migration-b52
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:b313225799d479d3614ce34cb4ba37fe6205a37846a2144d69fe972b4da73e43
      tag_metadata: b68-migration-b52
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:9be11faef6f0c2f1b9cdc9cefd2809b088e3b5ec0c014f9f3dd446ff126bd0f3
      tag_metadata: b68-migration-b52
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove", "path": "/spec/image_digests/catalog_api_image"},{ "op": "remove","path": "/spec/image_digests/catalog_api_aux_image"},{ "op": "remove","path": "/spec/image_digests/portal_job_manager_image"}]'
  2. Run the following command to remove image updates from the WKC custom resource:

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove","path": "/spec/image_digests/wkc_data_rules_image"},{ "op": "remove","path": "/spec/image_digests/wdp_profiling_image"},{ "op": "remove","path": "/spec/image_digests/wkc_mde_service_manager_image"}]'
  3. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
  4. Wait for the WKC operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the wkc-data-rules, wdp-profiling, and wkc-mde-service-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.7.3/4.7.4 clusters, you need to run through the following two steps to re-sync the processes to ensure imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take time to complete based on how many assets need to be re-synced during migration.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) that the migration assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is done once the related job pods reach the Completed state.
    5. You can check the re-sync progress from the related pod status or logs by running:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search
      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
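Instead of polling the pods from step 1 manually, oc wait can block until both re-sync jobs complete (a sketch; adjust the timeout to your asset volume):

```shell
# Sketch: block until both re-sync jobs report the complete condition.
command -v oc >/dev/null 2>&1 || { echo "oc CLI not found"; exit 0; }
oc wait --for=condition=complete --timeout=6h \
  job/wkc-search-reindexing-job job/wkc-search-lineage-job \
  -n "${WKC_NAMESPACE}"
```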
 
 
Cloud Pak for Data 4.6.x and 4.5.x patches (upgrades to Cloud Pak for Data 4.7.3/4.7.4 version 3)
 
 
Patch name: Legacy migration toolkit and IIS patches
Released on: January 2024
Service assembly: wkc
Applies to service version
Watson Knowledge Catalog 4.5.x
Watson Knowledge Catalog 4.6.x
Applies to platform version
Cloud Pak for Data 4.5.x
Cloud Pak for Data 4.6.x
Description
This patch provides the migration toolkit and the patches needed for migrating your legacy features from 4.5.x or 4.6.x to 4.7.3/4.7.4
Install instructions
Download the patch here.  

Download the legacy migration and IIS patch images from the IBM entitled registry as follows.
 
Downloading the legacy migration and IIS patch images in an air-gapped environment
 
The following steps are meant for applying the patches in an air-gapped environment. To install the patch using the online IBM entitled registry, see the section Applying the legacy migration and IIS patch images using the online IBM registry.
  1. Log in to the OpenShift console as the cluster admin.
  2. Prepare the authentication credentials to access the IBM production repository. Use the same auth.json file used for CASE download and image mirroring. An example directory path:

    ${HOME}/.airgap/auth.json
  3. You can also create an auth.json file that contains credentials to access icr.io and your local private registry. For example:

    {
      "auths": {
        "cp.icr.io":{"email":"unused","auth":"<base64 encoded id:apikey>"},
        "<private registry hostname>":{"email":"unused","auth":"<base64 encoded id:password>"}
      }
    }

    For more information about the auth.json file, see containers-auth.json - syntax for the registry authentication file.
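    The auth values above are the base64 encoding of id:apikey (or id:password). A quick way to generate one; this is a sketch in which MY_API_KEY is a placeholder for your real entitlement key, and cp is the user ID commonly used with the IBM entitled registry:

    ```shell
    # Sketch: build the base64 "id:apikey" value for auth.json.
    # Replace MY_API_KEY with your real IBM entitlement key.
    AUTH_VALUE=$(printf '%s' "cp:MY_API_KEY" | base64 | tr -d '\n')
    echo "${AUTH_VALUE}"   # → Y3A6TVlfQVBJX0tFWQ==
    ```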

  4. Install skopeo by running:

    yum install skopeo
  5. To confirm the local private registry path to which the hotfix images must be copied, run the following command:

    oc describe pod <hotfix image pod> | grep -i "image:"


    Where <hotfix image pod> is the pod name for any of the images that will be patched with this hotfix.
    For example:

    oc describe pod iis-services-7855f7fd8f-lsvsj | grep Image:
    Image: cp.icr.io/cp/cpd/is-services@sha256:03c88c69b986f24d39e4556731c0d171169d2bd91b0fb22f6367fd51c9020e64
  6. To get the local private registry source details, run the following commands:

    oc get imageContentSourcePolicy
    oc describe imageContentSourcePolicy [cloud-pak-for-data-mirror]


    The local private registry mirror repository and path details should be in the output of the describe command:

    - mirrors:
        - ${PRIVATE_REGISTRY_LOCATION}/cp/
      source: cp.icr.io/cp/cpd


    For more information about mirroring of images, see Configuring your cluster to pull Cloud Pak for Data images.
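    As a sanity check, the destination reference for a patch image can be derived from the mirror mapping above. This is a sketch with a hypothetical registry hostname (registry.example.com:5000); it matches the skopeo copy destinations used in the next step:

    ```shell
    # Sketch: map an IBM registry image reference to the local mirror.
    # registry.example.com:5000 is a hypothetical private registry hostname.
    PRIVATE_REGISTRY_LOCATION="registry.example.com:5000"
    SRC="cp.icr.io/cp/cpd/is-engine-image@sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e"
    # Strip the "cp.icr.io/" prefix and prepend the private registry location.
    DEST="${PRIVATE_REGISTRY_LOCATION}/${SRC#cp.icr.io/}"
    echo "${DEST}"
    ```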

  7. Use skopeo with the appropriate auth.json file to copy the patch images from the IBM production registry to the local private (OpenShift cluster) registry:

    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-engine-image@sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e \
        docker://<local private registry>/cp/cpd/is-engine-image@sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-en-compute-image@sha256:3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b \
        docker://<local private registry>/cp/cpd/is-en-compute-image@sha256:3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/is-services-image@sha256:714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60 \
        docker://<local private registry>/cp/cpd/is-services-image@sha256:714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog_master@sha256:e82c5823f552ab01ab4f9cce4b505235379bf94bb3a8349252cd6e98c874e05f \
        docker://<local private registry>/cp/cpd/catalog_master@sha256:e82c5823f552ab01ab4f9cce4b505235379bf94bb3a8349252cd6e98c874e05f
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/catalog-api-aux_master@sha256:4ca678cb931e360c0964c61e6052a497740cd1f61d846eb4f38f8ab3366181d8 \
        docker://<local private registry>/cp/cpd/catalog-api-aux_master@sha256:4ca678cb931e360c0964c61e6052a497740cd1f61d846eb4f38f8ab3366181d8
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/portal-job-manager@sha256:cb17e0d39c29b93803c45cb5ce897dc1ba087ce70caaca5d777ca6916ce83daf \
        docker://<local private registry>/cp/cpd/portal-job-manager@sha256:cb17e0d39c29b93803c45cb5ce897dc1ba087ce70caaca5d777ca6916ce83daf
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49 \
        docker://<local private registry>/cp/cpd/wdp-profiling@sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4 \
        docker://<local private registry>/cp/cpd/wkc-mde-service-manager@sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/legacy-migration@sha256:82a9e206703ccecbbb0ed40ebcd85c880c2f9e763b23c672802b21e40cd8d06b \
        docker://<local private registry>/cp/cpd/legacy-migration@sha256:82a9e206703ccecbbb0ed40ebcd85c880c2f9e763b23c672802b21e40cd8d06b
    skopeo copy --all --authfile "<folder path>/auth.json" \
        --dest-tls-verify=false --src-tls-verify=false \
        docker://cp.icr.io/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8 \
        docker://<local private registry>/cp/cpd/wkc-data-rules@sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8
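The ten near-identical commands above can also be driven by one loop. A minimal sketch, assuming bash; the mirror helper, the DRY_RUN flag, and the variable names are illustrative, not part of any IBM tooling. With DRY_RUN=1 the commands are only printed for review; only the first two digests are listed here, so add the remaining image@digest references from the commands above.

```shell
# Sketch: mirror the patch images with one loop instead of repeating the
# skopeo command per image. SRC_REG, DEST_REG, AUTHFILE, and DRY_RUN are
# illustrative; with DRY_RUN=1 the commands are only printed for review.
SRC_REG="cp.icr.io/cp/cpd"
DEST_REG="<local private registry>/cp/cpd"   # replace with your registry
AUTHFILE="<folder path>/auth.json"           # replace with your auth.json path
DRY_RUN=1

# Add the remaining image@digest references from the commands above.
IMAGES=(
  "is-engine-image@sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e"
  "is-en-compute-image@sha256:3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b"
)

mirror() {
  local image="$1"
  local cmd=(skopeo copy --all --authfile "$AUTHFILE"
    --dest-tls-verify=false --src-tls-verify=false
    "docker://${SRC_REG}/${image}"
    "docker://${DEST_REG}/${image}")
  if [ "$DRY_RUN" = "1" ]; then
    echo "${cmd[*]}"      # print only; review before running for real
  else
    "${cmd[@]}"
  fi
}

for img in "${IMAGES[@]}"; do
  mirror "$img"
done
```

Set DRY_RUN=0 once the printed commands look correct for your registry.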
                                                                                                                            
To complete the installation, follow the steps in the next section.

Applying the legacy migration and IIS patch images using the online IBM registry
 
The following steps are meant for applying the patches using the online IBM entitled registry or after downloading the images for an air-gapped environment.
 
To patch the 4.5.x or 4.6.x cluster, run the following commands. Note: ${PROJECT_CPD_INST_OPERANDS} refers to the project (namespace) where Watson Knowledge Catalog is installed.
  1. On the 4.5.x or 4.6.x cluster, run the following commands to apply the patch to the IIS custom resource (iis-cr):
    • To initially apply the patch to the IIS custom resource (iis-cr) on a 4.5.x or 4.6.x cluster:

      oc patch iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"iis_en_conductor_image":{"name":"is-engine-image@sha256","tag":"45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e","tag_metadata":"b61-migration-b48"},"iis_en_compute_image":{"name":"is-en-compute-image@sha256","tag":"3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b","tag_metadata":"b61-migration-b48"},"iis_services_image":{"name":"is-services-image@sha256","tag":"714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60","tag_metadata":"b61-migration-b48"}}}'
    • To apply IIS services-related patches on 4.7.x or later, run the following commands:

      oc set image -n ${PROJECT_CPD_INST_OPERANDS} deployment/iis-services iis-servicesdocker-container=cp.icr.io/cp/cpd/is-services-image@sha256:714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-en-conductor iis-en-conductor=cp.icr.io/cp/cpd/is-engine-image@sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e
      oc set image -n ${PROJECT_CPD_INST_OPERANDS} sts/is-engine-compute iis-en-compute=cp.icr.io/cp/cpd/is-en-compute-image@sha256:3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b
       
  2. Run this command to resolve a known issue with the HOSTNAME parameter:

    oc patch sts is-en-conductor -n ${PROJECT_CPD_INST_OPERANDS} -p '{"spec": {"template": {"spec": {"containers": [{"name": "iis-en-conductor","env": [{"name": "HOSTNAME","value": "is-en-conductor-0"}]}]}}}}'
  3. Make sure that the is-en-conductor-0 pod is up and running, then run the following command to remove any unnecessary directories:

    oc rsh -n ${PROJECT_CPD_INST_OPERANDS} is-en-conductor-0 rm -rf /mnt/dedicated_vol/Engine/is-en-conductor-0.en-cond
  4. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  5. After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the updated images.
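Rather than re-running the oc get command by hand, the wait in steps 4 and 5 can be scripted. A minimal sketch, assuming bash; wait_for is a hypothetical helper, and the jsonpath field name in the commented example is an assumption that you should adjust to the status field your IIS custom resource actually reports.

```shell
# Sketch: poll a status command until its output contains the expected value.
# wait_for is a hypothetical helper, not part of any IBM tooling.
wait_for() {
  local expected="$1"; shift
  local tries=60                      # roughly 10 minutes at 10s intervals
  while [ "$tries" -gt 0 ]; do
    if "$@" 2>/dev/null | grep -q "$expected"; then
      echo "ready"
      return 0
    fi
    tries=$((tries - 1))
    sleep 10
  done
  echo "timed out" >&2
  return 1
}

# Example (requires a cluster; the jsonpath field name is an assumption):
# wait_for Completed oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS} \
#     -o jsonpath='{.status.iisStatus}'
```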
 
Installing the legacy migration toolkit  
 
To install the migration toolkit on the 4.5.x or 4.6.x cluster, proceed with the following steps:
  1. Log in to the OpenShift console as the cluster admin.
  2. Extract the .zip file you downloaded to the 4.5.x or 4.6.x cluster. The content is extracted to a folder named legacy-migration-patch.
    1. Go to the legacy-migration-patch folder.
    2. To make the install script executable, run the following command:

      chmod +x install-legacy-migration-config-spec.sh
    3. To install the toolkit, run the following command:

      ./install-legacy-migration-config-spec.sh -u <ocp-username> -p <ocp-password> --url <ocp-url> -n ${PROJECT_CPD_INST_OPERANDS}
Note: If you intend to run the export in inspect mode or perform a test export, stop at this point. Proceed to the section "Upgrade the cluster to 4.7.3/4.7.4" only after you have confirmed that your system is ready for migration.
 
Upgrade the cluster to 4.7.3/4.7.4
 
Note: Do not revert the images in the IIS custom resource before performing the upgrade.
 
After upgrading to 4.7.3/4.7.4, proceed with the following commands.
  1. If you are performing an air-gapped upgrade, ensure that the legacy-migration image is downloaded to the local registry, as described in one of the previous sections.
  2. Run the following command to apply the patch to the CCS custom resource (ccs-cr):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"catalog_api_image":"sha256:e82c5823f552ab01ab4f9cce4b505235379bf94bb3a8349252cd6e98c874e05f","catalog_api_aux_image":"sha256:4ca678cb931e360c0964c61e6052a497740cd1f61d846eb4f38f8ab3366181d8","portal_job_manager_image":"sha256:cb17e0d39c29b93803c45cb5ce897dc1ba087ce70caaca5d777ca6916ce83daf"}}}'
  3. Run the following command to apply the patch to the WKC custom resource (wkc-cr):

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=merge -p '{"spec":{"image_digests":{"wkc_legacy_migration_image":"sha256:82a9e206703ccecbbb0ed40ebcd85c880c2f9e763b23c672802b21e40cd8d06b","wdp_profiling_image":"sha256:a70ec9625447a3a39b6f5b945097bcb8b57cc76ed464ea45fc3f9b70f1b05a49","wkc_mde_service_manager_image":"sha256:da08c7a58f75c5653161fd2e75fa8f3efccc308e1cef87d0aa339439dc13b8b4","wkc_data_rules_image":"sha256:263f096deb1e8c6510f19a8f940b1410a8ba0666d749c268ce33aba753915ae8"}}}'
                                                                                                                            
  4. If the sum of the number of Data Quality projects on the source environment and the number of projects owned by the user running the import on the target IBM Knowledge Catalog environment exceeds 200 projects, run the following command to increase the limit (where <project_limit> is the new intended project limit):

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": <project_limit>}}'

    For example, to increase the limit to 400 projects, run the following:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type merge --patch '{"spec":{"projects_created_per_user_limit": 400}}'
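As a quick sanity check for whether step 4 applies to you, the arithmetic can be scripted. A minimal sketch; needs_limit_increase is a hypothetical helper, and the two counts are values you gather yourself from the source and target environments.

```shell
# Sketch: decide whether the 200-project default limit must be raised.
# Arguments: number of Data Quality projects on the source environment,
# and number of projects owned by the importing user on the target.
needs_limit_increase() {
  local dq_projects="$1" target_projects="$2"
  if [ $((dq_projects + target_projects)) -gt 200 ]; then
    echo "yes"
  else
    echo "no"
  fi
}

needs_limit_increase 150 80   # sum 230 exceeds 200, so prints "yes"
```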
  5. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  6. Wait for the wkc operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the legacy-migration, wdp-profiling, wkc-mde-service-manager, and wkc-data-rules pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running.
  7. Restart the ngp-projects-api pods.  
    Get the names of the ngp-projects-api pods:

    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns=POD:.metadata.name | grep ngp-projects-api

    Then, for each of the ngp-projects-api pod names, run the following to restart the pod:

    oc delete pod ngp-projects-api-xxxxxx
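The two commands in step 7 can also be combined into one pipeline. A minimal sketch, assuming bash; filter_pods is a hypothetical helper that isolates the name-matching step so it can be checked against sample output before touching a cluster. The deployment recreates the deleted pods.

```shell
# Sketch: select the ngp-projects-api pod names from a pod listing.
# filter_pods is a hypothetical helper, not part of any IBM tooling.
filter_pods() {
  grep '^ngp-projects-api'
}

# Example (requires a cluster):
# oc get pods -n ${PROJECT_CPD_INST_OPERANDS} \
#     -o custom-columns=POD:.metadata.name --no-headers \
#   | filter_pods \
#   | xargs -r oc delete pod -n ${PROJECT_CPD_INST_OPERANDS}
```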
Applying a hotfix

Applying the ZEN hotfix  

Note: Applying this hotfix is only meant for users upgrading to Version 4.7.3/4.7.4.
 
The following section details how to apply the RSI patch that is needed later in the migration toolkit and patch process.
 
Applying the ZEN hotfix with the online IBM registry

For clusters configured to use the online IBM production registry, follow the steps below:
 
  1. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
                                                                                                                            
  2. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
                                                                                                                            
  3. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
        --namespace ${PROJECT_CPD_INST_OPERATORS} \
        --type='json' \
        --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
 
Applying the ZEN hotfix with a private container registry
 
For air-gapped clusters that need to download the amd64 image to a local private registry, follow the steps below:
 
  1. Export the following variables, setting the values to the credentials that are used to access cp.icr.io and your local private registry. For example:

    export IBM_ENTITLEMENT_KEY=<IBM Entitlement API Key>
    export PRIVATE_REGISTRY_LOCATION=<Local private registry hostname>
    export PRIVATE_REGISTRY_PUSH_USER=<Private registry login username>
    export PRIVATE_REGISTRY_PUSH_PASSWORD=<Private registry login password>
  2. Using the environment variables that you exported, copy the patch image from the IBM production registry to your local private registry:

    skopeo login cp.icr.io -u cp -p ${IBM_ENTITLEMENT_KEY}
    skopeo login ${PRIVATE_REGISTRY_LOCATION} -u ${PRIVATE_REGISTRY_PUSH_USER} -p ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    skopeo copy --all --dest-tls-verify=false --src-tls-verify=false \
        docker://icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50 \
        docker://${PRIVATE_REGISTRY_LOCATION}/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50
  3. Export the PROJECT_CPD_INST_OPERATORS environment variable:

    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
  4. Export the ZEN_OPERATOR_HOTFIX_IMAGE_VALUE environment variable for the amd64 version of the hotfix image:

    export ZEN_OPERATOR_HOTFIX_IMAGE_VALUE="icr.io/cpopen/ibm-zen-operator@sha256:064019efd3f1fc8e7fc01bc50dc8e620f1903f0c58729b4e23d9e686e42eee50"
  5. Patch the hotfix image:

    oc patch csv ibm-zen-operator.v5.0.2 \
        --namespace ${PROJECT_CPD_INST_OPERATORS} \
        --type='json' \
        --patch "[{'op': 'replace', 'path':'/spec/install/spec/deployments/0/spec/template/spec/containers/0/image', 'value': '$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE'}]"
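After patching, it is worth confirming that the CSV actually points at the hotfix digest. A minimal sketch; check_image is a hypothetical helper, and the commented oc get jsonpath mirrors the patch path used above.

```shell
# Sketch: compare the CSV's current operator image against the hotfix value.
# check_image is a hypothetical helper, not part of any IBM tooling.
check_image() {
  local actual="$1" expected="$2"
  if [ "$actual" = "$expected" ]; then
    echo "hotfix applied"
  else
    echo "mismatch: $actual" >&2
    return 1
  fi
}

# Example (requires a cluster):
# actual=$(oc get csv ibm-zen-operator.v5.0.2 -n ${PROJECT_CPD_INST_OPERATORS} \
#   -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].image}')
# check_image "$actual" "$ZEN_OPERATOR_HOTFIX_IMAGE_VALUE"
```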
 
Configuration changes
Apply configuration changes for Migration
 
Before continuing with the migration process, you need to tune your cluster based on its scale configuration (scaleConfig) size.
 
Run the following commands to save your current cluster CCS and WKC settings:
oc get ccs ccs-cr -o yaml > ccs.bak.yaml
oc get wkc wkc-cr -o yaml > wkc.bak.yaml
For small-sized clusters
  1. Increase the portal-job-manager (PJM) resource limits through the RSI patch mechanism, which requires the hotfix from the previous section.
    1. Create a file named specpatch.json in the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory, or in $CPD_CLI_MANAGE_WORKSPACE/work/rsi if you use a customized workspace directory through the $CPD_CLI_MANAGE_WORKSPACE environment variable. Create the directory first if it does not exist yet. Copy the following content into the specpatch.json file:

      [
        {"op": "replace", "path": "/spec/containers/0/resources/limits/cpu", "value": "2"},
        {"op": "replace", "path": "/spec/containers/0/resources/limits/memory", "value": "8Gi"},
        {"op": "replace", "path": "/spec/containers/0/resources/limits/ephemeral-storage", "value": "8Gi"}
      ]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
          --patch_type=rsi_pod_spec \
          --patch_name=pjm-scaling \
          --description="This is a spec patch for scaling PJM" \
          --include_labels=app:portal-job-manager \
          --state=active \
          --spec_format=json \
          --patch_spec=/tmp/work/rsi/specpatch.json
    4. The preceding call can fail if an olm-utils-play-v2 container already exists that does not have the /tmp/work/rsi/specpatch.json file mounted. If so, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the procedure from step 1 to log in again and restart the olm-utils-play-v2 container.

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
  4. Apply a router issue fix.
    1. Create a file named zenextension_wkc-routes-change.yaml with the following content:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
              "extension_point_id": "zen_front_door",
              "extension_name": "wkc-routes-extn-5588",
              "details": {
                "location_conf": "wkc-routes-extn.conf"
              }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace $WKC_NAMESPACE with your cluster's Cloud Pak for Data namespace.
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
       
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "8Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  8Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
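Comparing the described limits against the expected values by eye is error-prone across repeated checks. The check can be scripted; the following is an illustrative sketch (the parser is not part of cpd-cli), assuming you capture the `oc describe pod` text and feed it in:

```python
# Illustrative helper: parse the Limits/Requests section of `oc describe pod`
# output and compare against the values the RSI patch should have applied.
# Not part of cpd-cli; the sample text below mirrors the expected output.

def parse_resource_block(describe_text: str, section: str) -> dict:
    """Return {resource: value} for the given section ('Limits' or 'Requests')."""
    values = {}
    in_section = False
    for line in describe_text.splitlines():
        stripped = line.strip()
        if stripped.rstrip(":") in ("Limits", "Requests"):
            # Entering a new section header; track whether it is the one we want.
            in_section = stripped.rstrip(":") == section
            continue
        if in_section:
            if ":" not in stripped:
                break  # End of the indented resource list.
            key, _, val = stripped.partition(":")
            values[key.strip()] = val.strip()
    return values

sample = """\
Limits:
  cpu:                2
  ephemeral-storage:  8Gi
  memory:             8Gi
Requests:
  cpu:                30m
  ephemeral-storage:  10Mi
  memory:             128Mi
"""

limits = parse_resource_block(sample, "Limits")
assert limits == {"cpu": "2", "ephemeral-storage": "8Gi", "memory": "8Gi"}
```

In practice you would pipe `oc describe pod portal-job-manager-xxxxxxxxx-xxxxx` into a file and pass its contents instead of the sample string.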
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml, under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true -Dfeature.fetch_stale_data_from_couch_db=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
    4. Confirm that the WKC CR patch is applied by checking the output of oc get wkc wkc-cr -o yaml, under the spec section:

        wkc_data_rules_resources:
          limits:
            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588
      Then check to see whether the Status has changed from Inprogress to Completed.
 
For medium-sized clusters
  1. Increase PJM resource limit through RSI patch from the previous section.
    1. Create a file named specpatch.json and save it under the cpd-cli-workspace/olm-utils-workspace/work/rsi/ directory (or the $CPD_CLI_MANAGE_WORKSPACE/work/rsi directory if you use a customized workspace directory with the $CPD_CLI_MANAGE_WORKSPACE environment variable). Create the olm-utils-workspace/work/rsi/ directory if it does not exist yet. Copy the following content into the specpatch.json file:

      [
        {"op": "replace", "path": "/spec/containers/0/resources/limits/cpu", "value": "2"},
        {"op": "replace", "path": "/spec/containers/0/resources/limits/memory", "value": "8Gi"},
        {"op": "replace", "path": "/spec/containers/0/resources/limits/ephemeral-storage", "value": "12Gi"}
      ]
    2. Enable the RSI patch:

      ./cpd-cli manage login-to-ocp --server=${OCP_URL} -u kubeadmin -p ${PASSWORD}
      ./cpd-cli manage enable-rsi --cpd_instance_ns=${WKC_NAMESPACE}
      ./cpd-cli manage install-rsi --cpd_instance_ns=${WKC_NAMESPACE}
    3. Run the patch:

      ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE} \
        --patch_type=rsi_pod_spec \
        --patch_name=pjm-scaling \
        --description="This is a spec patch for scaling PJM" \
        --include_labels=app:portal-job-manager \
        --state=active \
        --spec_format=json \
        --patch_spec=/tmp/work/rsi/specpatch.json
    4. The above call might fail if the olm-utils-play-v2 container already exists from before the call, because that container does not have the /tmp/work/rsi/specpatch.json file mounted. If the call fails, stop the previous container by running:

      podman stop olm-utils-play-v2

      Then restart the process from step 1 to log in again and restart the olm-utils-play-v2 container.
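Before rerunning create-rsi-patch, it can help to confirm that specpatch.json is well-formed. The following is an illustrative sanity check (not part of cpd-cli); it embeds the patch content from step 1 as a string, whereas in practice you would json.load the real file:

```python
# Illustrative sanity check for specpatch.json before running create-rsi-patch.
# Confirms the content is valid JSON and that every entry looks like a
# JSON Patch (RFC 6902) "replace" on a container resource limit.
import json

spec = json.loads("""
[{"op":"replace","path":"/spec/containers/0/resources/limits/cpu","value":"2"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/memory","value":"8Gi"},
 {"op":"replace","path":"/spec/containers/0/resources/limits/ephemeral-storage","value":"12Gi"}]
""")

for entry in spec:
    assert entry["op"] == "replace"
    assert entry["path"].startswith("/spec/containers/0/resources/limits/")
    assert isinstance(entry["value"], str) and entry["value"]

print("specpatch.json entries:", len(spec))  # → specpatch.json entries: 3
```

With the real file, replace the embedded string with `spec = json.load(open("specpatch.json"))`.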

  2. Patch the CCS CR to temporarily turn off some features and increase the resource limit during migration:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"catalog_api_jvm_args_extras": "-Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true  -Dfeature.fetch_stale_data_from_couch_db=true", "catalog_api_properties_enable_activity_tracker_publishing": "false", "catalog_api_properties_enable_global_search_publishing": "false", "catalog_api_properties_enable_global_search_rabbitmq_publishing": "false", "catalog_api_properties_global_call_logs": "false", "couchdb_search_resources":{"requests":{"cpu": "250m", "memory": "256Mi"},"limits":{"cpu": "2", "memory": "3Gi"}},"dap_base_asset_files_resources":{"limits":{"cpu": "4", "memory": "12Gi"}}}}'
  3. Patch the WKC CR to increase the resource limit for wkc_data_rules_resources:

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type merge --patch '{"spec": {"wkc_data_rules_resources":{"limits":{"ephemeral-storage": "2Gi"}}}}'
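The merge patches in steps 2 and 3 are long shell one-liners that are easy to mis-quote. One alternative is to build the payload as a dict and serialize it to a file; a minimal sketch for the step 3 patch follows (the `--patch-file` flag mentioned in the comment exists in recent kubectl/oc releases, but verify with `oc patch --help` on your cluster):

```python
# Sketch: build the WKC CR merge patch from step 3 as a dict and serialize it,
# instead of hand-quoting JSON inside a shell one-liner. The resulting file can
# be passed to `oc patch ... --patch-file wkc-patch.json` on oc versions that
# support the flag.
import json

wkc_patch = {
    "spec": {
        "wkc_data_rules_resources": {
            "limits": {"ephemeral-storage": "2Gi"}
        }
    }
}

serialized = json.dumps(wkc_patch)
# Round-trip to confirm the payload is valid JSON before applying it.
assert json.loads(serialized)["spec"]["wkc_data_rules_resources"]["limits"]["ephemeral-storage"] == "2Gi"

with open("wkc-patch.json", "w") as f:
    f.write(serialized)
```

The same approach works for the larger CCS patch in step 2, where the quoting is considerably harder to get right by hand.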
  4. Apply a router issue fix 
    1. Apply a router fix by creating a file named zenextension_wkc-routes-change.yaml:

      apiVersion: zen.cpd.ibm.com/v1
      kind: ZenExtension
      metadata:
        labels:
          app: wkc-lite
          app.kubernetes.io/instance: 0075-wkc-lite
          app.kubernetes.io/managed-by: Tiller
          app.kubernetes.io/name: wkc-lite
          chart: wkc-lite
          helm.sh/chart: wkc-lite
          heritage: Tiller
          release: 0075-wkc-lite
        name: wkc-routes-5588
        namespace: $WKC_NAMESPACE
      spec:
        extensions: |
          [
            {
                "extension_point_id": "zen_front_door",
                "extension_name": "wkc-routes-extn-5588",
                "details": {
                  "location_conf": "wkc-routes-extn.conf"
                }
            }
          ]
        wkc-routes-extn.conf: |-
          set_by_lua $nsdomain 'return os.getenv("NS_DOMAIN")';
          location /metadata_enrichment/v3 {
            proxy_set_header Host $host;
            proxy_pass https://wkc-mde-service-manager-upstream;
            proxy_ssl_verify       on;
            proxy_ssl_trusted_certificate   /etc/internal-nginx-svc-tls/ca.crt;
            proxy_ssl_protocols    TLSv1.2;
            proxy_ssl_server_name  on;
            proxy_ssl_name wkc-mde-service-manager.$nsdomain;
          }
      Replace $WKC_NAMESPACE with your cluster's namespace.
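The substitution can be scripted with Python's string.Template, which handles $-prefixed placeholders the same way a shell would. A sketch on a shortened fragment of the manifest (note that safe_substitute is important here, because the embedded nginx config also contains $-variables such as $host and $nsdomain that must be left alone):

```python
# Sketch: substitute the namespace into the ZenExtension manifest with
# string.Template. Only a shortened fragment of the manifest is shown.
from string import Template

fragment = Template("""\
metadata:
  name: wkc-routes-5588
  namespace: $WKC_NAMESPACE
spec:
  conf: |-
    proxy_set_header Host $host;
""")

# safe_substitute leaves unknown placeholders such as the nginx variable
# $host untouched instead of raising KeyError, so only $WKC_NAMESPACE
# is replaced.
rendered = fragment.safe_substitute(WKC_NAMESPACE="wkc")
assert "namespace: wkc" in rendered
assert "$host" in rendered
```

In practice, read the whole zenextension_wkc-routes-change.yaml file, render it the same way, and write the result back before running oc apply.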
    2. Create the new route:

      oc apply -f ./zenextension_wkc-routes-change.yaml
  5. Confirm that the patches are in place:
    1. Run the following command to set the project:

      oc project ${WKC_NAMESPACE}
    2. Confirm that the RSI patch is installed by running:

      ./cpd-cli manage get-rsi-patch-info --cpd_instance_ns=${WKC_NAMESPACE} --all

      The following is an example of the output:

      [
          {
              "creationTimestamp": "2023-09-15T19:17:24Z",
              "name": "rsi-pjm-scaling",
              "namespace": "wkc",
              "patch_info": [
                  {
                      "description": "This",
                      "details": {
                          "patch_spec": [
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/cpu",
                                  "value": "2"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/memory",
                                  "value": "8Gi"
                              },
                              {
                                  "op": "replace",
                                  "path": "/spec/containers/0/resources/limits/ephemeral-storage",
                                  "value": "12Gi"
                              }
                          ],
                          "pod_selector": {
                              "selector": {
                                  "app": "portal-job-manager"
                              }
                          },
                          "state": "active",
                          "type": "json"
                      },
                      "display_name": "rsi-pjm-scaling",
                      "extension_name": "rsi-pjm-scaling",
                      "extension_point_id": "rsi_pod_spec",
                      "meta": {}
                  }
              ]
          }
      ]

      You can also double-check the portal-job-manager pod (not the deployment) to make sure that the new limits are set correctly:

      oc describe pod portal-job-manager-xxxxxxxxx-xxxxx

      Then, confirm that the following limits are set:

          Limits:
            cpu:                2
            ephemeral-storage:  12Gi
            memory:             8Gi
          Requests:
            cpu:                30m
            ephemeral-storage:  10Mi
            memory:             128Mi
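As an alternative to reading `oc describe` output by eye, the limits can be checked programmatically. The sketch below is an assumption-laden example: `check_limits` greps the JSON that `oc get pod <pod> -o jsonpath='{.spec.containers[0].resources}'` would print; the pod name in the comment is a hypothetical placeholder, and the demo call uses a literal resources object rather than live cluster output.

```shell
# check_limits: verify the expected CPU and memory limits appear in a
# container resources JSON string (as printed by a kubectl/oc jsonpath query).
check_limits() {
  if printf '%s' "$1" | grep -q '"cpu":"2"' &&
     printf '%s' "$1" | grep -q '"memory":"8Gi"'; then
    echo ok
  else
    echo mismatch
  fi
}

# On a live cluster (hypothetical pod name):
#   RES=$(oc get pod portal-job-manager-xxxxxxxxx-xxxxx -n "$WKC_NAMESPACE" \
#         -o jsonpath='{.spec.containers[0].resources}')
#   check_limits "$RES"

# Local demo with a literal resources object mirroring the limits above:
check_limits '{"limits":{"cpu":"2","ephemeral-storage":"12Gi","memory":"8Gi"},"requests":{"cpu":"30m","ephemeral-storage":"10Mi","memory":"128Mi"}}'
```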
    3. Confirm that the CCS CR patch is applied by checking the output of oc get ccs ccs-cr -o yaml under the spec section:

        catalog_api_jvm_args_extras: -Dfeature.disable_lineage_publishing=true -Dfeature.disable_rabbitmq_publishing=true
        catalog_api_properties_enable_activity_tracker_publishing: "false"
        catalog_api_properties_enable_global_search_publishing: "false"
        catalog_api_properties_enable_global_search_rabbitmq_publishing: "false"
        catalog_api_properties_global_call_logs: "false"
        couchdb_search_resources:
          limits:
            cpu: "2"
            memory: 3Gi
          requests:
            cpu: 250m
            memory: 256Mi
        dap_base_asset_files_resources:
          limits:
            cpu: "4"
            memory: 12Gi
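Rather than scanning the whole yaml dump, individual flat spec fields can be spot-checked. The helper below is a minimal sketch: on a live cluster you would feed it the output of `oc get ccs ccs-cr -n "$WKC_NAMESPACE" -o yaml`; the demo runs it on an inline fragment mirroring the patched CR.

```shell
# spec_field: print the value of a flat "key: value" field from yaml text.
spec_field() {  # usage: spec_field "<yaml-text>" <field-name>
  printf '%s\n' "$1" | sed -n "s/^ *$2: *//p"
}

# Local demo on a fragment mirroring the patched CR spec:
CR_YAML='spec:
  catalog_api_properties_enable_global_search_publishing: "false"
  catalog_api_properties_global_call_logs: "false"'
spec_field "$CR_YAML" catalog_api_properties_global_call_logs   # prints "false"
```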
    4. Confirm that the WKC CR patch is applied by checking the WKC-CR from the oc get wkc wkc-cr -o yaml results, under the spec section:

        wkc_data_rules_resources:
                                                                                                                                                          limits:
                                                                                                                                                            ephemeral-storage: 2Gi
    5. Confirm that the router change is applied by running:

      oc get zenextension wkc-routes-5588

      Then check that the Status has changed from Inprogress to Completed.
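If you prefer not to re-run `oc get` by hand, the status can be polled in a loop. This is a sketch only: `get_status` is a local stub standing in for the `oc` call, and the jsonpath shown in the comment is an assumption about the zenextension status field; check `-o yaml` on your cluster first.

```shell
# Poll a status function until it reports Completed. On a cluster, replace the
# stub body with something like:
#   oc get zenextension wkc-routes-5588 -o jsonpath='{.status.zenextensionStatus}'
# (the exact status field name is an assumption; verify it with -o yaml first).
get_status() { echo Completed; }   # stub standing in for the oc call

until [ "$(get_status)" = Completed ]; do
  echo "route extension still reconciling, retrying in 30s..."
  sleep 30
done
echo "route extension reconciled"
```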
  6. Scale the PJM replicas down 
    1. Put CCS into maintenance mode to prevent CCS reconcile:

      oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": true}}'
    2. Scale down the portal-job-manager deployment:

      oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=1
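After scaling down, the replica count can be read back to confirm the change took effect. A minimal sketch, with the `oc` query shown in a comment and a stand-in value used for the local demo:

```shell
# On a cluster:
#   REPLICAS=$(oc get deployment portal-job-manager -n "$WKC_NAMESPACE" \
#              -o jsonpath='{.spec.replicas}')
REPLICAS=1   # stand-in for the oc output above

if [ "$REPLICAS" -eq 1 ]; then
  echo "portal-job-manager scaled down to 1 replica"
else
  echo "unexpected replica count: $REPLICAS"
fi
```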
Next steps
 
Once you've completed patching and downloading the migration toolkit, continue with the steps outlined in Applying required Version 4.5 or 4.6 patches if you are still preparing and testing the migration.  
Or continue with the steps outlined in Applying Version 4.7 patches if you are running the migration.
 
Reverting changes
Reverting configuration changes for Migration  

Note: After the migration completes, or if the migration fails, you must revert these cluster tuning changes.

Reverting changes for small-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. Note: Double-check whether any of the following parameters were already customized for the cluster before the migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized for the cluster before the migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 zenextension must be removed before the next update.
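After running the `op: remove` patches above, it is worth verifying that the fields really disappeared from the CR spec. The helper below is a sketch: on a cluster you would capture the CR with `CR=$(oc get ccs ccs-cr -n "$WKC_NAMESPACE" -o yaml)`; the demo uses a hypothetical sample spec representing a successful revert.

```shell
# field_absent: report whether a top-level yaml key still appears in CR text.
field_absent() {  # usage: field_absent "<yaml-text>" <field-name>
  if printf '%s\n' "$1" | grep -q "^ *$2:"; then echo present; else echo absent; fi
}

# Local demo on a hypothetical spec after a successful revert:
CR='spec:
  license:
    accept: true'
field_absent "$CR" catalog_api_jvm_args_extras   # absent
field_absent "$CR" couchdb_search_resources      # absent
```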
Reverting changes for medium-scale clusters
After the migration, run the following steps to roll back the changes made during cluster tuning.
  1. Disable RSI for PJM:

    ./cpd-cli manage create-rsi-patch --cpd_instance_ns=${WKC_NAMESPACE}  --patch_name=pjm-scaling --state=inactive
  2. Remove the CCS changes for resource limits and feature settings. The effect of this command takes place after CCS is taken out of maintenance mode in step 6.
    Note: Double-check whether any of the following parameters were already customized for the cluster before the migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type json -p '[{ "op": "remove", "path": "/spec/catalog_api_jvm_args_extras" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_activity_tracker_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_enable_global_search_rabbitmq_publishing" },{ "op": "remove", "path": "/spec/catalog_api_properties_global_call_logs" },{ "op": "remove", "path": "/spec/couchdb_search_resources" }]'
  3. Remove the WKC resource limit changes. Note: Double-check whether any of the following parameters were already customized for the cluster before the migration. If you need specific settings afterwards, reset them to the desired values.

    oc patch -n ${WKC_NAMESPACE} wkc wkc-cr --type json -p '[{ "op": "remove", "path": "/spec/wkc_data_rules_resources" }]'
  4. Roll back the router change. The newly created wkc-routes-5588 zenextension must be removed before the next update.
  5. Scale the PJM replicas back up by scaling the portal-job-manager deployment:

    oc scale -n ${WKC_NAMESPACE} deployment portal-job-manager --replicas=3
  6. Disable CCS maintenance mode:

    oc patch -n ${WKC_NAMESPACE} ccs ccs-cr --type merge --patch '{"spec": {"ignoreForMaintenance": false}}'
     
 
Reverting the IIS image changes  

Follow these steps to revert the IIS image patches:
 
If needed, the IIS image overrides can be removed as follows. Note: The migration toolkit does not need to be reverted.
 
In the following commands, ${PROJECT_CPD_INST_OPERANDS} refers to the project where WKC is installed.
  1. Run the following command to edit the IIS custom resource:

    oc edit iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
  2. Remove the following lines within the IIS custom resource and save the change:

    iis_en_conductor_image:
      name: is-engine-image@sha256
      tag: sha256:45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e
      tag_metadata: b61-migration-b48
    iis_en_compute_image:
      name: is-en-compute-image@sha256
      tag: sha256:3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b
      tag_metadata: b61-migration-b48
    iis_services_image:
      name: is-services-image@sha256
      tag: sha256:714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60
      tag_metadata: b61-migration-b48
  3. Wait for the IIS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get iis iis-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the iis-services, is-en-conductor, and is-engine-compute pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with the original images.
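One way to confirm the pods came back on the original images is to compare their image digests against the patched digests removed above. This is a sketch: the jsonpath query in the comment lists pod images on a cluster, while the demo call uses a hypothetical all-zeros digest standing in for an original image reference.

```shell
# On a cluster, list pod names and container images:
#   oc get pods -n "$PROJECT_CPD_INST_OPERANDS" \
#     -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

# Digests of the patched IIS images (from the overrides removed above):
PATCHED_DIGESTS='45968624ade4260b89873ef5f17af403c0351ea58937ad25e146a59711e2ed4e
3310e7ae7802510122e2e09b79e1fb3cf843ef69d8ddebb97147d0b726ce088b
714b59324b04a15e005a10e245743ad984782930beea1b772021bcb1c1631e60'

check_image() {  # usage: check_image <image-reference>
  digest=${1##*sha256:}
  if printf '%s\n' "$PATCHED_DIGESTS" | grep -q "^$digest$"; then
    echo "patched image still in use"
  else
    echo "original image restored"
  fi
}

# Hypothetical sample image reference after the revert:
check_image 'cp.icr.io/cp/cpd/is-services-image@sha256:0000000000000000000000000000000000000000000000000000000000000000'
```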
 
Reverting the migration toolkit support image changes
 
Follow these steps to revert the migration toolkit support image patches:
 
  1. Run the following command to remove image updates from the Common Core Services (CCS) custom resource:

    oc patch ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove", "path": "/spec/image_digests/catalog_api_image"},{ "op": "remove","path": "/spec/image_digests/catalog_api_aux_image"},{ "op": "remove","path": "/spec/image_digests/portal_job_manager_image"}]'
  2. Run the following command to remove image updates from the WKC custom resource:

    oc patch wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS} --type=json --patch '[{ "op": "remove","path": "/spec/image_digests/wkc_data_rules_image"},{ "op": "remove","path": "/spec/image_digests/wdp_profiling_image"},{ "op": "remove","path": "/spec/image_digests/wkc_mde_service_manager_image"}]'
  3. Wait for the CCS operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get ccs ccs-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the catalog-api and portal-job-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
  4. Wait for the WKC operator reconciliation to complete. Run the following command to monitor the reconciliation status:

    oc get wkc wkc-cr -n ${PROJECT_CPD_INST_OPERANDS}
    After a period of time, the wkc-data-rules, wdp-profiling, and wkc-mde-service-manager pods in ${PROJECT_CPD_INST_OPERANDS} should be up and running with original images.
Re-sync processes
 
After migrating all legacy assets into the upgraded 4.7.3/4.7.4 clusters, complete the following two steps to re-sync processes so that imported assets are synced within Watson Knowledge Catalog (WKC).
Some re-sync jobs might take a long time to complete, depending on how many assets were migrated.
 
  1. To re-sync Global Search (GS) and Graph Lineage, download and run the following shell script: cpd_gs_graph_resync.sh
    1. The script asks for the namespace where WKC is installed.
    2. The script asks for the catalogs to sync. Specify the catalog(s) that the migration assets were imported into, or press Enter to re-sync all catalogs.
    3. The script creates two jobs to sync GS and Graph Lineage separately:
      • wkc-search-reindexing-job: re-syncs Global Search with the catalog
      • wkc-search-lineage-job: re-syncs Graph Lineage with the catalog
    4. Let the two jobs run. The re-sync is complete when the related job pods reach the Completed state.
    5. You can check the re-sync progress by checking the related pod status or logs:

      oc get pod -n $WKC_NAMESPACE | grep wkc-search

      or

      oc logs -n $WKC_NAMESPACE wkc-search-reindexing-job-xxxxx --tail 10
  2. To re-sync Activities Lineage, follow the instructions outlined in this support page: How to re-synchronise and patch events in Activity Lineage post running migration
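Rather than polling the pods by hand, `oc wait` can block until the re-sync jobs created in step 1 finish. The sketch below shows the `oc wait` invocation in a comment (the 24h timeout is an arbitrary assumption) and demonstrates the equivalent check on a line of `oc get jobs` output, where the COMPLETIONS column reads 1/1 once a job is done:

```shell
# On a cluster:
#   oc wait --for=condition=complete -n "$WKC_NAMESPACE" --timeout=24h \
#     job/wkc-search-reindexing-job job/wkc-search-lineage-job

# job_done: classify one line of `oc get jobs` output by its COMPLETIONS column.
job_done() { printf '%s\n' "$1" | awk '$2 == "1/1" {print "done"; exit} {print "running"}'; }

job_done 'wkc-search-reindexing-job   1/1   42m   45m'   # done
job_done 'wkc-search-lineage-job      0/1   45m   45m'   # running
```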
 

 
 
 
 
Known issues
 
Cloud Pak for Data 4.7.3/4.7.4
 
Empty business intelligence (BI) assets may still be created and cause import issues
If there are no BI assets in the legacy InfoSphere Information Server, empty ZIP files may still be created and exported. This can cause import issues during the migration.
To avoid this, remove the exported ZIP files manually if the export summary indicates that no BI assets were exported.
 
Workaround:
  1. Find the export summary file at the following location in the migration pod:

    /data/cpd/data/exports/wkc/<export-instance-name>/<timestamp>/legacy-migration/bi/export-summary.json
    Read the summary file to confirm that no BI assets were exported.
  2. Find the exported ZIP files in the following location:

    /data/cpd/data/exports/wkc/<export-instance-name>/<timestamp>/legacy-migration/bi/data/catalogs/catalog_hostname_XMETA.zip
    Delete the exported ZIP file.
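The check-then-delete workaround can be scripted. This is a heavily hedged sketch: the structure of export-summary.json is not documented here, so the "exportedAssets" field below is a hypothetical placeholder — inspect your actual summary file and adjust the pattern before relying on it. The demo writes a local sample file instead of reading from the migration pod.

```shell
# asset_count: pull a numeric "exportedAssets" count out of a summary JSON file.
# The field name is a hypothetical placeholder; adapt it to the real summary.
asset_count() { sed -n 's/.*"exportedAssets": *\([0-9][0-9]*\).*/\1/p' "$1"; }

# Local demo with a sample summary reporting zero exported BI assets:
SUMMARY=/tmp/export-summary.json
printf '{"exportedAssets": 0}\n' > "$SUMMARY"

if [ "$(asset_count "$SUMMARY")" = 0 ]; then
  echo "no BI assets exported; delete the ZIP under .../legacy-migration/bi/data/catalogs"
fi
```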
 
Empty extended data source (EDS) assets may still be created and cause import issues
If there are no EDS assets in the legacy InfoSphere Information Server, the empty ZIP export files may still be created and exported. This can cause import issues during the migration.
To avoid this, remove the exported ZIP files manually if the export summary indicates that no EDS assets were exported.
 
Workaround:
  1. Find the export summary file at the following location in the migration pod:

    /data/cpd/data/exports/wkc/<export-instance-name>/<timestamp>/legacy-migration/extended-data-sources/export-summary.json
    Read the summary file to confirm that no EDS assets were exported.
  2. Find the exported ZIP files in the following locations:

    /data/cpd/data/exports/wkc/<export-instance-name>/<timestamp>/legacy-migration/extended-data-sources/assettypes/data/catalogs/catalog.hostname_XMETA.zip
    and
    /data/cpd/data/exports/wkc/<export-instance-name>/<timestamp>/legacy-migration/extended-data-sources/assets/data/catalogs/catalog.hostname_XMETA.zip
    Delete both exported ZIP files.
     
Empty extension mapping document (EMD) assets may still be created and cause import issues
If there are no EMD assets in the legacy InfoSphere Information Server, empty ZIP files may still be created and exported. This can cause import issues during the migration.
To avoid this, remove the exported ZIP files manually if the export summary indicates that no EMD assets were exported.
 
Workaround:
  1. Find the export summary file at the following location in the migration pod:

    /data/cpd/data/exports/wkc/<export-instance-name>/<timestamp>/legacy-migration/extension-mapping-documents/export-summary.json
    Read the summary file to confirm that no EMD assets were exported.
  2. Find the exported ZIP file in the following location:

    /data/cpd/data/exports/wkc/<export-instance-name>/<timestamp>/legacy-migration/extension-mapping-documents/data/catalogs/catalog_hostname_XMETA.zip
    Delete the exported ZIP file.
 
 

[{"Type":"MASTER","Line of Business":{"code":"LOB76","label":"Data Platform"},"Business Unit":{"code":"BU048","label":"IBM Software"},"Product":{"code":"SSHGYS","label":"IBM Cloud Pak for Data"},"ARM Category":[{"code":"a8m3p000000UoRRAA0","label":"Administration-\u003EUpgrade"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"}]

Document Information

Modified date:
21 November 2025

UID

ibm17003929