Known issues and limitations for watsonx Assistant

The following known issues and limitations apply to watsonx Assistant.

The deploy-knative-eventing command fails with the error: multiNamespace InstallModeType not supported

Applies to: 5.2.0, 5.2.1, 5.2.2

Problem
This issue occurs when you run the setup-instance-topology command, because the namespace scoping approach that the deploy-knative-eventing installation uses conflicts with the default configuration of the IBM Namespace Scope Operator, which does not allow the MultiNamespace InstallModeType.
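Before you apply the solution, you can optionally confirm whether the MultiNamespace install mode is currently allowed by reading it from the ClusterServiceVersion (CSV) of the operator; the jsonpath expression in this check prints true or false:
oc get csv ibm-namespace-scope-operator.v${CPD_VERSION} -n ibm-knative-events -o jsonpath='{.spec.installModes[?(@.type=="MultiNamespace")].supported}'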
Solution
To remove the error messages, you can modify the setting to allow the MultiNamespace InstallModeType in the operator:
  1. Edit the ClusterServiceVersion (CSV) of the Namespace Scope Operator:
    oc edit csv ibm-namespace-scope-operator.v${CPD_VERSION} -n ibm-knative-events
  2. Set supported to true for the MultiNamespace install mode:
      
      installModes:
      - supported: true
        type: MultiNamespace
  3. Delete the old namespace-scope-operator pod.
    
    nss_op_pod=$(oc get pods -n ibm-knative-events -l name=ibm-namespace-scope-operator --no-headers | awk '{print $1}')
    oc delete pod $nss_op_pod
  4. Rerun the following command:
    cpd-cli manage setup-instance-topology --release=${VERSION} --cpd_operator_ns=ibm-knative-events --cpd_instance_ns=knative-eventing --license_acceptance=true
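Optionally, after the setup-instance-topology command completes, confirm that a new Namespace Scope Operator pod is running:
oc get pods -n ibm-knative-events -l name=ibm-namespace-scope-operator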

The deploy-knative-eventing command fails with the error: no matching resources found

Applies to: 5.2.0 and 5.2.1

Problem
When you run the cpd-cli manage deploy-knative-eventing command, it fails with the error no matching resources found after the message deployment.apps/kafka-controller condition met is displayed. This issue arises because no pods with the label app=kafka-broker-dispatcher are present in the knative-eventing namespace.
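You can confirm that you are hitting this issue by listing pods with that label (an optional check):
oc get pods -n knative-eventing -l app=kafka-broker-dispatcher
If the command returns no pods, this known issue applies.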
Solution
  1. Exec into the Docker container that runs the olm-utils image.
    docker exec -it olm-utils-play-v3 bash
  2. Check for the line that is to be removed.
    cat /opt/ansible/bin/deploy-knative-eventing | grep kafka-broker-dispatcher
    Output:
    oc wait pods -n knative-eventing --selector app=kafka-broker-dispatcher --for condition=Ready --timeout=60s
  3. Remove the line.
    sed -i '/kafka-broker-dispatcher/d' /opt/ansible/bin/deploy-knative-eventing
  4. Verify that the line is removed.
    cat /opt/ansible/bin/deploy-knative-eventing | grep kafka-broker-dispatcher
    If the line was removed successfully, the command returns no output.
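After the line is removed, exit the container and rerun the cpd-cli manage deploy-knative-eventing command with the same options that you used originally. Before you rerun it, you can optionally confirm that the kafka-controller deployment that is mentioned in the problem description is available:
oc get deployment kafka-controller -n knative-eventing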
    

Kafka patch command fails in deploy-knative-eventing step

Applies to: 5.2.0

Problem
When two Custom Resource Definitions (CRDs) with the short name kafka exist in the environment, the following command, which runs as part of the deploy-knative-eventing step, fails.
oc patch kafka knative-eventing-kafka -n knative-eventing --type=merge '-p={"spec":{"kafka":{"jvmOptions":{"javaSystemProperties":[{"name": "jdk.nativeDigest", "value": "false"}]}}}}' 
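To see which CRDs claim the kafka short name in your environment, you can list the API resources (an optional check):
oc api-resources | grep -w kafka
If more than one resource lists kafka as a short name, the unqualified name in the patch command can resolve to the wrong resource, which is why the workaround uses the fully qualified kafkas.ibmevents.ibm.com name.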
Solution
Run the following command so that the Knative stack comes up fully and the watsonx Assistant deployment reaches the fully verified state:
oc patch kafkas.ibmevents.ibm.com knative-eventing-kafka -n knative-eventing --type='merge' -p='{"spec":{"kafka":{"jvmOptions":{"javaSystemProperties":[{"name": "jdk.nativeDigest", "value": "false"}]}}}}' 
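Optionally, verify that the patch was applied by reading the setting back from the CR; the jsonpath expression is an example that prints the Java system properties that the patch sets:
oc get kafkas.ibmevents.ibm.com knative-eventing-kafka -n knative-eventing -o jsonpath='{.spec.kafka.jvmOptions.javaSystemProperties}'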

watsonx Assistant upgrade results in CR error

Applies to: 5.2.0

Problem
After you upgrade watsonx Assistant, the watsonx Assistant custom resource (CR) fails with an error. This error occurs because the values for blockStorageClass and storageClassName are not set during the upgrade process. When these values are null, the watsonx Assistant operator cannot reconcile the CR and the deployment fails.
Solution
  1. Check the following paths in the watsonx Assistant CR:
    .spec.cluster.blockStorageClass
    
    .spec.cluster.storageClassName
  2. Run the following command to inspect the CR:
    oc get watsonassistants.assistant.watson.ibm.com wa -n cpd -o yaml | grep -A 7 "cluster:"
    If either the blockStorageClass or storageClassName field is missing or set to null, the CR fails validation.
  3. To resolve the issue, pass the appropriate storage class values in the upgrade command by using the following flags:
    cpd-cli manage apply-cr \
    --components=watson_assistant \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --param-file=/tmp/work/install-options.yml \
    --block_storage_class=<your-block-storage-class> \
    --file_storage_class=<your-file-storage-class> \
    --license_acceptance=true \
    --upgrade=true
    These values must match the ones that are used in your current deployment.
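If you are not sure which storage class values to pass, you can list the storage classes that are available on the cluster (an optional check):
oc get storageclass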

Preview page not available when watsonx Assistant is created by using the API

Applies to: 5.2.0

Problem
The Preview page is not accessible when watsonx Assistant is created using the API.

Preview page not available for Watson Discovery integration

Applies to: 5.2.0

Problem
The Preview page does not appear when you integrate Watson Discovery in watsonx Assistant.

Postgres pod goes to CrashLoopBackOff status after the upgrade

Applies to: 5.2.0

Problem
When you upgrade watsonx Assistant, one of the Postgres pods goes to the CrashLoopBackOff state. This issue occurs because the data on that Postgres replica is corrupted.
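To confirm that the crash is related to the data on that replica, you can review the logs from the previous container restart of the failing pod. The pod name in this example matches the one that is used in the following steps; replace it with the pod that is failing in your environment:
oc logs wa-postgres-3 --previous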
Solution
  1. Run the following command to find the watsonx Assistant Postgres pod in the CrashLoopBackOff state.
    oc get pods --no-headers | grep -Ev "Comp|0/0|1/1|2/2|3/3|4/4|5/5|6/6|7/7|8/8" | grep wa-postgres
    The output looks like this:
    wa-postgres-3 0/1 CrashLoopBackOff 115 (2m30s ago) 9h
  2. Run the following command to identify if the Postgres pod is the primary pod:
    oc get cluster | grep wa-postgres
    The output looks like this:
    wa-postgres         2d20h   3           3       Cluster in healthy state   wa-postgres-1
    Where wa-postgres-1 is the primary pod.
    Tip: If the primary instance is in CrashLoopBackOff status, do the steps in the Postgres cluster in bad state topic.
  3. Delete the non-primary pod and its PersistentVolumeClaim (PVC) to create a new pod that syncs with the primary pod.
    Warning: Do not delete a primary pod because it can lead to database downtime and potential data loss.
    oc delete pod/wa-postgres-3 pvc/wa-postgres-3
    Important: Ensure that the EDB operator is running before deleting the pod and its PVC.
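For example, you can check that the EDB operator pod is running before you delete the pod and its PVC, and then confirm that the cluster returns to a healthy state after the replacement pod syncs with the primary pod. The namespace variable and pod name pattern in the first command are examples only; adjust them for your installation:
# The operator namespace and pod name pattern vary by installation
oc get pods -n ${PROJECT_CPD_INST_OPERATORS} | grep postgresql-operator
oc get cluster | grep wa-postgres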