Known issues and limitations for watsonx Assistant
The following known issues and limitations apply to watsonx Assistant.
- The `deploy-knative-eventing` command fails with error: multiNamespace InstallModeType not supported
- The `deploy-knative-eventing` command fails with error: no matching resources found
- Kafka patch command fails in the `deploy-knative-eventing` step
- watsonx Assistant upgrade results in CR error
- Preview page not available when watsonx Assistant is created using the API
- Preview page not available for Watson Discovery integration
- Postgres pod goes to CrashLoopBackOff status after the upgrade
For a complete list of known issues and troubleshooting information for all versions of watsonx Assistant, see Troubleshooting known issues. For a complete list of known issues for IBM® Software Hub, see Limitations and known issues in IBM Software Hub.
The `deploy-knative-eventing` command fails with error: multiNamespace InstallModeType not supported
Applies to: 5.2.0, 5.2.1, 5.2.2
- Problem
- This issue arises from the interaction between the namespace scoping approach that the `deploy-knative-eventing` installation uses and the default behavior of the IBM Namespace Scope Operator when you run the `setup-instance-topology` command.
- Solution
- To remove the error messages, modify the setting to allow the `MultiNamespace` `InstallModeType` in the operator (a scripted alternative is sketched after these steps):
  1. Edit the namespace scope operator CSV file:
     ```bash
     oc edit csv ibm-namespace-scope-operator.v${CPD_VERSION} -n ibm-knative-events
     ```
  2. Set the `MultiNamespace` parameter to `true`:
     ```yaml
     installModes:
     - supported: true
       type: MultiNamespace
     ```
  3. Delete the old namespace-scope-operator pod:
     ```bash
     nss_op_pod=$(oc get pods -n ibm-knative-events -l name=ibm-namespace-scope-operator --no-headers | awk '{print $1}')
     oc delete pod $nss_op_pod
     ```
  4. Rerun the following command:
     ```bash
     cpd-cli manage setup-instance-topology --release=${VERSION} --cpd_operator_ns=ibm-knative-events --cpd_instance_ns=knative-eventing --license_acceptance=true
     ```
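If you prefer to script the CSV change rather than edit it interactively, the following is a minimal sketch of steps 1 and 2, assuming `jq` is available on the workstation and the CSV name matches the pattern from step 1. Verify the resulting CSV before you delete the operator pod.

```bash
# Sketch: set supported=true on the MultiNamespace install mode without an
# interactive edit. Assumes jq is installed on the workstation.
CSV=$(oc get csv -n ibm-knative-events -o name | grep ibm-namespace-scope-operator)
IDX=$(oc get "$CSV" -n ibm-knative-events -o json \
  | jq '.spec.installModes | map(.type == "MultiNamespace") | index(true)')
oc patch "$CSV" -n ibm-knative-events --type=json \
  -p "[{\"op\":\"replace\",\"path\":\"/spec/installModes/${IDX}/supported\",\"value\":true}]"
```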
The `deploy-knative-eventing` command fails with error: no matching resources found
Applies to: 5.2.0 and 5.2.1
- Problem
- When you run the `cpd-cli manage deploy-knative-eventing` command, it fails with `error: no matching resources found` after the message `deployment.apps/kafka-controller condition met`. This issue arises because no pods with the label `app=kafka-broker-dispatcher` are present.
- Solution
- Complete the following steps (a one-shot variant is sketched after these steps):
  1. Exec into the Docker container that runs olm-utils:
     ```bash
     docker exec -it olm-utils-play-v3 bash
     ```
  2. Check for the line that is to be removed:
     ```bash
     cat /opt/ansible/bin/deploy-knative-eventing | grep kafka-broker-dispatcher
     ```
     Output:
     ```
     oc wait pods -n knative-eventing --selector app=kafka-broker-dispatcher --for condition=Ready --timeout=60s
     ```
  3. Remove the line:
     ```bash
     sed -i '/kafka-broker-dispatcher/d' /opt/ansible/bin/deploy-knative-eventing
     ```
  4. Verify that the line is removed:
     ```bash
     cat /opt/ansible/bin/deploy-knative-eventing | grep kafka-broker-dispatcher
     ```
     The command returns no output when the line is removed.
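As a one-shot alternative, the edit can also be applied from the host without an interactive shell. This is a sketch that assumes the container name `olm-utils-play-v3` from step 1.

```bash
# Sketch: remove the kafka-broker-dispatcher wait from the host in one pass.
docker exec olm-utils-play-v3 \
  sed -i '/kafka-broker-dispatcher/d' /opt/ansible/bin/deploy-knative-eventing
# Verify: grep exits non-zero (and prints nothing) when the line is gone.
docker exec olm-utils-play-v3 \
  grep kafka-broker-dispatcher /opt/ansible/bin/deploy-knative-eventing \
  || echo "line removed"
```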
Kafka patch command fails in the `deploy-knative-eventing` step
Applies to: 5.2.0
- Problem
- When two custom resource definitions (CRDs) with the short name `kafka` exist in the environment, the following command, which runs as part of the `deploy-knative-eventing` step, fails:
  ```bash
  oc patch kafka knative-eventing-kafka -n knative-eventing --type=merge '-p={"spec":{"kafka":{"jvmOptions":{"javaSystemProperties":[{"name": "jdk.nativeDigest", "value": "false"}]}}}}'
  ```
- Solution
- Run the following command, which uses the fully qualified CRD name, so that the `knative` stack comes up fully and the watsonx Assistant deployment reaches the fully verified state:
  ```bash
  oc patch kafkas.ibmevents.ibm.com knative-eventing-kafka -n knative-eventing --type='merge' -p='{"spec":{"kafka":{"jvmOptions":{"javaSystemProperties":[{"name": "jdk.nativeDigest", "value": "false"}]}}}}'
  ```
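To confirm that a short-name collision is the cause, you can list the CRDs in the cluster that claim the short name `kafka`. The following is a sketch, assuming `jq` is available; it is not part of the documented fix.

```bash
# Sketch: list CRDs whose singular name or short names include "kafka".
# Two or more results indicate the ambiguity that makes "oc patch kafka" fail.
oc get crd -o json | jq -r '.items[]
  | select((.spec.names.singular == "kafka")
      or ((.spec.names.shortNames // []) | index("kafka")))
  | .metadata.name'
```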
watsonx Assistant upgrade results in CR error
Applies to: 5.2.0
- Problem
- After you upgrade watsonx Assistant, the watsonx Assistant custom resource (CR) fails with an error. This error occurs because the values for `blockStorageClass` and `storageClassName` are not set during the upgrade process. When these values are null, the watsonx Assistant operator cannot reconcile the CR and the deployment fails.
- Solution
  1. Check the following paths in the watsonx Assistant CR:
     ```
     .spec.cluster.blockStorageClass
     .spec.cluster.storageClassName
     ```
  2. Run the following command to inspect the CR:
     ```bash
     oc get watsonassistants.assistant.watson.ibm.com wa -n cpd -o yaml | grep -A 7 "cluster:"
     ```
     If the `blockStorageClass` or `storageClassName` field is missing or set to null, the CR fails validation (a command to read the current values is sketched after these steps).
  3. To resolve the issue, pass the appropriate storage class values in the upgrade command by using the following flags:
     ```bash
     cpd-cli manage apply-cr \
       --components=watson_assistant \
       --release=${VERSION} \
       --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
       --param-file=/tmp/work/install-options.yml \
       --block_storage_class <your-block-storage-class> \
       --file_storage_class <your-file-storage-class> \
       --license_acceptance=true \
       --upgrade=true
     ```
     These values must match the ones that are used in your current deployment.
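Before you run the upgrade command, you can print the storage class values that are currently set on the CR so that you pass matching values. This is a minimal sketch that assumes the CR name `wa` and the `cpd` namespace from step 2; adjust both to your deployment.

```bash
# Sketch: print the current storage class values from the CR.
# An empty line means the corresponding field is unset, which is the
# condition that triggers this issue.
oc get watsonassistants.assistant.watson.ibm.com wa -n cpd \
  -o jsonpath='{.spec.cluster.blockStorageClass}{"\n"}{.spec.cluster.storageClassName}{"\n"}'
```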
Preview page not available when watsonx Assistant is created using the API
Applies to: 5.2.0
- Problem
- The Preview page is not accessible when watsonx Assistant is created using the API.
Preview page not available for Watson Discovery integration
Applies to: 5.2.0
- Problem
- The Preview page does not appear when you integrate Watson Discovery in watsonx Assistant.
Postgres pod goes to CrashLoopBackOff status after the upgrade
Applies to: 5.2.0
- Problem
- When you upgrade watsonx Assistant, one of the Postgres pods goes to the `CrashLoopBackOff` state. This issue occurs because your data is corrupted.
- Solution
  1. Run the following command to find the watsonx Assistant Postgres pod in the `CrashLoopBackOff` state:
     ```bash
     oc get pods --no-headers | grep -Ev "Comp|0/0|1/1|2/2|3/3|4/4|5/5|6/6|7/7|8/8" | grep wa-postgres
     ```
     The output looks like this:
     ```
     wa-postgres-3   0/1   CrashLoopBackOff   115 (2m30s ago)   9h
     ```
  2. Run the following command to identify whether the Postgres pod is the primary pod:
     ```bash
     oc get cluster | grep wa-postgres
     ```
     The output looks like this:
     ```
     wa-postgres   2d20h   3   3   Cluster in healthy state   wa-postgres-1
     ```
     where `wa-postgres-1` is the primary pod.
     Tip: If the primary instance is in `CrashLoopBackOff` status, complete the steps in the Postgres cluster in bad state topic.
  3. Delete the non-primary pod and its PersistentVolumeClaim (PVC) to create a new pod that syncs with the primary pod.
     Warning: Do not delete a primary pod because doing so can lead to database downtime and potential data loss.
     Important: Ensure that the EDB operator is running before you delete the pod and its PVC (a check is sketched after these steps).
     ```bash
     oc delete pod/wa-postgres-3 pvc/wa-postgres-3
     ```
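Before you delete the replica, you can confirm that the EDB operator pod is running. The following is a sketch; the label selector is an assumption that can differ between installations, so adjust it to match your environment.

```bash
# Sketch: check that the EDB Postgres operator is up before deleting the
# replica pod and its PVC. The label below is an assumption; adjust as needed.
oc get pods -A -l app.kubernetes.io/name=cloud-native-postgresql
# After the delete, watch the replacement replica come back and sync:
oc get pods -w | grep wa-postgres
```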