Known issues and limitations for watsonx.governance
The following known issues and limitations apply to watsonx.governance.
These known issues and limitations apply specifically to the watsonx.governance service. You can also check the known issues for the component services of AI Factsheets, Watson OpenScale, and IBM OpenPages.
Known issues
Upgrade to 5.0.x fails with an error
Applies to: 5.0.x
What's happening
When you attempt to upgrade from any previous version to version 5.0.3, the upgrade fails with the following message:
message: "The conditional check '( \"ReadWriteOnce\" == dbupgrade_logs_pvc.resources[0].spec.accessModes[0]
)' failed. The error was: error while evaluating conditional (( \"ReadWriteOnce\"
== dbupgrade_logs_pvc.resources[0].spec.accessModes[0] )): list object
has no element 0. list object has no element 0\n\nThe error appears
to be in '/opt/ansible/branch/roles/openpagesinstance/tasks/upgrade.yml':
line 28, column 7, but may\nbe elsewhere in the file depending on
the exact syntax problem.\n\nThe offending line appears to be:\n\n
\ block:\n - name: Delete upgrade log PVC when accessmode is RWO\n
\ ^ here\n\nopenpagesinstance role has failed. See earlier output
for exact error."
reason: Failed
How to fix it
Complete the following steps only if you see multiple db-upgrade pods running. For example:
op-1234567891234567-db-upgrade-2btsl 1/1 Running 0 22h
op-1234567891234567-db-upgrade-4jk79 1/1 Running 6 (36h ago) 39h
- Delete all the db-upgrade pods.
oc delete po $(oc get po -lapp=openpages | grep "db-upgrade" | awk '{ print $1 }' | tr '\n' ' ')
- Wait for the operator to recreate the upgrade job. Ensure only one pod is running.
- Delete the operator so that it does not create another db-upgrade pod.
oc delete clusterserviceversion.operators.coreos.com/ibm-cpd-openpages-operator.v7.2.0 subscription.operators.coreos.com/ibm-cpd-openpages-operator -n ${PROJECT_CPD_INST_OPERATORS}
- After the db-upgrade job finishes (you can verify its status as shown after these steps), recreate the operator from the Operator Hub on the OpenShift console in the operator namespace.
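To confirm that the db-upgrade job finished before you recreate the operator, you can check that the db-upgrade pod reached the Completed status. The following check is a sketch that reuses the app=openpages label from the earlier delete command:
# list the db-upgrade pods and check the STATUS column for Completed
oc get po -lapp=openpages | grep "db-upgrade"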
Object properties are reverted to their defaults after an upgrade
Applies to: 5.0.x
This issue can occur in the following cases:
- You upgrade watsonx.governance from 4.8.x to 5.0.x.
- You integrate an existing OpenPages instance with watsonx.governance.
What's happening
The properties of object types do not reflect the values that you configured. The properties are using the default values instead of your custom values. This issue can impact the following properties:
- The property that enables the Title component
- The property that makes the Title field required
- The property that sets how the Name field is displayed
- The property that enables tagging for an object type
How to fix it
To fix this issue, re-apply the custom properties manually.
To enable tagging for an object type, do the following steps:
- Open the Administration menu and click Solutions > Tags.
- In the System Settings panel, select the object types to re-enable tagging.
To set other object type properties, do the following steps:
- Open the Administration menu and click Solutions > Object Types.
- Click the object type, and then click Edit.
- Update the configuration. For example, to re-enable the Title component, click Enable Title Component, and then select a name display option. You can also make the Title field required or optional.
Adding users to an AI use case does not allow for search in groups
Applies to: 5.0.0 and later
When you add users as members to an AI use case, you cannot search for users who belong to groups. You must add users individually.
Error when you click the Model object type
Applies to: 5.0.2 and later
What's happening
When you go to the Administration menu, click Solution Configuration > Object Types, and then click Model, you get the following error:
WAF1GHQYECTE - OP-00002
The requested operation could not be completed.
How to fix it
To resolve this issue, do the following steps:
- Get the name of the RabbitMQ pod and the OpenPages application server pod:
oc get po -licpdsupport/addOnId=openpages,icpdsupport/app=rabbitmq-server
oc get po -lapp=openpages,icpdsupport/module=openpages-app
- Restart the RabbitMQ pod. Wait for the process to complete.
oc delete po -lrelease=openpages-<instance_name>-<instance_id> -l icpdsupport/app=rabbitmq-server
- Restart the OpenPages application server pod (you can verify the result as shown after these steps):
- Scale down to 0 replicas:
oc scale --replicas=0 sts/openpages-<instance_name>-sts
- Wait until all application server pods are deleted.
- Scale up to the number of replicas you want to use for the application server:
oc scale --replicas=<#_of_replicas> sts/openpages-<instance_name>-sts
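To confirm that the restart is complete, check that the application server pods return to the Running state. The following check is a sketch that reuses the labels from the first step of this procedure:
# list the OpenPages application server pods and check the STATUS column
oc get po -lapp=openpages,icpdsupport/module=openpages-app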
Custom facts do not display as expected in exported factsheet PDF
Applies to: 5.0.1 and later
If you create custom facts by using the API and then add values for those facts, the values display inconsistently when you export a PDF report for the factsheet. The sequence is as follows:
- Track an AI asset in an AI use case.
- Create custom facts using API calls and add values.
- Export a report of the asset factsheet as a PDF file.
Contrary to what you might expect, the custom values do not display in the Additional details section of the asset factsheet report. As a workaround, export a PDF report for the AI use case instead; the custom values display on the Life cycle page of that exported PDF.
Unable to generate report from deployment space for a detached prompt template
Applies to: Cloud Pak for Data 5.0
If a deployment space contains a detached prompt template that was promoted from a project, exporting a report from the associated factsheet might fail with this error:
Model export error: An error occurred wile exporting the model.
One known condition for this failure is when the factsheet contains attachments for the AI asset in the project. To resolve the problem, make sure that you can access the project that contains the prompt template with at least Viewer access.
Metrics computed for a prompt template on payload and feedback data are not synced completely to Governance console
Applies to: Cloud Pak for Data 5.0
If you track a prompt template in an AI use case that is synced with the Governance console and evaluate it in a production space with both feedback and payload data, the metrics that are computed on the payload data are not synced to the Governance console. The following steps illustrate the problem.
- Track a prompt template asset to a use case synced with Governance console.
- Promote the prompt template to a production deployment space.
- Create a new deployment for the prompt template.
- Evaluate the prompt template by using both feedback and payload data. For example, evaluate the output for the Flesch readability score.
- Review the factsheet for the results of the evaluation. You will see metric values for both payload and feedback data.
- On the Governance console, the metrics show a value for the feedback data only; no result displays for the payload data.
Unable to open OpenPages after you enable the integration with Governance console
Applies to: Cloud Pak for Data 5.0.0, 5.0.1, and 5.0.2
Fixed in: Cloud Pak for Data 5.0.3
When you enable the integration with Governance console, you get the following error message:
Error occurred while setting OpenPages.
Operation failed due to an unexpected error.
Information is not synced to the Governance console.
This issue occurs when you enable the integration with Governance console and you have foundation models that support the Translation task. The Translation task is missing from the following fields in Governance console:
- MRG-Model:Approved Tasks
- MRG-Model:Supported Tasks
- MRG-Model:Task Type
To resolve this issue, do the following steps to update the fields:
- Log in to Governance console as an administrator.
- Enable System Admin Mode.
- Click the Administration menu and select Solution Configuration > Object Types.
- Click Model.
- Click Fields, and then find the MRG-Model field group.
- Click the Approved Tasks field.
- Under Enumerated string values, click New Value.
- Type Translation for both the name and label, and then click Create.
- Click Done.
- Repeat steps 6 to 9 for the Supported Tasks and Task Type fields.
- Disable System Admin Mode.
Limitations
Short text responses generate lower answer relevance scores
When your LLM generates short or single-word answers to prompts for retrieval-augmented generation (RAG) tasks, your prompt template evaluation might calculate lower answer relevance metric scores.
Applies to: 5.0.3
Uploading test data with CSV files for prompt template evaluations is not supported
When you select test data for prompt template evaluations, watsonx.governance does not support uploading CSV files from projects and spaces.
Applies to: 5.0.3
Scan files for malicious content
Files you upload are not automatically checked for malicious content. Before you upload a file, run a static scan against the file to ensure it does not contain malicious content.
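For example, if your organization uses the open-source ClamAV scanner (an assumption; substitute whatever scanner your security team approves), a command like the following scans a directory and reports only the files that are flagged as infected. The directory name is a placeholder:
# scan a local directory (placeholder path) before uploading its files
clamscan --recursive --infected ./files-to-upload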
Factsheets are not available for tuned models or for prompt templates for tuned models
Applies to: 5.0.0 and later
Currently, you cannot track the details for a tuned model asset or for the prompt template for a tuned model in an AI use case.
Generative AI quality evaluations generate answer quality metrics with feedback data only
Applies to: 5.0.0
Fixed in: 5.0.1
When you evaluate prompt templates in watsonx.governance, you must provide feedback data to calculate answer quality metrics during generative AI quality evaluations, which measure how well your model performs retrieval-augmented generation (RAG) tasks. If you provide payload data, watsonx.governance cannot calculate answer quality metrics.