Known issues and limitations for watsonx.governance

The following known issues and limitations apply to watsonx.governance.

These known issues and limitations apply specifically to the watsonx.governance service. You can also check the known issues for the component services of AI Factsheets, Watson OpenScale, and IBM OpenPages.

Known issues

Upgrade to 5.0.x fails with an error

Applies to: 5.0.x

What's happening

When you attempt to upgrade from any previous version to version 5.0.3, the upgrade fails with the following message:

message: "The conditional check '( \"ReadWriteOnce\" == dbupgrade_logs_pvc.resources[0].spec.accessModes[0]
   )' failed. The error was: error while evaluating conditional (( \"ReadWriteOnce\"
   == dbupgrade_logs_pvc.resources[0].spec.accessModes[0] )): list object
   has no element 0. list object has no element 0\n\nThe error appears
   to be in '/opt/ansible/branch/roles/openpagesinstance/tasks/upgrade.yml':
   line 28, column 7, but may\nbe elsewhere in the file depending on
   the exact syntax problem.\n\nThe offending line appears to be:\n\n
   \ block:\n    - name: Delete upgrade log PVC when accessmode is RWO\n
   \     ^ here\n\nopenpagesinstance role has failed. See earlier output
   for exact error."
   reason: Failed

How to fix it

Complete the following steps only if you see multiple db-upgrade pods running. For example:

    op-1234567891234567-db-upgrade-2btsl        1/1     Running     0               22h
    op-1234567891234567-db-upgrade-4jk79        1/1     Running     6 (36h ago)     39h
  1. Delete all the db-upgrade pods.
oc delete po $(oc get po -lapp=openpages | grep "db-upgrade" | awk '{ print $1 }' | tr '\n' ' ')
  2. Wait for the operator to recreate the upgrade job. Ensure that only one pod is running (see the verification commands after these steps).
  3. Delete the operator so that it does not create another db-upgrade pod.
oc delete clusterserviceversion.operators.coreos.com/ibm-cpd-openpages-operator.v7.2.0 subscription.operators.coreos.com/ibm-cpd-openpages-operator -n ${PROJECT_CPD_INST_OPERATORS}
  4. After the db-upgrade job finishes, recreate the operator from the Operator Hub on the OpenShift console in the operator namespace.
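
You can confirm that only one db-upgrade pod remains and follow its progress with commands similar to the following sketch, which reuses the label selector from the deletion step. The pod name is a placeholder:

    oc get po -lapp=openpages | grep "db-upgrade"
    oc logs -f <db-upgrade-pod-name>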

Object properties are reverted to their defaults after an upgrade

Applies to: 5.0.x

This issue can occur in the following cases:

  • You upgrade watsonx.governance from 4.8.x to 5.0.x.
  • You integrate an existing OpenPages instance with watsonx.governance.

What's happening

The properties of object types do not reflect the values that you configured. The properties are using the default values instead of your custom values. This issue can impact the following properties:

  • The property that enables the Title component
  • The property that makes the Title field required
  • The property that sets how the Name field is displayed
  • The property that enables tagging for an object type

How to fix it

To fix this issue, re-apply the custom properties manually.

To enable tagging for an object type, do the following steps:

  1. Open the Administration menu and click Solutions > Tags.
  2. In the System Settings panel, select the object types for which you want to re-enable tagging.

To set other object type properties, do the following steps:

  1. Open the Administration menu and click Solutions > Object Types.
  2. Click the object type, and then click Edit.
  3. Update the configuration. For example, to re-enable the Title component, click Enable Title Component, and then select a name display option. You can also make the Title required or optional.

Adding users to an AI use case does not allow for search in groups

Applies to: 5.0.0 and later

When you add users as members to an AI use case, you cannot search for users that belong to groups. You must add users individually.

Error when you click the Model object type

Applies to: 5.0.2 and later

What's happening

When you go to the Administration menu, click Solution Configuration > Object Types, and then click Model, you get the following error:

WAF1GHQYECTE - OP-00002
The requested operation could not be completed.

How to fix it

To resolve this issue, do the following steps:

  1. Get the name of the RabbitMQ pod and the OpenPages application server pod:

    oc get po -licpdsupport/addOnId=openpages,icpdsupport/app=rabbitmq-server
    oc get po -lapp=openpages,icpdsupport/module=openpages-app
    
  2. Restart the RabbitMQ pod. Wait for the process to complete.

    oc delete po -lrelease=openpages-<instance_name>-<instance_id>,icpdsupport/app=rabbitmq-server
    
  3. Restart the OpenPages application server pod:

    1. Scale down to 0 replicas:
    oc scale --replicas=0 sts/openpages-<instance_name>-sts

    2. Wait until all application server pods are deleted (see the check after these steps).
    3. Scale up to the number of replicas that you want to use for the application server:
    oc scale --replicas=<#_of_replicas> sts/openpages-<instance_name>-sts
    
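To confirm that the restart completed, you can check the pod status by using the same label selectors that are used in the preceding steps. This is a minimal sketch; adjust the placeholders to match your instance:

    oc get po -lrelease=openpages-<instance_name>-<instance_id>,icpdsupport/app=rabbitmq-server
    oc get po -lapp=openpages,icpdsupport/module=openpages-app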

Custom facts do not display as expected in exported factsheet PDF

Applies to: 5.0.1 and later

If you create custom facts by using the API and then add values for the custom facts, the values display inconsistently when you export a PDF report for the factsheet. The sequence is as follows:

  1. Track an AI asset in an AI use case.
  2. Create custom facts using API calls and add values.
  3. Export a report of the asset factsheet as a PDF file.

The custom values do not display in the Additional details section of the asset factsheet report as expected. To see the custom values, export a PDF report for the AI use case instead. The values display on the Life cycle page of the exported PDF.

Unable to generate report from deployment space for a detached prompt template

Applies to: Cloud Pak for Data 5.0

If a deployment space contains a detached prompt template that was promoted from a project, exporting a report from the associated factsheet might fail with this error:

Model export error: An error occurred while exporting the model.

One known cause of this failure is that the factsheet contains attachments for the AI asset in the project. To resolve the problem, make sure that you have at least Viewer access to the project that contains the prompt template.

Metrics computed for a prompt template on payload and feedback data are not synced completely to Governance console

Applies to: Cloud Pak for Data 5.0

If you are tracking a prompt template in an AI use case that is synced with the Governance console, and an evaluation in a production space uses both feedback and payload data, the metrics that are computed on payload data are not synced to the Governance console. The following steps illustrate the problem.

  1. Track a prompt template asset to a use case synced with Governance console.
  2. Promote the prompt template to a production deployment space.
  3. Create a new deployment for the prompt template.
  4. Evaluate the prompt template by using both feedback and payload data. For example, evaluate the output for the Flesch readability score.
  5. Review the factsheet for the results of the evaluation. You will see metric values for both payload and feedback data.
  6. On the Governance console, the metrics show a value for the feedback data only. No result displays for the payload data.

Unable to open OpenPages after you enable the integration with Governance console

Applies to: Cloud Pak for Data 5.0.0, 5.0.1, and 5.0.2
Fixed in: Cloud Pak for Data 5.0.3

When you enable the integration with Governance console, you get the following error message:

Error occurred while setting OpenPages.
Operation failed due to an unexpected error.

Information is not synced to the Governance console.

This issue occurs when you enable the integration with Governance console and you have foundation models that support the Translation task. The Translation task is missing from the following fields in Governance console:

  • MRG-Model:Approved Tasks
  • MRG-Model:Supported Tasks
  • MRG-Model:Task Type

To resolve this issue, do the following steps to update the fields:

  1. Log in to Governance console as an administrator.
  2. Enable System Admin Mode.
  3. Open the Administration menu and click Solution Configuration > Object Types.
  4. Click Model.
  5. Click Fields, and then find the MRG-Model field group.
  6. Click the Approved Tasks field.
  7. Under Enumerated string values, click New Value.
  8. Type Translation for both the name and label, and then click Create.
  9. Click Done.
  10. Repeat steps 6 through 9 for the Supported Tasks and Task Type fields.
  11. Disable System Admin Mode.

Limitations

Short text responses generate lower answer relevance scores

Applies to: 5.0.3

When your LLM generates short or single-word answers to prompts for retrieval-augmented generation (RAG) tasks, your prompt template evaluation might calculate lower answer relevance metric scores.

Uploading test data with CSV files for prompt template evaluations is not supported

Applies to: 5.0.3

When you select test data for prompt template evaluations, watsonx.governance does not support uploading CSV files from projects and spaces.

Scan files for malicious content

Files that you upload are not automatically checked for malicious content. Before you upload a file, run a static scan against the file to verify that it does not contain malicious content.
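
For example, if ClamAV is available in your environment, you can run a scan similar to the following sketch before you upload a file. The file path is a placeholder; any equivalent scanning tool that your organization approves works as well:

    # Report only infected files; assumes that the ClamAV clamscan CLI is installed
    clamscan --infected /path/to/file-to-upload.csv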

Factsheets are not available for tuned models or for prompt templates for tuned models

Applies to: 5.0.0 and later

Currently, you cannot track the details for a tuned model asset or for the prompt template for a tuned model in an AI use case.

Generative AI quality evaluations generate answer quality metrics with feedback data only

Applies to: 5.0.0
Fixed in: 5.0.1

When you evaluate prompt templates in watsonx.governance, you must provide feedback data to calculate answer quality metrics during generative AI quality evaluations to measure how well your model performs retrieval-augmented generation (RAG) tasks. If you provide payload data only, watsonx.governance cannot calculate answer quality metrics.