Known issues and limitations for watsonx.governance

The following known issues and limitations apply to watsonx.governance.

These known issues and limitations apply specifically to the watsonx.governance service. You can also check the known issues for the component services of AI Factsheets, Watson OpenScale, and IBM OpenPages.

Known issues

Evaluation job fails when CSV file name contains spaces

If the name of a CSV file uploaded in the project UI contains spaces, the evaluation job might fail to run.

Workaround: Rename the CSV file to remove all spaces from the file name before uploading it.
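
If you have many such files, a minimal shell sketch like the following renames them in bulk; the directory and file names are placeholders.

    # Replace spaces with underscores in every CSV file name in the
    # current directory before uploading the files to the project.
    for f in *' '*.csv; do
      [ -e "$f" ] || continue   # nothing to do if no names contain spaces
      mv -- "$f" "${f// /_}"
    done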

Applies to: 5.1.x

Synchronization issues when prompt templates have the same name

Applies to: 5.1.x

This issue occurs when:

  • You are tracking prompt templates in AI use cases that are synced with Governance console.
  • You have two or more prompt templates with the same name in watsonx.governance.

When the sync job identifies prompt templates with the same name, it applies autonaming to the subsequent templates to give each prompt template a unique ID. As a result, the prompt templates with duplicate names appear in Governance console with an auto-generated ID instead of the name from watsonx.governance.

Note: If autonaming is not enabled, the sync job fails with an error that the prompt template name already exists.

Unable to create a use case in watsonx.governance

Applies to: 5.1.3

What's happening

When you create an AI use case in watsonx.governance, you get the following error:

AI Use Case creation failed. We encountered a problem. Cannot invoke " because the return value of " is null

How to fix it

If OpenPages is installed in your environment, enable the integration with OpenPages and then disable it.

If OpenPages is not installed in your environment, run the following command as an admin user:

curl -X 'PATCH' \
  '<your cpd environment url>/v1/aigov/model_inventory/grc/config' \
  -H 'accept: */*' \
  -H 'Authorization: <your auth token>'

For information about authorization tokens, see Generating an API authorization token.
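
As an illustration, on Cloud Pak for Data you can typically request a bearer token from the platform authorization endpoint; the URL and credentials below are placeholders, and the endpoint path is an assumption based on the standard Cloud Pak for Data API.

    # Request a bearer token; substitute your environment URL and credentials.
    curl -k -X POST '<your cpd environment url>/icp4d-api/v1/authorize' \
      -H 'Content-Type: application/json' \
      -d '{"username": "<username>", "password": "<password>"}'

The token value in the response can then be passed in the Authorization header of the PATCH request.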

Maximum sample size setting cannot be configured for drift v2 evaluations

You can't configure the maximum sample size setting after you configure drift v2 evaluations. By default, the maximum sample size is set to 1000.

To work around this issue, configure the maximum sample size on the Advanced settings screen while you are configuring drift v2 evaluations. You can specify any value for the maximum sample size that is greater than the minimum sample size.

Asset browser contains duplicate file

Applies to: 5.1.2

When you attempt to select a test data file for evaluating prompts in a project, the asset browser might display duplicate files.

To work around this issue, you can select one of the duplicate files to continue the evaluation.

Local location setting is not available for system setup

Applies to: 5.1.1

Fixed in: 5.1.2

When you are configuring machine learning model evaluations, you cannot select Local in the Location menu to specify a location for your machine learning provider on the system setup page.

To work around this issue, you must select Local as the location when you first configure the system setup.

Evaluation Studio fails to detect test data columns

Applies to: 5.1.2

If you provide test data with CSV files that use the semicolon or pipe delimiter formats, Evaluation Studio fails to configure your experiment and displays an error that indicates that the label column is not present in the data.

To fix this issue, provide your test data in a CSV file that uses the comma delimiter format.
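
If regenerating the file is not practical, a naive conversion sketch follows; it assumes that the fields themselves contain no embedded semicolons, quotes, or commas, and the file names are placeholders.

    # Convert a semicolon-delimited file to comma-delimited. For files with
    # quoted or embedded delimiters, use a CSV-aware tool instead.
    sed 's/;/,/g' test_data_semicolon.csv > test_data_comma.csv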

Upgrade to 5.1.x fails with an error

Applies to: 5.1.x

What's happening

When you attempt to upgrade from any previous version to version 5.1, the upgrade fails with the following message:

message: "The conditional check '( \"ReadWriteOnce\" == dbupgrade_logs_pvc.resources[0].spec.accessModes[0]
   )' failed. The error was: error while evaluating conditional (( \"ReadWriteOnce\"
   == dbupgrade_logs_pvc.resources[0].spec.accessModes[0] )): list object
   has no element 0. list object has no element 0\n\nThe error appears
   to be in '/opt/ansible/branch/roles/openpagesinstance/tasks/upgrade.yml':
   line 28, column 7, but may\nbe elsewhere in the file depending on
   the exact syntax problem.\n\nThe offending line appears to be:\n\n
   \ block:\n    - name: Delete upgrade log PVC when accessmode is RWO\n
   \     ^ here\n\nopenpagesinstance role has failed. See earlier output
   for exact error."
   reason: Failed

How to fix it

Complete the following steps only if you see multiple db-upgrade pods running. For example:

    op-1234567891234567-db-upgrade-2btsl        1/1     Running     0               22h
    op-1234567891234567-db-upgrade-4jk79        1/1     Running     6 (36h ago)     39h
  1. Delete all the db-upgrade pods.
    oc delete po $(oc get po -lapp=openpages | grep "db-upgrade" | awk '{ print $1 }' | tr '\n' ' ')
  2. Wait for the operator to recreate the upgrade job. Ensure that only one pod is running.
  3. Delete the operator so that it does not create another db-upgrade pod.
    oc delete clusterserviceversion.operators.coreos.com/ibm-cpd-openpages-operator.v7.2.0 subscription.operators.coreos.com/ibm-cpd-openpages-operator -n ${PROJECT_CPD_INST_OPERATORS}
  4. After the db-upgrade job finishes, recreate the operator from the Operator Hub in the OpenShift console in the operator namespace.
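
To confirm the state between steps, you can reuse the label selector from step 1; this check is an illustrative sketch, not part of the documented procedure.

    # List the db-upgrade pods; delete the operator only when a single
    # pod remains and the upgrade job is progressing.
    oc get po -lapp=openpages | grep "db-upgrade"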

watsonx.governance upgrade fails when Custom Resource is in reconciliation

Applies to: 5.1.0 - 5.1.1

Fixed in: 5.1.2

What's happening

If you upgrade watsonx.governance while the Custom Resource (CR) is in reconciliation for another operation, such as an install operation, the upgrade fails. You can upgrade only after the previous operation is completed.

How to fix it

To force an upgrade during an installation operation:

  1. Set the operator status to "Completed" with the following command:
    oc patch cm operation-configmap-${OPENPAGES_INSTANCE_NAME} -n <operand namespace> -p "{\"data\": {\"status\": \"Completed\"}}"
    
  2. If the reconciliation does not start, you can patch the CR with the following command:
    oc patch <service> <CR name> --type merge --patch='{"spec": {"myflag": "abcd"}}'
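
For illustration, with an assumed OpenPagesInstance resource named openpagesinstance-cr in an assumed operand namespace cpd-instance, the two commands might look as follows; substitute the resource type and names from your own deployment.

    # Hypothetical names: instance "openpagesinstance-cr", namespace "cpd-instance".
    oc patch cm operation-configmap-openpagesinstance-cr -n cpd-instance \
      -p '{"data": {"status": "Completed"}}'
    # Touch an arbitrary spec field to trigger reconciliation if it does not start.
    oc patch openpagesinstance openpagesinstance-cr -n cpd-instance \
      --type merge --patch='{"spec": {"myflag": "abcd"}}'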
    

Deployment creation failure for tracked prompt template that references a tuned or custom foundation model

Applies to: 5.1

What's happening

When you create a prompt template for a tuned or custom model, track the prompt template in an AI use case, and then promote the prompt template to a deployment space and create a deployment for it, you might see the following error:

An error occurred patching the prompt template reference.

This error can occur when the associated model changed or was deleted.

How to fix it

In the factsheet, click Automatically reconfigure. The reconfiguration refreshes the association so that you can proceed with creating a deployment and capturing the governance facts.

Adding users to an AI use case does not allow for search in groups

Applies to: 5.1

When you add users as members to an AI use case asset, you cannot search for users that belong to groups. You must add users individually.

Metrics computed for a prompt template on payload and feedback data are not synced completely to Governance console

Applies to: 5.1

If you are tracking a prompt template in an AI use case that is synced with the Governance console, and you evaluate the prompt template in a production space by using both feedback and payload data, the metrics that are computed on payload data are not synced to the Governance console. The following steps illustrate the problem.

  1. Track a prompt template asset to a use case synced with Governance console.
  2. Promote the prompt template to a production deployment space.
  3. Create a new deployment for the prompt template.
  4. Evaluate the prompt template, using both feedback and payload data. For example, evaluate the output for the Flesch readability score.
  5. Review the factsheet for the results of the evaluation. You will see metric values for both payload and feedback data.
  6. On the Governance console, the metrics show a value for the feedback data only. No result for the payload data displays.

Deleted space is not synced to the primary governance cluster

Applies to: 5.1.0

If you are managing governance activities across multiple clusters with a custom connection, deleting a space does not sync deletion metadata to the primary governance cluster. To make sure model data is deleted, you must delete all the models in the space before you delete the space.

Limitations

Attachments restricted by size

The file size of an attachment cannot exceed 210 MB.
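
A quick pre-upload check might look like the following sketch; the file name is a placeholder.

    # Warn if an attachment exceeds the 210 MB limit before uploading it.
    f='my_attachment.pdf'
    size_mb=$(du -m "$f" | cut -f1)
    if [ "$size_mb" -gt 210 ]; then
      echo "$f is ${size_mb} MB, which exceeds the 210 MB attachment limit."
    fi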

Short text responses generate lower answer relevance scores

When your LLM generates responses for retrieval-augmented generation (RAG) tasks with short or single-word answers to prompts, your prompt template evaluation might calculate lower answer relevance metric scores.

Scan files for malicious content

Files you upload are not automatically checked for malicious content. Before you upload a file, run a static scan against the file to ensure it does not contain malicious content.
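
For example, a static scan with the open source ClamAV scanner might look like the following sketch; clamscan is assumed to be installed, and any scanner that your organization approves works equally well.

    # Scan a file before upload; --infected prints only infected files.
    clamscan --infected test_data.csv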

Model information duplicated across multiple clusters with evaluation

If you import a model to a cluster that was configured for multiple clusters, the model entry is duplicated each time you evaluate it. This issue does not apply to models that are created on a primary governance cluster.

Resource limitation on number of service providers

You cannot create more than 150 service providers in watsonx.governance because of a resource limitation. If you attempt to create more than 150 service providers, the following error message appears:

 ""Failed to create service provider. Status code: 403, Error: {"errors":[{"code":"AIQCS0026E","message":"Quota exceeded on resource: service_provider","parameters":["service_provider"]}],"trace":"config-MmUwMjQ3M2MtM2QxNS00M2U5LTg5NzAtNzE3ZjEyNTcyZDgx"}

This error can also occur if you attempt to run evaluations in more than 150 projects and spaces in watsonx.governance. You can fix this error by removing the unused projects, spaces, or service providers.
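
To see how close you are to the limit, you can count your existing service providers. The following sketch assumes the Watson OpenScale v2 REST API path and response shape and that jq is installed; the URL, instance ID, and token are placeholders.

    # Count existing service providers (limit: 150).
    curl -s -H 'Authorization: Bearer <your auth token>' \
      '<your cpd environment url>/openscale/<service instance id>/v2/service_providers' \
      | jq '.service_providers | length'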

Evaluations tab is restricted to specific platforms for different asset types

When you add assets in watsonx.governance deployment spaces or projects, the Evaluations tab might not be available if you don't specify the correct platform for your asset type. You can view the Evaluations tab for generative AI assets only on the watsonx platform. You can view the tab for machine learning models only on the Cloud Pak for Data or watsonx platforms.

Parent topic: Service issues