Known limitations

Before you use IBM Cloud Pak® for Business Automation, make sure that you are aware of the known limitations.

For the most up-to-date information, see the support page Cloud Pak for Business Automation Known Limitations, which is regularly updated.

Important: You cannot alter the Cloud Pak for Business Automation container images in any way. The restriction applies to all the Cloud Pak images and their dependencies. The images are the licensed property of IBM and cannot be modified or adapted, which includes copying an image as a base image to build a child image.

The following sections provide the known limitations at the time of release.

Upgrade limitations

Table 1. Limitations of upgrading Cloud Pak for Business Automation
Limitation Description
During an upgrade from 21.0.3-IF035, 21.0.3-IF036, 21.0.3-IF037, 21.0.3-IF038, or 21.0.3-IF039 to 24.0.0-IF004, the Business Automation Studio pods fail to start if your Cloud Pak for Business Automation deployment includes any of the following capabilities:
  • Automation Decision Services
  • Automation Document Processing Development
  • Automation Workstream Services
  • Business Automation Workflow Runtime
  • Business Automation Workflow Authoring
Problem
When you upgrade from 21.0.3-IF035, 21.0.3-IF036, 21.0.3-IF037, 21.0.3-IF038, or 21.0.3-IF039 to 24.0.0-IF004, the script shows the following message:
There is a known issue with the following patterns: ADS, ADP Development, BAA, BAW Authoring, BAW Runtime in CP4BA ${cp4a_operator_csv_version} when upgrading to CP4BA ${CP4BA_CSV_VERSION}.  Please refer to the technote https://www.ibm.com/mysupport/aCIKe000000CkmPOAS to check and perform the necessary steps before you can upgrade to CP4BA ${CP4BA_CSV_VERSION}"
"Select 'Yes' to continue with the upgrade if you have checked and confirmed that the database schema is in the correct state.  (Yes/No) (Default: No):

Where ${cp4a_operator_csv_version} is the currently installed version and ${CP4BA_CSV_VERSION} is 24.0.0-IF004.

Workaround
For more information, see the technote at https://www.ibm.com/mysupport/aCIKe000000CkmPOAS.
During an upgrade from a cluster-scoped instance of Cloud Pak foundational services (21.0.3-IF031) to a namespace-scoped instance (24.0.0-IF001), the ibm-cp4a-wfps-operator-v21.3-ibm-cp4a-operator-catalog-openshift-marketplace subscription fails to upgrade to the new version.
Problem
When you run the ./cp4a-deployment.sh -m upgradeOperator -n <project_name> script, the wfps operator (ibm-cp4a-wfps-operator) upgrade fails with the following error:
constraints not satisfiable: no operators found with name ibm-cp4a-wfps-operator.v21.3.31 in channel v24.0 of package ibm-cp4a-wfps-operator in the catalog referenced by subscription ibm-cp4a-wfps-operator-v21.3-ibm-cp4a-operator-catalog-openshift-marketplace, subscription ibm-cp4a-wfps-operator-v21.3-ibm-cp4a-operator-catalog-openshift-marketplace exists.
Workaround
Delete the ibm-cp4a-wfps-operator and install the 24.0.0-IF001 wfps operator (ibm-cp4a-wfps-operator) directly, as sketched below.
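A minimal sketch of this workaround, using the subscription name from the error message above; verify the actual CSV name in your cluster before you delete anything:

oc get subscription -n <project_name> | grep wfps
oc delete subscription ibm-cp4a-wfps-operator-v21.3-ibm-cp4a-operator-catalog-openshift-marketplace -n <project_name>
# Remove the stale CSV so OLM can install the new operator cleanly (the version suffix is illustrative):
oc get csv -n <project_name> | grep wfps
oc delete csv ibm-cp4a-wfps-operator.v21.3.31 -n <project_name>
# Then install the 24.0.0-IF001 wfps operator, for example by rerunning the deployment script.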
During an upgrade from 22.0.2-IF006 to 24.0.0-IF001, the Identity Management (IM) service can get stuck when Cloud Pak foundational services is upgraded from cluster-scoped to namespace-scoped.
Problem
Check the status of the ibm-iam-request operand request by running the following command.
oc get operandrequest

If you see that ibm-iam-request failed, then use the steps in the workaround to resolve the issue.

Workaround
  1. Go to the <namespace> of the CP4BA deployment that you are upgrading.
    export NAMESPACE=<namespace>
    oc project $NAMESPACE
  2. Scale down the Business Teams Service (BTS) operator to 0.
    oc scale deployment ibm-bts-operator-controller-manager -n $NAMESPACE --replicas=0
  3. Delete the Postgres batch job.
    oc delete job create-postgres-license-config -n $NAMESPACE
  4. Delete the ODLM operand pod.
    oc delete pod -l name=operand-deployment-lifecycle-manager -n $NAMESPACE
  5. Check that the Postgres job is restarted.
    oc get pod -l job-name=create-postgres-license-config -n $NAMESPACE
  6. Check the status of the Cloud Pak foundational services databases.
    oc get cluster -n $NAMESPACE
  7. Scale the Business Teams Service (BTS) operator back up to 1.
    oc scale deployment ibm-bts-operator-controller-manager -n $NAMESPACE --replicas=1
During an upgrade from 22.0.2-IF006 to 24.0.0-IF001, the Zen Service might get stuck in a pending status.
Problem
The Zen PVCs can get stuck during an upgrade. You can get the status by running the following command.
oc get pvc | grep Pending
common-service-db-1                          Pending                                                                                       96m
common-service-db-1-wal                      Pending 
Workaround
Delete the EDB Postgres CR that is used by the Cloud Pak foundational services, and then restart the ODLM by running the following commands.
oc delete clusters.postgresql.k8s.enterprisedb.io common-service-db -n <cp4ba-deployment-namespace>
oc delete pod -l name=operand-deployment-lifecycle-manager -n <cp4ba-deployment-namespace>
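After the ODLM pod restarts, you can confirm that the Zen PVCs are bound and that the database cluster is re-created. A hedged check that reuses the commands from earlier in this table:

oc get pvc -n <cp4ba-deployment-namespace> | grep common-service-db
oc get cluster -n <cp4ba-deployment-namespace>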
IBM Automation Document Processing projects cannot be upgraded directly from 21.0.3 to 24.0.0 due to issues with the MongoDB schema and data migration. In 24.0.0, IBM Automation Document Processing uses a different way to store or retrieve the Git credentials that are used to access Git providers (GitHub, GitLab). Projects that are created in 21.0.3 cannot be opened in 24.0.0 because IBM Automation Document Processing fails to connect to the Git provider to retrieve project information. New projects that are created in 24.0.0 are not impacted because the Git credentials are generated when you create a new project in 24.0.0.
The Business Teams Service pods do not start after you upgrade from 21.0.3 to 24.0.0.
Problem
  • Sometimes, if the Business Teams Service version 3.24 or earlier is installed in several namespaces in parallel, the Business Teams Service pods do not start.
  • When you run the cp4a-deployment.sh script in the upgradeOperator mode to upgrade your Cloud Pak for Business Automation from cluster-scoped to namespace-scoped, the script scales down the ibm-bts-controller-manager operator in the ibm-common-services namespace.
Workaround
  • Install the latest interim fix for CP4BA 21.0.3. The Business Teams Service version must be 3.33.1 before you upgrade to 24.0.0.
  • If you have multiple instances of Cloud Pak for Business Automation that are not yet upgraded to 24.0.0 on the cluster, then run the following command to manually start the ibm-bts-controller-manager in the ibm-common-services namespace after you complete the current upgrade.
    oc scale --replicas=1 deployment ibm-bts-controller-manager -n ibm-common-services
When you upgrade to 24.0.0, BTS might fail with an intermittent issue.

Cause:

During the upgrade, sometimes the <meta-name>-pg-client-cert-secret is created by the CP4BA operator with an invalid character in the clientkey.pk8 key.

The following errors can be seen in the BTS pod logs:

Exception was raised during database initialization: 
java.sql.SQLException: Could not read SSL key file /opt/ibm/wlp/usr/shared/resources/security/db/clientkey.pk8. 
DSRA0010E: SQL State = 08006, Error Code = 0 
Caused by: java.io.EOFException: not enough content at java.base/sun.security.util.DerValue.<init>(Unknown Source) 
                                         at java.base/sun.security.util.DerValue.wrap(Unknown Source) 
                                         at java.base/sun.security.util.DerValue.wrap(Unknown Source) 
                                         at java.base/javax.crypto.EncryptedPrivateKeyInfo.<init>(Unknown Source) 
                                         at org.postgresql.ssl.LazyKeyManager.getPrivateKey(LazyKeyManager.java:236)... 
                                         48 more", "status": "waitingToBeUp"}, "status": "down"

Solution:

  1. Log in to the OpenShift CLI as the administrator.
  2. Get the secret.
    oc get secret | grep pg-client-cert-secret
  3. Delete the secret.
    oc delete secret icp4adeploy-pg-client-cert-secret
  4. You can wait for the operator to reconcile on its own (30-60 minutes) or force a restart by deleting the pod.

    oc get pod | grep cp4a-operator | grep -v catalog
    oc delete pod ibm-cp4a-operator-xxxxx-xxxxx
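Optionally, after the operator re-creates the secret, you can check that the new clientkey.pk8 is well-formed DER. A hedged verification sketch; the secret name matches the example above, and openssl must be available locally:

    oc get secret icp4adeploy-pg-client-cert-secret -o jsonpath='{.data.clientkey\.pk8}' | base64 -d > /tmp/clientkey.pk8
    openssl asn1parse -inform DER -in /tmp/clientkey.pk8 -noout && echo "clientkey.pk8 is well-formed DER"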

Installation limitations

Table 2. Limitations of installing Cloud Pak for Business Automation
Limitation Description
When you install 24.0.0, BTS might fail with an intermittent issue.

Cause:

During the installation, sometimes the <meta-name>-pg-client-cert-secret is created by the CP4BA operator with an invalid character in the clientkey.pk8 key.

The following errors can be seen in the BTS pod logs:

Exception was raised during database initialization: 
java.sql.SQLException: Could not read SSL key file /opt/ibm/wlp/usr/shared/resources/security/db/clientkey.pk8. 
DSRA0010E: SQL State = 08006, Error Code = 0 
Caused by: java.io.EOFException: not enough content at java.base/sun.security.util.DerValue.<init>(Unknown Source) 
                                         at java.base/sun.security.util.DerValue.wrap(Unknown Source) 
                                         at java.base/sun.security.util.DerValue.wrap(Unknown Source) 
                                         at java.base/javax.crypto.EncryptedPrivateKeyInfo.<init>(Unknown Source) 
                                         at org.postgresql.ssl.LazyKeyManager.getPrivateKey(LazyKeyManager.java:236)... 
                                         48 more", "status": "waitingToBeUp"}, "status": "down"

Solution:

  1. Log in to the OpenShift CLI as the administrator.
  2. Get the secret.
    oc get secret | grep pg-client-cert-secret
  3. Delete the secret.
    oc delete secret icp4adeploy-pg-client-cert-secret
  4. You can wait for the operator to reconcile on its own (30-60 minutes) or force a restart by deleting the pod.

    oc get pod | grep cp4a-operator | grep -v catalog
    oc delete pod ibm-cp4a-operator-xxxxx-xxxxx
The operator shows an OOMKilled or CrashLoopBackOff error and cannot start. Usually, the operator error results from a resource limitation, and the pod status can show OOMKilled. But for some operators, the OOMKilled status can change to CrashLoopBackOff after a pod restarts. If you see the CrashLoopBackOff error, follow the information under "Operator pod in OOMKilled status" in the Troubleshooting topic to patch the operator CSV and give it more resources.
You cannot run the installation scripts in silent mode and separate CP4BA operators and CP4BA deployments in different namespaces. An environment variable does not exist to define two separate project names. For more information, see Environment variables for silent mode installation.
FIPS is not supported on clusters with a Linux on Z (s390x) architecture. The enablement of Cloud Pak for Business Automation containers for FIPS is only supported on a Red Hat OpenShift or ROKS cluster based on the amd64_x86 platform.
When OpenShift Container Platform is configured to allocate huge pages to your CP4BA deployments, the Zen Service from Cloud Pak foundational services does not use the huge pages configuration. Identity Management (IM) and its dependency Cloud Native PostgreSQL support huge pages. The Zen Service ignores the huge pages configuration and continues to use the standard memory allocation provided by the Red Hat OpenShift cluster. For more information, see Huge pages settings.
You cannot separate CP4BA operators from CP4BA deployments when Cloud Pak foundational services is already installed in the cluster. Not supported.
You cannot choose to separate CP4BA operators and CP4BA production deployments by using the OpenShift Container Platform console. Not supported.
Oracle 21c does not support Transport Layer Security (TLS) 1.3. If you plan to use TLS 1.3, choose a database other than Oracle 21c.
You cannot use the GitLab container registry as the private image registry to store all Cloud Pak for Business Automation images in an air gap environment. The GitLab container registry supports a naming convention that is only two levels deep for container images. Cloud Pak for Business Automation images use a path structure that is more than two levels deep, which causes the mirroring process to fail with authentication or permission errors.

Backup and restore limitations

Table 3. Limitations of backing up and restoring Cloud Pak for Business Automation
Limitation Description
The Zen Service prevents you from restoring Cloud Pak foundational services from a backup. The restore procedure generates the following type of errors:
error getting resource from lister for rolebindings.authorization.openshift.io, zen5-backup-rolebinding: rolebindings.authorization.openshift.io "zen5-backup-rolebinding" not found
error getting resource from lister for roles.authorization.openshift.io, zen5-backup-role: roles.authorization.openshift.io "zen5-backup-role" not found

The issue is already fixed in Cloud Pak foundational services 4.6.13, which is planned to be included in the IF006 interim fix.

The Zen Service restore script from Cloud Pak foundational services has issues, and might exit and restart without ever completing. The issue is known and the fix is planned to be released in an upcoming version of Cloud Pak foundational services, which Cloud Pak for Business Automation can then include in a future interim fix.

Image tags cannot start with a zero

If you use image tags in the custom resources to identify different versions of Cloud Pak for Business Automation container images, do not start the tag with a "0" (zero). The operators remove the "0" from the tag, and as a result the image cannot be pulled. Image tags can include lowercase and uppercase letters, digits, underscores ( _ ), periods ( . ), and dashes ( - ). For more information, see Digests versus image tags.
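For example, a quick shell check of a candidate tag before you set it in the custom resource; the regular expression mirrors the rules stated above and is only an illustration:

TAG="24.0.0-IF004"
# First character must not be "0", a period, or a dash; the rest may use the allowed set:
[[ "$TAG" =~ ^[A-Za-z_1-9][A-Za-z0-9_.-]*$ ]] && echo "tag OK" || echo "invalid tag (check the leading character)"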

Okta and Microsoft Entra ID (formerly Azure Active Directory or Azure AD) integration

If you are using IBM Automation Document Processing (ADP) or IBM Business Automation Workflow, be sure to review the following known limitations.

Table 4. Limitations when not using LDAP for SCIM
Capability Limitation
Automation Document Processing ADP is not supported.
Business Automation Workflow
  • Case Management features and applications are not supported.
  • Processes cannot be triggered if documents are added from the Administration Console for Content Platform Engine (ACCE). However, processes can be started when documents are added from Navigator or IBM Business Automation Workflow desktop.
  • External Services for REST Services and Web Services cannot be created.
  • External Workflow cannot be created.
  • Automation Services cannot consume applications that are published through OpenAPI.

Multi-zone region (MZR) storage support on Red Hat OpenShift Kubernetes Service (ROKS)

Table 5. Limitations of Portworx on ROKS
Limitation Description
When one zone is unavailable, it might take up to a minute to recover. If all worker nodes in a single zone are shut down or unavailable, it can take up to a minute to access the Cloud Pak applications and services. For example, access to ACCE from CPE can take a minute to respond.

Connection issues of Identity Management (IM) to LDAPs

Table 6. Limitations of Identity Management (IM) foundational service
Limitation Description
IM does not update LDAP certificates automatically. When the CP4BA operator configures an LDAP connection for IM, the certificates are added to the platform-auth-ldaps-ca-cert secret.
Problem

If the LDAP certificates are changed or expire, IM cannot refresh them automatically, which can result in SSL errors in the platform-auth-service-** pod.

CWPKI0823E: SSL HANDSHAKE FAILURE: The signer might need to be added to local trust store 
[/opt/ibm/wlp/output/defaultServer/resources/security/key.jks], located in SSL configuration alias [defaultSSLConfig]. 
The extended error message from the SSL handshake exception is: [PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target].
Workaround

Manually import the new or changed LDAP certificates to IM. For more information, see Configuring LDAP over SSL.
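To check whether the certificates that IM currently holds have expired, you can inspect the secret. A hedged diagnostic sketch; it assumes the CA is stored under the certificate key of the secret, so verify the key name with oc describe secret first:

oc get secret platform-auth-ldaps-ca-cert -n <foundational-services-namespace> -o jsonpath='{.data.certificate}' | base64 -d | openssl x509 -noout -subject -enddate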

LDAP failover

LDAP failover is not supported. It is not possible to configure multiple LDAP servers for failover in the custom resource (CR) file template.

LDAP configuration cannot use IPv6 addresses

If the LDAP server is using IPv6, use the hostname of the LDAP server in the configuration of a Cloud Pak for Business Automation deployment instead of an IPv6 address. An error from the Identity Management service reports an "invalid ldap url" when an IPv6 address is used, even when the URL is valid.

IBM Automation Document Processing

Table 7. Limitations of Document Processing
Limitation Description
From 24.0.0-IF006, Document Processing does not support the IBM Security Directory Server LDAP type.
Problem:
If you configure an IBM Security Directory Server (SDS/TDS) LDAP with a CP4BA deployment, users cannot be configured properly to have sufficient permissions for Document Processing functionality.
Impact:
If your LDAP type is IBM Security Directory Server, you cannot use Document Processing.
Action:
The issue is likely to be resolved in an upcoming interim fix, so regularly review the What's new in 24.0.0 topic for this update.
A warning message might appear when uploading DOC or DOCX files.
Problem:
When you upload a DOC or DOCX file in ADP, the following warning message might appear:

javaldx: Could not find a Java™ Runtime Environment!

Warning: failed to read the path from javaldx

Impact:
This warning does not affect the function of the system or the processing of the uploaded document. Users can safely ignore this warning because it does not impact the upload or document processing workflow. The document is still processed as expected without any issues.
Action:
No additional action is required. If you experience any unexpected behavior beyond this warning, contact the support team for further assistance.
Issues with versioning when you import a project in Document Processing Designer with the merge or overwrite options.

Importing a project with the overwrite option in Document Processing Designer only supports importing projects from the previous version.

If you exported projects from older releases, you must periodically update the archive files with the latest release:
  1. Import your archive from an older release into the latest release with the overwrite option.
  2. Re-export the project as a new archive file. For example, if you import an archive file into 24.0.0, that archive file must be exported from 23.0.2. If you initially exported from 23.0.1, then you must first import it to 23.0.2 with the overwrite option, then export it, then import that 23.0.2 archive file into 24.0.0.
It is recommended not to merge projects from different releases. When you import a project with the merge option across releases, you must first migrate the older release archive file into the latest release:
  1. From the latest release, import your older version archive file with the overwrite option. All document classes are imported and your project contains all new system types from the latest release.
  2. After you successfully import, export the project again.
  3. Import the archive file that you created into your latest release project.

When you import a project with the merge option, the current project is merged with the project from the exported archive file. Only non-conflicting document types are imported into the project, and if there are conflicting document types, they are skipped during import. If you need to import a document type that is in conflict, you must first delete that document type from the project before you import the archive file.

After you import a project, it is recommended to retrain both classification and extraction models to avoid issues with the change of document classes in the merged project, and to make sure that the models are trained and built with the latest features.
For more information, see Import projects in ADP Designer.
Need an egress to use webhooks with external custom applications. If you have an external custom application that uses the webhook feature, you must set up a custom egress for Document Processing engine so that notifications can be sent outside of the Red Hat OpenShift Container Platform cluster where Document Processing engine is deployed. For more information, see Creating an external egress for Document Processing engine when an external application uses webhooks.
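The supported procedure is in the linked topic. As an illustration only, a custom egress is typically expressed as a Kubernetes NetworkPolicy; in this sketch the pod label is a placeholder that you must replace with the label of your Document Processing engine pods. Note that selecting pods with an egress policy denies all egress that is not explicitly allowed, so also include any cluster-internal traffic that the pods need:

oc apply -n <cp4ba-namespace> -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: docproc-webhook-egress
spec:
  podSelector:
    matchLabels:
      app: <document-processing-engine-label>
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
EOF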
In a starter deployment, simultaneous uploading of multiple large batches might fail with an Uploading Error status.
Workaround

You must view the batch contents, remove the failed documents, and upload these documents again.

In some situations, the documents continue to be processed by the server and might transition out of the Uploading Error status. When a batch has the Uploading Error status, but none of the documents are in error, you can recover the batch by either adding or removing a document in the batch. The batch status is updated and processing can resume.

Accessibility of the Verify client graphical user interface.

A user who uses the Firefox browser to reach the Verify client user interface and then tabs into it to access the various zones cannot tab out again.

You reach the Verify client user interface in different ways, depending on your application. For example, for the single document application, you tab into the content list on the start page, select a document from the list by pressing the Tab or Arrow keys, tab to the context menu icon (the three dots), select the icon by pressing the space bar or the Enter key, and finally press the Arrow keys to select Review document.

Data standardization: uniqueness. You cannot reuse an existing data definition for composite fields, nor create a data definition with a name that is already used for another data definition.

When standardizing your data, you can associate a data definition with a field or a document type. These data definitions are used when you deploy the project to a runtime environment. For simple fields, you can either create a data definition or reuse an existing one. For composite fields, you cannot reuse an existing data definition; you can only create one.

If you attempt to create a data definition with the same name as an existing one, you get a uniqueness error.

Deleting and re-creating a project
If you want to delete and re-create a Document Processing project to start over, you might encounter errors after re-creating the project. This occurs because the re-created project is out of sync with the Git repository.
Workaround

To avoid errors, follow these steps to delete and re-create a project:

  1. In Business Automation Studio, delete your project.
  2. Go to the remote Git server that is connected to your Document Processing Designer and delete the project repository for the project that you deleted in step 1.
  3. In Business Automation Studio, create your project again with the same name as the previous project.

For more information, see Saving an ADP Project fails with status code 500 service error.

Fit to height option does not fit to height properly. In a single or batch document processing application, the Fit to height option does not fit to height properly. When you view a document in a single or batch document processing application, if you rotate this document clockwise or counterclockwise and select the Fit to height option, the size is not changed. The limitation applies when fixing classification or data extraction issues, and in portrait view in the modal viewer dialog.
The ViewONE service is not load-balanced. Due to a limitation when editing the document to fix classification issues in batch document processing applications, the current icp4adeploy-viewone-svc service session affinity is configured as ClientIP and the session has to be sticky.
Postal mail address and Address information field types
  • Extraction of multiple addresses within a single block is not supported. You cannot define your own composite address field types because you cannot define multiple postal mail address subfields for a single address block.
  • You cannot upgrade previous address field types (such as address block or US mail address) to the new Postal mail address and Address information field types. Address field types that you defined in earlier versions remain the same.
    Workaround
    If you want to use the new address functionality, you must deploy a new key class based on the new address field types (Postal mail address and Address information).
Microsoft Word documents
  • In a FIPS-enabled deployment, you cannot upload or process Microsoft Word documents that have the .doc or .docx format.
  • If you observe that the viewer hangs when a multipage Microsoft Word document is loaded on the classification or data extraction issue page, check the log in your browser's developer tools console. An error message that is similar to the following one means that the connection between the viewone service and the Application Engine has expired.

    Strict-Transport-Security: The connection to the site is untrustworthy, so the specified header was ignored. v1files x1~1> JS Thread 6 Jun 2022, 08:41:52, UTC-7 (000005615/000002156): finishLoading viewone_ajax-0.js:4856:26 java.lang.ArrayIndexOutOfBoundsException: Invalid page number -1 viewone_ajax-0.js:4856:26

    Workaround
    Refresh the page and reload the document.
  • Data highlighting and Click N Key coordinates for DOC and DOCX documents might not capture highlighted text correctly when fixing data extraction issues in the runtime application.
    Workaround
    Use the keyboard to correct any values that encounter this issue.
Support of NVIDIA CUDA drivers 11.2. To use a FIPS-compliant TensorFlow version, the NVIDIA CUDA drivers 11.2 are required. However, IBM Cloud® Public (ROKS) GPU does not support CUDA 11.2 because it uses Red Hat® Enterprise Linux® (RHEL) 7. The current version of the NVIDIA Operator on RHEL 7 is 1.5.2, and it cannot be upgraded to the latest version (1.6.x), which provides the CUDA 11.2 drivers, because version 1.6.x does not support RHEL 7. The NVIDIA Operator that is installed from the Operator Hub does not run on RHEL 7 GPU Bare Metal Servers, as you only have the option to deploy 1.6.x.
Data extraction from tables is not fully supported.
  • While data extraction from simple tables is fully supported, some limitations exist for complex tables, for example, when you extract data from the summary section of tables, or when watermarks, text, or other interfering elements exist in your documents. For the full list of supported tables, and examples of the types of tables that have limitations, see Best practices on table extraction.
Some checkboxes are not detected. Some types of checkboxes are not detected, for example if they are too small or improperly shaped. For more information and examples of non-detected checkboxes, see Limitations for checkbox detection.
Problems accessing the application configurator after the initial configuration
Problem

When you create an application in Business Automation Studio, an application configurator displays where you enter configuration parameters for the application.

However, after the application is created, you cannot access the same configurator. As a result, you cannot change the configuration settings the same way.

Workaround
To reconfigure your settings after the application is created:
  1. In Business Automation Studio, open your application.
  2. Select Application project settings from the top left drop-down list.
  3. Click the Environment variables tab.
  4. Edit the following environment variables where applicable:
    • evObjectStoreName (the CPE current object store)
    • evDbaProjectId (the CPE's currently deployed project ID)
    • evRootFolder (the CPE folder, only for the Document Processing template)
  5. Click Finish editing to save changes. Click Preview.
  6. Export and import your application to the Application Engine.
SystemT Extractor accuracy. The SystemT extractors that are included with Automation Document Processing are samples that demonstrate the capabilities of this feature. They are not tuned to any specific document format and might not provide high recognition rates for some document types. Customers who want to use SystemT extractors in production should build their own extractors, which can be better trained for the types of documents that they process.

IBM Automation Decision Services

For more information about Automation Decision Services limitations, see Known limitations.

IBM Operational Decision Manager

For information about Operational Decision Manager limitations, see Operational Decision Manager Known Limitations.

IBM Automation Workstream Services

For more information, see Workstream limitations.

IBM Business Automation Insights

Table 8. Limitations to IBM Business Automation Insights
Limitation Description
Alerts (Business Performance Center)
You cannot create an alert for a period KPI if it contains a group. If you want to create an alert for a period KPI, go to the Monitoring tab and remove the Group by keyword. Then, go to the Thresholds tab to create one or more alerts.
Aggregations (Business Performance Center)
Because Business Performance Center uses OpenSearch as its database, approximation problems can occur with aggregations of numbers greater than 2^53 (that is, about 9 * 10^15). See the Limitations section of the Aggregations page of the OpenSearch documentation.
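For example, this quick check illustrates the approximation: integers above 2^53 collapse when they are stored as double-precision floats, which is the representation that aggregations typically use:

python3 -c "print(float(2**53) == float(2**53 + 1))"   # prints True: 2^53 and 2^53+1 become the same float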
Fixed date time range
If, in a fixed date time range, you modify only the date or time, your changes are not saved.
Data tables
When any chart is displayed as a data table, only the first 1000 rows are shown.
Data permissions
You can set up data permissions by monitoring source or by team. If you encounter an error when you set a permission by source, try setting the same permission by team.
Chart window title
If the chart window is too small to display the full title, an ellipsis (…) replaces the end of the title. To view the full title, expand the chart window or click the Fullscreen button.
No Business Automation Insights support for IBM Automation Document Processing The integration between IBM Automation Document Processing (ADP) and Business Automation Insights is not supported. When you deploy or configure the IBM Cloud Pak for Business Automation platform, select the Business Automation Insights component together with patterns that are supported by Business Automation Insights, such as workflow (Business Automation Workflow) or decisions (Operational Decision Manager), not just with document-processing (IBM Automation Document Processing).
Flink jobs might fail to resume after a crash. After a Flink job failure or a machine restart, the Flink cluster might not be able to restart the Flink job automatically. For a successful recovery, restart Business Automation Insights. For instructions, see Troubleshooting Flink jobs.
OpenSearch indices

Defining a high number of fields in an OpenSearch index might lead to a so-called mappings explosion, which can cause out-of-memory errors and situations that are difficult to recover from. The maximum number of fields in OpenSearch indices that are created by IBM Business Automation Insights is set to 1000. Field and object mappings, and field aliases, count toward this limit. Ensure that the various documents that are stored in OpenSearch indices do not lead to reaching this limit.
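To see how close an index is to the limit, you can query OpenSearch directly. A hedged diagnostic sketch; the host, credentials, and index name are placeholders, and jq is assumed to be installed:

# Show the configured total-fields limit for the index:
curl -sk -u <user>:<password> "https://<opensearch-host>:9200/<index>/_settings?include_defaults=true&filter_path=**.total_fields.limit"
# Approximate the number of mapped fields (objects that declare a type):
curl -sk -u <user>:<password> "https://<opensearch-host>:9200/<index>/_mapping" | jq '[.. | objects | select(has("type"))] | length'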

Event formats are documented in Reference for event emission.

For Operational Decision Manager, you can configure event processing to avoid the risk of mappings explosion. See Operational Decision Manager event processing walkthrough.

In the BPEL Tasks dashboard, the User tasks currently not completed widget does not display any results. The search that is used by the widget does not return any results because it uses an incorrect filter for the task state.

To avoid this issue, edit the filter in the User tasks currently waiting to be processed search. Set the state filter to accept one of the following values: TASK_CREATED, TASK_STARTED, TASK_CLAIM_CANCELLED, TASK_CLAIMED.

Historical Data Playback REST API The API plays back data only from closed processes (completed or terminated). Active processes are not handled.
In Case dashboards, elapsed time calculations do not include late events in the Average elapsed time of completed activities and Average elapsed time of completed cases widgets. Events that are emitted after a case or activity completes are ignored. However, by setting the bai_configuration.icm.process_events_after_completion parameter to true, you can configure the Case Flink job to process the events that are generated on a case after the case is closed. The start and end times remain unchanged. Therefore, the duration is the same, but the properties are updated based on the events that were generated after completion.
Business Automation Insights deployment fails when Apache Kafka is installed in "All namespaces" and Business Automation Insights is deployed in a dedicated namespace. Kafka monitors Custom Resources (CRs) across all namespaces, including the Business Automation Insights namespace. This causes conflicts when resources such as KafkaUsers or KafkaTopics share the same name across deployments. API requests that rely on short names become ambiguous or fail when both instances are visible from the same namespace. To solve this issue, deploy Apache Kafka in a namespace-scoped mode that does not include the Business Automation Insights namespace.
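If your Kafka operator is installed through the Operator Lifecycle Manager, one way to scope it to selected namespaces is an OperatorGroup with targetNamespaces, as in this hedged sketch; the names are placeholders, and the exact procedure depends on your Kafka distribution:

oc apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kafka-operator-group
  namespace: <kafka-namespace>
spec:
  targetNamespaces:
    - <kafka-namespace>
EOF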

IBM Business Automation Navigator

Table 9. Limitations of IBM Business Automation Navigator
Limitation Description
Resiliency issues can cause lapses in the availability of Workplace after a few weeks. This issue might be caused by problems with the Content Platform Engine (cpe) pod. Use the following mitigation steps:
  • Ensure that cpe is deployed in a highly available setup, with at least two replicas.
  • Monitor the cpe pod and restart if issues occur.
Task Manager is not supported when configuring with System for Cross-domain Identity Management (SCIM). Task Manager requires an LDAP registry for user authorization. It is not supported in a deployment that is configured with SCIM.
After you update the schema name in the CR YAML, the schema name is not updated in the system.properties file and the Business Automation Navigator pod uses the old schema name. You need to manually delete the system.properties file and restart the Business Automation Navigator pod so that it uses the new schema name.
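A hedged sketch of this workaround, following the pattern of the other workarounds in this topic; the pod name is a placeholder, and the location of system.properties varies by configuration, so locate the file before you delete it:

oc get pods -n <cp4ba-namespace> | grep navigator
# Locate the file inside the pod, remove it, then restart the pod:
oc exec <navigator-pod-name> -n <cp4ba-namespace> -- find /opt -name system.properties 2>/dev/null
oc exec <navigator-pod-name> -n <cp4ba-namespace> -- rm <path-to>/system.properties
oc delete pod <navigator-pod-name> -n <cp4ba-namespace>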

IBM Business Automation Application and IBM Business Automation Studio

Table 10. Limitations of Business Automation Application and Business Automation Studio
Limitation Description
Installing the Application pattern with an Oracle database fails if you use Oracle Database 23c Free – Developer Release with Transport Layer Security (TLS) 1.3. The Application Engine and the Application Engine playback server do not support Oracle Database 23c Free – Developer Release. The following exception is thrown:
ORA-28860: Fatal SSL error
Process applications from Business Automation Workflow do not appear in Application Designer. Sometimes the app resources of the Workflow server do not appear in Studio when you deploy Workflow server instances, Studio, and Resource Registry in the same custom resource YAML file.

If you deployed Business Automation Studio with the Business Automation Workflow server in the same custom resource YAML file, and you do not see process applications from Business Automation Workflow server in Business Automation Studio, restart the Business Automation Workflow server pod.

The Business Automation Workflow toolkit and configurators might not get imported properly. When you install both Business Automation Workflow on containers and Business Automation Studio together, the Business Automation Workflow toolkit and configurators might not get imported properly. If you don't see the Workflow Services toolkit, the Start Process Configurator, or the Call Service Configurator, manually import the .twx files by downloading them from the Contributions table inside the Resource Registry section of the Administration page of Business Automation Studio.
Kubernetes known issue #68211: modified subpath configmap mount fails when container restarts. Business Automation Studio related pods go into a CrashLoopBackOff state during the restart of the docker service on a worker node.

If you use the kubectl get pods command to check the pods when a pod is in the CrashLoopBackOff state, you get the following error message:

Warning Failed 3m kubelet, <IP_ADDRESS> Error: failed to start container: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \" rootfs_linux.go:58: mounting

To recover a pod, delete it in the OpenShift® console and create a new pod.

To use Application Engine with Db2® for High Availability and Disaster Recovery (HADR), you must have an alternative server available when Application Engine starts.

Application Engine depends on the automatic client reroute (ACR) of the Db2 HADR server to fail over to a standby database server. You must have a successful initial connection to that server when Application Engine starts.

IBM Resource Registry can get out of sync.

If you have more than one etcd server and the data gets out of sync between the servers, you must scale to one node and then scale back to multiple nodes to synchronize Resource Registry.
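A minimal sketch of that recovery, assuming the deployment custom resource is named icp4adeploy and exposes the replica count under resource_registry_configuration.replica_count (verify the parameter path in your CR before patching):

oc patch icp4acluster icp4adeploy --type merge -p '{"spec":{"resource_registry_configuration":{"replica_count":1}}}'
# Wait for the single Resource Registry pod to become Ready, then scale back, for example to 3:
oc patch icp4acluster icp4adeploy --type merge -p '{"spec":{"resource_registry_configuration":{"replica_count":3}}}'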

After you create the Resource Registry, you must keep the replica size.

Because of the design of etcd, changing the replica size can cause data loss. If you must set the replica size, set it to an odd number. If you reduce the pod size, the pods are deleted one by one to prevent data loss and the possibility that the cluster gets out of sync.
  • If you update the Resource Registry admin secret to change the username or password, first delete the instance_name-dba-rr-random_value pods so that Resource Registry picks up the updates. Alternatively, you can apply the update manually with etcd commands.
  • If you update the Resource Registry configurations in the icp4acluster custom resource instance, the update might not affect the Resource Registry pods directly. It affects only newly created pods, such as when you increase the number of replicas.

After you deploy Business Automation Studio or Application Engine, you cannot change the Business Automation Studio or Application Engine admin user.

Make sure that you set the admin user to a sustainable username at installation time.

Because of a Node.js server limitation, Application Engine trusts only root CAs.

If an external service is used and signed with another root CA, you must add that root CA as trusted instead of the service certificate. To identify the root CA that a service presents, see the sketch after this list.
  • The certificate can be self-signed, or signed by a well-known root CA.
  • If you are using a depth zero self-signed certificate, it must be listed as a trusted certificate.
  • If you are using a certificate that is signed by a self-signed root CA, the self-signed CA must be in the trusted list. Using a leaf certificate in the trusted list is not supported.
  • If you are adding the root CA of two or more external services to the Application Engine trust list, you can't use the same common name for those root CAs.
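A diagnostic sketch for identifying the root CA that an external service presents; the host and port are placeholders:

# Show the full chain; the issuer of the last certificate identifies the root CA:
openssl s_client -connect <external-service-host>:443 -showcerts </dev/null
# Show just the leaf certificate's subject and issuer:
openssl s_client -connect <external-service-host>:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer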

IBM FileNet® Content Manager

Table 11. Limitations of IBM FileNet Content Manager
Limitation Description
A smaller number of indexing batches than configured leads to a noticeable degradation in the overall indexing throughput rate. Because obsolete Virtual Servers in the Global Configuration Database (GCD) are not automatically cleaned up, that Virtual Server count can be higher than the actual number of CE instances with dispatching enabled. That inflated number results in a smaller number of concurrent batches per CSS server, negatively affecting indexing performance.

For more information and to resolve the issue, see Content Platform Engine uneven CBR indexing workload and indexing degradation.

Downloading just Jace.jar from the Client API Download area of ACCE fails; the text "3.5.0.0 (20211028_0742) x86_64" is returned instead. When an application like ACCE is accessed through Zen, the application and certain operator-controlled elements in Kubernetes need additional logic to support embedded or "self-referential" URLs. The single file download in the Client API Download area of ACCE uses self-referential URLs, and the additional logic is missing. To avoid the self-referential URLs, download the whole Client API package that contains the desired file instead of an individual file, and then extract the individual file from the package.
Queries to retrieve group hierarchies using the SCIM Directory Provider might fail if one of the groups in the hierarchy contains a space or some other character not valid in an HTTP URL. The problem can occur when performing searches of users or groups and the search tries to retrieve the groups a group belongs to. If one of these groups in this chain contains a space or other illegal HTTP URL character, then the search may fail.
LDAP to SCIM attribute mapping may not be correct. The default LDAP to SCIM attribute mapping used by IM may not be correct. In particular, TDS/SDS LDAP may have incorrect mappings for the group attributes for objectClass and members. To learn more about how to review this mapping and change it, see Updating SCIM LDAP attributes mapping.
When using the SCIM Directory Provider to perform queries for a user or group with no search attribute, all users or groups are returned rather than none. Queries without a search pattern are treated as a wildcard rather than as a restriction that returns nothing.

IBM Business Automation Workflow and IBM Workflow Process Service

For IBM Business Automation Workflow, see IBM Business Automation Workflow known limitations.
For Workplace, see Workplace limitations.