Known limitations
Before you use IBM Cloud Pak® for Business Automation, make sure that you are aware of the known limitations.
For the most up-to-date information, see the support page Cloud Pak for Business Automation Known Limitations, which is regularly updated.
The following sections provide the known limitations at the time of release.
- Upgrade limitations
- Installation limitations
- Backup and restore limitations
- Image tags cannot start with a zero
- Okta and Microsoft Entra ID (formerly Azure Active Directory or Azure AD) integration
- Multi-zone region (MZR) storage support on Red Hat OpenShift Kubernetes Service (ROKS)
- Connection issues of Identity Management (IM) to LDAPs
- LDAP failover
- LDAP configuration cannot use IPv6 addresses
- IBM Automation Document Processing
- IBM Automation Decision Services
- IBM Operational Decision Manager
- IBM Automation Workstream Services
- IBM Business Automation Insights
- IBM Business Automation Navigator
- IBM Business Automation Application and IBM Business Automation Studio
- IBM FileNet Content Manager
- IBM Business Automation Workflow and IBM Workflow Process Service
Upgrade limitations
| Limitation | Description |
|---|---|
| During an upgrade from 21.0.3-IF035, 21.0.3-IF036, 21.0.3-IF037, 21.0.3-IF038, or 21.0.3-IF039 to 24.0.0-IF004, the Business Automation Studio pods fail to start if your Cloud Pak for Business Automation deployment includes certain capabilities. | |
| During an upgrade from a 21.0.3-IF031 cluster-scoped instance of Cloud Pak foundational services to a 24.0.0-IF001 namespace-scoped instance of Cloud Pak foundational services, the ibm-cp4a-wfps-operator-v21.3-ibm-cp4a-operator-catalog-openshift-marketplace subscription fails to upgrade to the new version. | |
| During an upgrade from 22.0.2-IF006 to 24.0.0-IF001, the Identity Management (IM) service can get stuck when the upgrade of Cloud Pak foundational services moves from cluster-scoped to namespace-scoped. | |
| During an upgrade from 22.0.2-IF006 to 24.0.0-IF001, the Zen Service might get stuck in a pending status. | |
| IBM Automation Document Processing projects cannot be upgraded directly from 21.0.3 to 24.0.0 due to issues with the MongoDB schema and data migration. | In 24.0.0, IBM Automation Document Processing uses a different way to store and retrieve the Git credentials that are used to access Git providers (GitHub, GitLab). Projects that are created in 21.0.3 cannot be opened in 24.0.0 because IBM Automation Document Processing fails to connect to the Git provider to retrieve project information. New projects that are created in 24.0.0 are not impacted because the Git credentials are generated when you create a new project in 24.0.0. |
| The Business Teams Service pods do not start after you upgrade from 21.0.3 to 24.0.0. | After you upgrade from 21.0.3 to 24.0.0, the Business Teams Service pods do not start. |
| When you upgrade to 24.0.0, BTS might fail with an intermittent issue. | Cause: During the upgrade, the following errors can sometimes be seen in the BTS pod logs: Exception was raised during database initialization: java.sql.SQLException: Could not read SSL key file /opt/ibm/wlp/usr/shared/resources/security/db/clientkey.pk8. DSRA0010E: SQL State = 08006, Error Code = 0 Caused by: java.io.EOFException: not enough content at java.base/sun.security.util.DerValue.<init>(Unknown Source) at java.base/sun.security.util.DerValue.wrap(Unknown Source) at java.base/sun.security.util.DerValue.wrap(Unknown Source) at java.base/javax.crypto.EncryptedPrivateKeyInfo.<init>(Unknown Source) at org.postgresql.ssl.LazyKeyManager.getPrivateKey(LazyKeyManager.java:236)... 48 more", "status": "waitingToBeUp"}, "status": "down" |
Installation limitations
| Limitation | Description |
|---|---|
| When you install 24.0.0, BTS might fail with an intermittent issue. | Cause: During the installation, the following errors can sometimes be seen in the BTS pod logs: Exception was raised during database initialization: java.sql.SQLException: Could not read SSL key file /opt/ibm/wlp/usr/shared/resources/security/db/clientkey.pk8. DSRA0010E: SQL State = 08006, Error Code = 0 Caused by: java.io.EOFException: not enough content at java.base/sun.security.util.DerValue.<init>(Unknown Source) at java.base/sun.security.util.DerValue.wrap(Unknown Source) at java.base/sun.security.util.DerValue.wrap(Unknown Source) at java.base/javax.crypto.EncryptedPrivateKeyInfo.<init>(Unknown Source) at org.postgresql.ssl.LazyKeyManager.getPrivateKey(LazyKeyManager.java:236)... 48 more", "status": "waitingToBeUp"}, "status": "down" |
| An operator shows an OOMKilled or CrashLoopBackOff error and cannot be up and running. | Usually, the operator error results from a resource limitation, and the pod status can show OOMKilled. For some operators, the OOMKilled status can change to CrashLoopBackOff after a pod restarts. If you see the CrashLoopBackOff error, follow the information under "Operator pod in OOMKilled status" in the Troubleshooting topic to patch the operator CSV and give it more resources (a patch sketch follows this table). |
| You cannot run the installation scripts in silent mode and separate CP4BA operators and CP4BA deployments in different namespaces. | An environment variable does not exist to define two separate project names. For more information, see Environment variables for silent mode installation. |
| FIPS does not support clusters with a Linux on Z (s390x) architecture. | The enablement of Cloud Pak for Business Automation containers for FIPS is only supported on a Red Hat OpenShift or ROKS cluster based on the amd64_x86 platform. |
| When OpenShift Container Platform is configured to allocate huge pages to your CP4BA deployments, the Zen Service from Cloud Pak foundational services does not use the huge pages configuration. | Identity Management (IM) and its dependency Cloud Native PostgreSQL support huge pages. The Zen Service ignores the huge pages configuration and continues to use the standard memory allocation provided by the Red Hat OpenShift cluster. For more information, see Huge pages settings. |
| You cannot separate CP4BA operators from CP4BA deployments when Cloud Pak foundational services is already installed in the cluster. | Not supported. |
| You cannot choose to separate CP4BA operators and CP4BA production deployments by using the OpenShift Container Platform console. | Not supported. |
| Oracle 21c does not support Transport Layer Security (TLS) 1.3. | If you plan to use TLS 1.3, choose a database other than Oracle 21c. |
| You cannot use the GitLab container registry as the private image registry to store all Cloud Pak for Business Automation images in an air gap environment. | The GitLab container registry supports a two-level deep naming convention for container images. Cloud Pak for Business Automation images use a path structure that is more than two levels deep, which causes the mirroring process to fail with authentication or permission errors. |
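For the OOMKilled operator issue above, the fix that the Troubleshooting topic describes amounts to patching the operator ClusterServiceVersion (CSV) with larger resource limits. The following is a minimal sketch; the CSV name, namespace, container index, and 2Gi value are placeholders that you must verify against your own cluster and CSV:

```sh
# List the CSVs in the operators namespace (names below are examples).
oc get csv -n cp4ba-operators

# Inspect the CSV to confirm which deployment and container to patch.
oc get csv ibm-cp4a-operator.v24.0.0 -n cp4ba-operators -o yaml | less

# Raise the memory limit of the operator container. The JSON path and
# the value are illustrative; adjust both to match your CSV.
oc patch csv ibm-cp4a-operator.v24.0.0 -n cp4ba-operators --type json -p \
  '[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory", "value": "2Gi"}]'
```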
Backup and restore limitations
| Limitation | Description |
|---|---|
| The Zen Service prevents you from restoring Cloud Pak foundational services from a backup. | The restore procedure fails with errors. The issue is already fixed in Cloud Pak foundational services 4.6.13, which is planned to be included in the IF006 interim fix. |
| The Zen Service restore script from Cloud Pak foundational services has issues, and might exit and restart without ever completing. | The issue is known and the fix is planned to be released in an upcoming version of Cloud Pak foundational services, which Cloud Pak for Business Automation can then include in a future interim fix. |
Image tags cannot start with a zero
If you use image tags in the custom resources to identify different versions of Cloud Pak for Business Automation container images, do not start the tag with a "0" (zero). The "0" is removed from the tag by the operators and the image cannot be pulled as a result. Image tags can include lowercase and uppercase letters, digits, underscores ( _ ), periods ( . ), and dashes ( - ). For more information, see Digests versus image tags.
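As an illustration, the following hedged CR fragment shows the difference. The component section follows the common repository/tag pattern in Cloud Pak for Business Automation custom resources, but verify the exact field names against your CR template:

```yaml
navigator_configuration:
  image:
    repository: cp.icr.io/cp/cp4a/ban/navigator
    # Valid: the tag does not start with "0".
    tag: "24.0.0-IF001"
    # Invalid: the leading "0" would be stripped by the operators,
    # so the image pull would look for "24.0.0-IF001" and could fail.
    # tag: "024.0.0-IF001"
```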
Okta and Microsoft Entra ID (formerly Azure Active Directory or Azure AD) integration
If you are using IBM Automation Document Processing (ADP) or IBM Business Automation Workflow, be sure to review the following known limitations.
| Capability | Limitation |
|---|---|
| Automation Document Processing | ADP is not supported. |
| Business Automation Workflow | |
Multi-zone region (MZR) storage support on Red Hat OpenShift Kubernetes Service (ROKS)
| Limitation | Description |
|---|---|
| When one zone is unavailable, it might take up to a minute to recover. | If all worker nodes in a single zone are shut down or unavailable, it can take up to a minute to access the Cloud Pak applications and services. For example, accessing ACCE from CPE can take a minute to respond. |
Connection issues of Identity Management (IM) to LDAPs
| Limitation | Description |
|---|---|
| IM does not update LDAP certificates automatically. | When the CP4BA operator configures an LDAP connection to IM, the certificates are added to the platform-auth-ldaps-ca-cert secret. |
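Because IM does not refresh these certificates on its own, you may need to update the secret manually after an LDAP certificate rotation. A minimal sketch, assuming foundational services run in cs-namespace and that the secret stores the CA under a key named certificate; both are assumptions to check with the first command:

```sh
# List the keys that the secret currently holds.
oc get secret platform-auth-ldaps-ca-cert -n cs-namespace -o jsonpath='{.data}'

# Replace the stored CA certificate with the rotated one.
# "certificate" is an assumed key name; use the key shown above.
oc set data secret/platform-auth-ldaps-ca-cert -n cs-namespace \
  --from-file=certificate=new-ldap-ca.crt
```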
LDAP failover
LDAP failover is not supported. It is not possible to configure multiple LDAP servers for failover in the custom resource (CR) file template.
LDAP configuration cannot use IPv6 addresses
If the LDAP server is using IPv6, use the hostname of the LDAP server in the configuration of a Cloud Pak for Business Automation deployment instead of an IPv6 address. An error from the Identity Management service reports an "invalid ldap url" when an IPv6 address is used, even when the URL is valid.
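For example, in the ldap_configuration section of the CR, point the server field at the hostname rather than the IPv6 literal. The field names below follow typical CP4BA CR templates; verify them against your own template:

```yaml
ldap_configuration:
  lc_selected_ldap_type: "Microsoft Active Directory"
  # Use the hostname; an IPv6 literal such as 2001:db8::15 is rejected
  # by the Identity Management service with "invalid ldap url".
  lc_ldap_server: "ldap.example.com"
  lc_ldap_port: "636"
```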
IBM Automation Document Processing
| Limitation | Description |
|---|---|
| From 24.0.0-IF006, Document Processing does not support the IBM Security Directory Server LDAP type. | |
| A warning message might appear when uploading DOC or DOCX files. | |
| Issues with versioning when you import a project in Document Processing Designer with the merge or overwrite options. | Importing a project with the overwrite option in Document Processing Designer only supports importing projects from the previous version. If you exported projects from older releases, you must periodically update the archive files to the latest release. It is recommended not to merge projects from different releases. When you import a project with the merge option across releases, you must first migrate the older release archive file into the latest release. When you import a project with the merge option, the current project is merged with the project from the exported archive file. Only non-conflicting document types are imported into the project; conflicting document types are skipped during import. If you need to import a document type that is in conflict, you must first delete that document type from the project before you import the archive file. After you import a project, it is recommended to retrain both the classification and extraction models to avoid issues with the change of document classes in the merged project, and to make sure that the models are trained and built with the latest features. |
| Need an egress to use webhooks with external custom applications. | If you have an external custom application that uses the webhook feature, you must set up a custom egress for Document Processing engine so that notifications can be sent outside of the Red Hat OpenShift Container Platform cluster where Document Processing engine is deployed. For more information, see Creating an external egress for Document Processing engine when an external application uses webhooks. |
| In a starter deployment, simultaneous uploading of multiple large batches might fail with an Uploading Error status. | |
| Accessibility of the Verify client graphical user interface. | A user who uses the Firefox browser to reach the Verify client user interface and then tabs into it to access the various zones cannot tab out again. You reach the Verify client user interface in different ways, depending on your application. For example, for the single document application, you tab into the content list on the start page, select a document from the list by pressing the Tab or Arrow keys, tab to the context icon (the three dots), select the icon by pressing the Space bar or the Enter key, and finally press the Arrow keys to select Review document. |
| Data standardization: uniqueness | You cannot reuse an existing data definition for composite fields, nor create a data definition with a name that is already used for another data definition. When you standardize your data, you can associate a data definition with a field or a document type. These data definitions are used when the project is deployed to a runtime environment. For simple fields, you can either create a data definition or reuse an existing one. For composite fields, you cannot reuse an existing data definition; you can only create one. If you attempt to create a data definition with the same name as an existing one, you get a uniqueness error. |
| Deleting and re-creating a project | If you want to delete and re-create a Document Processing project to start over, you might encounter errors after re-creating the project. This occurs because the re-created project is out of sync with the Git repository. For more information, see Saving an ADP Project fails with status code 500 service error. |
| Fit to height option does not fit to height properly. | In a single or batch document processing application, the Fit to height option does not fit to height properly. When you view a document in a single or batch document processing application, if you rotate this document clockwise or counterclockwise and select the Fit to height option, the size is not changed. The limitation applies when fixing classification or data extraction issues, and in portrait view in the modal viewer dialog. |
| The ViewONE service is not load-balanced. | Due to a limitation when editing the document to fix classification issues in batch document processing applications, the icp4adeploy-viewone-svc service session affinity is configured as ClientIP and the session has to be sticky (an illustrative Service fragment follows this table). |
| Postal mail address and Address information field types | |
| Microsoft Word documents | |
| Support of NVIDIA CUDA drivers 11.2 | To use a FIPS-compliant TensorFlow version, the NVIDIA CUDA drivers 11.2 are required. However, IBM Cloud® Public (ROKS) GPU does not support CUDA 11.2 because it uses Red Hat® Enterprise Linux® (RHEL) 7. The current version of NVIDIA Operator on RHEL 7 is 1.5.2. It cannot be upgraded to the latest version (1.6.x) to use the latest NVIDIA CUDA drivers 11.2, because that version does not support RHEL 7. The current NVIDIA Operator that is installed from the Operator Hub does not run on GPU RHEL 7 Bare Metal Servers, as you only have the option to deploy to 1.6.x. |
| Data extraction from tables is not fully supported. | |
| Some checkboxes are not detected. | Some types of checkboxes are not detected, for example if they are too small or improperly shaped. For more information and examples of non-detected checkboxes, see Limitations for checkbox detection. |
| Problems accessing the application configurator after the initial configuration | |
| SystemT Extractor accuracy | The SystemT extractors that are included with Automation Document Processing are samples that are intended to demonstrate the capabilities of using this feature. They are not tuned to any specific document format and might not provide high recognition rates for some document types. Customers who want to use SystemT extractors in production should build their own SystemT extractors, which can be better trained for documents of the type the customer processes. |
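For reference, the sticky-session configuration that the ViewONE row above describes looks like the following on a Kubernetes Service. This is a generic illustration of sessionAffinity: ClientIP, not the exact manifest that the operator generates; the selector, ports, and timeout are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: icp4adeploy-viewone-svc
spec:
  selector:
    app: viewone          # placeholder selector
  ports:
    - port: 443
      targetPort: 9443    # placeholder ports
  # Requests from one client IP keep landing on the same pod, which is
  # why this service is effectively not load-balanced.
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
```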
IBM Automation Decision Services
For more information, see Known limitations.
IBM Operational Decision Manager
For information about Operational Decision Manager limitations, see Operational Decision Manager Known Limitations.
IBM Automation Workstream Services
For more information, see Workstream limitations.
IBM Business Automation Insights
| Limitation | Description |
|---|---|
| Alerts | |
| Business Performance Center | |
| No Business Automation Insights support for IBM Automation Document Processing | The integration between IBM Automation Document Processing (ADP) and Business Automation Insights is not supported. When you deploy or configure the IBM Cloud Pak for Business Automation platform, select the Business Automation Insights component together with patterns that are supported by Business Automation Insights, such as workflow (Business Automation Workflow) or decisions (Operational Decision Manager), not just with document-processing (IBM Automation Document Processing). |
| Flink jobs might fail to resume after a crash. | After a Flink job failure or a machine restart, the Flink cluster might not be able to restart the Flink job automatically. For a successful recovery, restart Business Automation Insights. For instructions, see Troubleshooting Flink jobs. |
| Elasticsearch indices | Defining a high number of fields in an OpenSearch index might lead to a so-called mappings explosion, which might cause out-of-memory errors and situations that are difficult to recover from. The maximum number of fields in OpenSearch indices created by IBM Business Automation Insights is set to 1000. Field and object mappings, and field aliases, count toward this limit. Ensure that the various documents that are stored in OpenSearch indices do not lead to reaching this limit (a sketch for checking the limit follows this table). Event formats are documented in Reference for event emission. For Operational Decision Manager, you can configure event processing to avoid the risk of mappings explosion. See Operational Decision Manager event processing walkthrough. |
| In the BPEL Tasks dashboard, the User tasks currently not completed widget does not display any results. | The search that is used by the widget does not return any results because it uses an incorrect filter for the task state. To avoid this issue, edit the filter in the User tasks currently waiting to be processed search. Set the state filter to accept one of the following values: TASK_CREATED, TASK_STARTED, TASK_CLAIM_CANCELLED, TASK_CLAIMED. |
| Historical Data Playback REST API | The API plays back data only from closed processes (completed or terminated). Active processes are not handled. |
| In Case dashboards, elapsed time calculations do not include late events: Average elapsed Time of completed activities and Average elapsed time of completed cases widgets. | Events that are emitted after a case or activity completes are ignored. But by setting the bai_configuration.icm.process_events_after_completion parameter to true, you can set the Case Flink job to process the events that are generated on a case after the case is closed. The start and end times remain unchanged. Therefore, the duration is the same but the properties are updated based on the events that were generated after completion. |
| Business Automation Insights deployment fails when Apache Kafka is installed in "All namespaces" and Business Automation Insights is deployed in a dedicated namespace. | Kafka monitors Custom Resources (CRs) across all namespaces, including the Business Automation Insights namespace. This causes conflicts when resources such as KafkaUsers or KafkaTopics share the same name across deployments. API requests that rely on short names become ambiguous or fail when both instances are visible from the same namespace. To solve this issue, deploy Apache Kafka in a namespace-scoped mode that does not include the Business Automation Insights namespace. |
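To watch how close an index gets to the 1000-field ceiling described in the Elasticsearch indices row above, you can query standard OpenSearch endpoints. A rough sketch; the URL, credentials, and index name are placeholders, and the field count is an approximation:

```sh
# Show the configured total-fields limit for the index.
curl -sk -u admin:password \
  "https://opensearch.example.com:9200/my-bai-index/_settings/index.mapping.total_fields.limit?include_defaults=true"

# Approximate the number of mapped fields: each "type" keyword in the
# mapping corresponds roughly to one mapped field.
curl -sk -u admin:password \
  "https://opensearch.example.com:9200/my-bai-index/_mapping" | grep -o '"type"' | wc -l
```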
IBM Business Automation Navigator
| Limitation | Description |
|---|---|
| Resiliency issues can cause lapses in the availability of Workplace after a few weeks. | This issue might be attributed to issues with the Content Platform Engine (cpe) pod. Mitigation steps are available on the support page. |
| Task Manager is not supported when configuring with System for Cross-domain Identity Management (SCIM). | Task Manager requires an LDAP registry for user authorization. It is not supported in a deployment that is configured with SCIM. |
| After you update the schema name in the CR YAML, the schema name is not updated in the system.properties file and the Business Automation Navigator pod uses the old schema name. | You need to manually delete the system.properties file and restart the Business Automation Navigator pod so that it uses the new schema name. |
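A minimal sketch of the manual step from the schema-name row above. The pod name and the location of system.properties inside the container are assumptions; locate the file first and adjust the path:

```sh
# Find the Navigator pod (names below are examples).
oc get pods -n cp4ba-namespace | grep navigator

# Locate the stale system.properties file inside the container.
oc exec icp4adeploy-navigator-deploy-0 -n cp4ba-namespace -- \
  find / -name system.properties 2>/dev/null

# Delete it (replace the path with the one found above), then delete
# the pod so the restarted pod regenerates the file with the new schema name.
oc exec icp4adeploy-navigator-deploy-0 -n cp4ba-namespace -- \
  rm /path/from/find/system.properties
oc delete pod icp4adeploy-navigator-deploy-0 -n cp4ba-namespace
```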
IBM Business Automation Application and IBM Business Automation Studio
| Limitation | Description |
|---|---|
| Installing the Application pattern with an Oracle database fails if you use Oracle Database 23c Free – Developer Release with Transport Layer Security (TLS) 1.3. | The Application Engine and the Application Engine playback server do not support Oracle Database 23c Free – Developer Release. An exception is thrown when a connection is attempted. |
| Process applications from Business Automation Workflow do not appear in Application Designer. | Sometimes the app resources of the Workflow server do not appear in Studio when you deploy Workflow server instances, Studio, and Resource Registry in the same custom resource YAML file. If you deployed Business Automation Studio with the Business Automation Workflow server in the same custom resource YAML file, and you do not see process applications from the Business Automation Workflow server in Business Automation Studio, restart the Business Automation Workflow server pod. |
| The Business Automation Workflow toolkit and configurators might not get imported properly. | When you install both Business Automation Workflow on containers and Business Automation Studio together, the Business Automation Workflow toolkit and configurators might not get imported properly. If you don't see the Workflow Services toolkit, the Start Process Configurator, or the Call Service Configurator, manually import the .twx files by downloading them from the Contributions table inside the Resource Registry section of the Administration page of Business Automation Studio. |
| Kubernetes kubectl known issue: modified subpath configmap mount fails when container restarts (#68211). | Business Automation Studio related pods go into a CrashLoopBackOff state during the restart of the docker service on a worker node, with an error like: Warning Failed 3m kubelet, <IP_ADDRESS> Error: failed to start container: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \" rootfs_linux.go:58: mounting ... To recover a pod, delete it in the OpenShift® console and create a new pod. |
| To use Application Engine with Db2® for High Availability and Disaster Recovery (HADR), you must have an alternative server available when Application Engine starts. | Application Engine depends on the automatic client reroute (ACR) of the Db2 HADR server to fail over to a standby database server. You must have a successful initial connection to that server when Application Engine starts. |
| IBM Resource Registry can get out of sync. | If you have more than one etcd server and the data gets out of sync between the servers, you must scale to one node and then scale back to multiple nodes to synchronize Resource Registry (a scaling sketch follows this table). |
| After you create the Resource Registry, you must keep the replica size. | Because of the design of etcd, changing the replica size can cause data loss. If you must set the replica size, set it to an odd number. If you reduce the pod size, the pods are deleted one by one to prevent data loss and the possibility that the cluster gets out of sync. |
| After you deploy Business Automation Studio or Application Engine, you cannot change the Business Automation Studio or Application Engine admin user. | Make sure that you set the admin user to a sustainable username at installation time. |
| Because of a Node.js server limitation, Application Engine trusts only root CA certificates. | If an external service is used and signed with another root CA, you must add the root CA as trusted instead of the service certificate. |
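A minimal sketch of the scale-down/scale-up cycle for the Resource Registry etcd row above, assuming Resource Registry runs as a StatefulSet. The name is a placeholder, and if the operator reconciles the replica count you should change it through the CR instead:

```sh
# Find the Resource Registry StatefulSet (name below is an example).
oc get statefulsets -n cp4ba-namespace | grep resource-registry

# Scale to a single node so one etcd member holds the authoritative data.
oc scale statefulset icp4adeploy-resource-registry -n cp4ba-namespace --replicas=1

# Wait for the single pod to become Ready, then scale back to an odd
# number of replicas so the members resynchronize.
oc scale statefulset icp4adeploy-resource-registry -n cp4ba-namespace --replicas=3
```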
IBM FileNet® Content Manager
| Limitation | Description |
|---|---|
| A smaller number of indexing batches than configured leads to a noticeable degradation in the overall indexing throughput rate. | Because obsolete Virtual Servers in the Global Configuration Database (GCD) are not automatically cleaned up, the Virtual Server count can be higher than the actual number of CE instances with dispatching enabled. That inflated number results in a smaller number of concurrent batches per CSS server, negatively affecting indexing performance. For more information and to resolve the issue, see Content Platform Engine uneven CBR indexing workload and indexing degradation. |
| Downloading just Jace.jar from the Client API Download area of ACCE fails; the text "3.5.0.0 (20211028_0742) x86_64" is returned instead. | When an application like ACCE is accessed through Zen, the application and certain operator-controlled elements in Kubernetes need additional logic to support embedded or "self-referential" URLs. The single file download in the Client API Download area of ACCE uses self-referential URLs, and the additional logic is missing. To avoid the self-referential URLs, download the whole Client API package that contains the desired file instead of an individual file, and then extract the individual file from the package. |
| Queries to retrieve group hierarchies using the SCIM Directory Provider might fail if one of the groups in the hierarchy contains a space or another character that is not valid in an HTTP URL. | The problem can occur when performing searches of users or groups and the search tries to retrieve the groups that a group belongs to. If one of the groups in this chain contains a space or another character that is not valid in an HTTP URL, the search might fail. |
| LDAP to SCIM attribute mapping may not be correct. | The default LDAP to SCIM attribute mapping used by IM may not be correct. In particular, TDS/SDS LDAP may have incorrect mappings for the group attributes for objectClass and members. To learn more about how to review this mapping and change it, see Updating SCIM LDAP attributes mapping. |
| When you use the SCIM Directory Provider to perform queries for a user or group with no search attribute, all users and groups are returned rather than none. | Queries without a search pattern are treated as a wildcard rather than as a restriction to return nothing. |
IBM Business Automation Workflow and IBM Workflow Process Service
For IBM Business Automation Workflow, see IBM Business Automation Workflow known limitations.
For Workplace, see Workplace limitations.
