Known limitations and issues
Before you use IBM Cloud Pak for Automation, make sure that you are aware of the known limitations.
- IBM Automation Digital Worker
- IBM Business Automation Studio
- User Management Service
- IBM FileNet Content Manager
- IBM Content Navigator
- IBM Business Automation Insights
- IBM Business Automation Content Analyzer
- IBM Business Automation Configuration Container
IBM Automation Digital Worker
New in 19.0.3

| Limitation or issue | Description |
|---|---|
| Tasks that are running on a runtime Pod cannot complete their run if the Pod crashes or if you scale down the number of Pods. | Before you scale down, make sure that no tasks are running. Be prepared to restart a task after a runtime Pod crash. |
| Local discrepancies can occur if several UMS users are connected to the same Digital Worker instance at the same time. | If several UMS users are connected to the same Digital Worker instance, persistence follows the last modification that was made. This can lead to local discrepancies, especially if different users edit the same artifact. You can resolve those discrepancies by refreshing your current page. For example, if you deploy a task and get an error, it might be because another user added incomplete instructions in the meantime, which prevents any deployment at this moment. |
| No automatic validation is done on instructions or schemas when tasks are auto-saved. | When you are working on tasks, your work is saved automatically, but the instructions and the schemas are not validated. Validation of these artifacts is done when you deploy the task. |
| You must not have different versions of a skill in the same task. | You must not have different versions of a skill in the same task, even though there is no warning message if this occurs. |
| Skills in a task must have a unique name. | You must not have multiple skills with the same name in the same task, even though there is no warning message if this occurs. To be able to call each skill, they must have unique names. |
| Cannot check the version of a skill template after you configure it. | After you configure a skill from a skill template, you cannot see the version of the original template anymore. |
| Elasticsearch index limitation when you send tracking data to IBM Business Automation Insights. | By default, an Elasticsearch index can contain only 1000 different fields. If you do not change your Business Automation Insights Elasticsearch settings, make sure that you do not send more than 950 distinct tracking data fields. A sketch for raising the limit follows this table. |
| Sending multiple emails with the Send email skill might not be possible because of mail provider restrictions. | Every time the Send email skill runs, a connection is opened, an email is sent, and finally the connection is closed. This is not suitable for sending multiple emails because of mail provider limitations (for example, the limit with a Gmail account is 100 emails per day). |
| External services can have limitations that prevent you from running skills in parallel. | If you run several skills in parallel, you must make sure that those skills can support it. For example, IBM Watson® Visual Recognition returns the error "Too Many requests" if you do not respect the product limitations. |
| If you create an Array object in the task instructions and pass it into a skill, array instanceof Array returns false in the code of the skill. | If you create an Array object in the task instructions and pass it into a skill, array instanceof Array returns false in the code of the skill. However, you are still able to access the array elements as expected. |
| The timeout of a task run is 1 hour by default, and cannot be overridden if you schedule a task or run from the user interface. | By default, when a task is run, it times out after 1 hour. When you run a task on demand, you can override that timeout. However, you cannot override the timeout if you schedule a task to run, or if you run a task from the user interface. |
| You must scale the number of Pods (replicas) if you reach 1 CPU per Pod. | If you reach 1 CPU per Pod, you must scale the number of Pods (replicas) because adding more CPU does not help scaling. A sketch of the scaling command follows this table. |
| You must log in again to access the Kibana performance dashboard. | When you open your performance dashboard from the Monitor view in Digital Worker, you might need to log in again to access the Kibana dashboard. This is because Business Automation Insights does not use UMS. |
| Only Coordinated Universal Time is supported when you schedule tasks to run. | You must use Coordinated Universal Time when you set the schedule for a task, and not your local time. |
| OpenID subject name is displayed as the owner of a task. | In Digital Worker, the name that is displayed as the owner of a task is the user's OpenID subject, and not the user's full name. |
| Schedules for tasks are removed when you undeploy. | When you undeploy a task, if there is a schedule set for this task, it is removed. You must set it again when you redeploy the task. |
| When you deploy a skill that you developed with the skill toolkit, you must bundle any private packages not available on public NPM with the skill. | If you use private packages that are not available on public NPM when you deploy skills that you developed from scratch, you must bundle those private packages with your skill. Otherwise, the deployment of your skill fails. |
| Public internet connection is required. | The cluster on which the skills are installed must be connected to the public internet. |
| Digital Worker cannot connect to SaaS versions of IBM Operational Decision Manager and IBM Business Automation Content Analyzer. | If you want to use skills that are connected to IBM Operational Decision Manager and IBM Business Automation Content Analyzer, you must configure those skills with on-premises versions of those products. |
| Run results are removed after one hour. | When you run a task, you can access the results for one hour after the start of the run. |
| A client_ID shared across several applications is not supported. | If the client_ID exists but the redirect_URL changes, Digital Worker installs without error. However, the designer displays an error that states that the redirect URL is not correct. To resolve this error, you must reinstall Digital Worker with a new client_ID, or after you delete the current client_ID. To avoid this issue, do not share a client_ID across several applications. |
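For the Elasticsearch index limitation above, the 1000-field default corresponds to an Elasticsearch index setting (index.mapping.total_fields.limit) rather than a Digital Worker setting. The following is a minimal sketch of how that limit could be raised on an existing index; the host, credentials, index name, and new limit are placeholders rather than documented values, and raising the limit does not remove the risk of a mappings explosion.

```bash
# Sketch only: raise the field limit on one index.
# <elasticsearch-host>, <user>, <password>, and <bai-index-name> are placeholders
# that depend on your Business Automation Insights deployment.
curl -k -u <user>:<password> -X PUT \
  "https://<elasticsearch-host>:9200/<bai-index-name>/_settings" \
  -H 'Content-Type: application/json' \
  -d '{ "index.mapping.total_fields.limit": 2000 }'
```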
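For the Pod scaling limitation above, replicas are scaled with standard Kubernetes tooling. This is a sketch only; the deployment name and namespace are placeholders and depend on how Digital Worker was installed.

```bash
# Sketch only: increase the number of runtime Pod replicas.
# <adw-runtime-deployment> and <namespace> are placeholders for your installation.
kubectl scale deployment <adw-runtime-deployment> --replicas=3 -n <namespace>
# Verify that the new Pods are running before you run more tasks.
kubectl get pods -n <namespace>
```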
IBM Business Automation Studio
New in 19.0.2

| Limitation | Description |
|---|---|
| Kubernetes kubectl known issue https://github.com/kubernetes/kubernetes/issues/68211. | Business Automation Studio related pods go into a CrashLoopBackOff state during the restart of the docker service on a worker node. The pod events show a warning similar to the following: Warning Failed 3m kubelet, 172.16.191.220 Error: failed to start container: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/69e99228-b890-11e9-81c0-00163e01b43c/volume-subpaths/key-trust-store/ibm-dba-ums/2\\\" to rootfs \\\"/var/lib/docker/overlay2/0f50004756b6e39e6da16f57d7fdb9f72898bc8a5748681218a7b1b20eb0612b/merged\\\" at \\\"/var/lib/docker/overlay2/0f50004756b6e39e6da16f57d7fdb9f72898bc8a5748681218a7b1b20eb0612b/merged/opt/ibm/wlp/usr/shared/resources/security/keystore/jks/server.jks\\\" caused \\\"no such file or directory\\\"\"": unknown. To recover a pod, delete it in the OpenShift console and create a new pod. A sketch of the command-line equivalent follows this table. |
| IBM Resource Registry can have only one etcd server. | App resources are not registered successfully in Resource Registry if you have more than one etcd server. If the data gets out of sync between the servers, you must scale to one node and then scale back to multiple nodes before Resource Registry works again. A sketch of the scaling commands follows this table. |
| To use IBM Business Automation Application Engine (App Engine) with Db2® for High Availability and Disaster Recovery (HADR), you must have an alternative server available when App Engine starts. | App Engine depends on the automatic client reroute (ACR) of the Db2 HADR server to fail over to a standby database server. There must be a successful initial connection to that server when App Engine starts. |
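For the CrashLoopBackOff limitation above, deleting the affected pod from the command line is an alternative to using the OpenShift console; the managing controller then re-creates it. This is a sketch only, assuming the pod is managed by a Deployment or StatefulSet; pod and namespace names are placeholders.

```bash
# Sketch only: find and delete a pod that is stuck in CrashLoopBackOff
# so that its controller re-creates it. Names are placeholders.
oc get pods -n <namespace> | grep CrashLoopBackOff
oc delete pod <pod-name> -n <namespace>
```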
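For the Resource Registry limitation above, scaling to one node and back can be done with kubectl. This is a sketch under the assumption that the etcd servers run as a StatefulSet; the actual resource kind, name, and replica count depend on your release and might differ.

```bash
# Sketch only: scale the Resource Registry etcd servers down to one node, then back up.
# <resource-registry-statefulset> and <namespace> are placeholders; the resource kind
# might be different in your deployment.
kubectl scale statefulset <resource-registry-statefulset> --replicas=1 -n <namespace>
# Wait until the remaining node is ready, then scale back to the original count.
kubectl scale statefulset <resource-registry-statefulset> --replicas=3 -n <namespace>
```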
User Management Service
New in 19.0.2

| Limitation or issue | Description |
|---|---|
| Error message CWWKS1424E | When you start UMS for the first time, initialization causes error message CWWKS1424E to be logged twice. You can safely ignore the first two occurrences of this error. |
IBM FileNet Content Manager
New in 19.0.3

| Limitation or issue | Description |
|---|---|
| Zero-byte file causes error message | When you deploy the external share container with the operator, but do not use the external LDAP settings, a zero-byte file called ibm_ext_ldap_AD.xml is created inside the container. This file triggers an invalid file error message. To solve the issue, you must manually delete the file from the disk. A sketch follows this table. |
| Process Engine functions are not supported with UMS integration | If you plan to use Process Engine functions, such as validating workflow, do not configure UMS integration with Content Platform Engine. |
| Deployment with an operator cannot support creating more than one data source with Oracle database. | Although the custom resource supports more than one data source, the generic secret cannot support more than one, because each data source might have distinct user names and passwords. |
| Setting autoscale=true is not supported on OpenShift Container Platform V4.2. | When autoscale=true is set, creating an object store causes an error stating that a uniqueness constraint has been violated. This occurs because of the timeout of the sticky session. |
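For the zero-byte ibm_ext_ldap_AD.xml limitation above, the file can be removed from inside the running external share container. This is a sketch only; the pod name, namespace, and file path are placeholders because they depend on your deployment.

```bash
# Sketch only: locate and remove the zero-byte ibm_ext_ldap_AD.xml file.
# <external-share-pod> and <namespace> are placeholders; confirm the path before deleting.
oc exec <external-share-pod> -n <namespace> -- find / -name ibm_ext_ldap_AD.xml
oc exec <external-share-pod> -n <namespace> -- rm <path-from-previous-command>/ibm_ext_ldap_AD.xml
```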
IBM Content Navigator
New in 19.0.3

| Limitation or issue | Description |
|---|---|
| Cannot upload a plug-in when Navigator mode is set to 0. | When the Navigator mode is Platform (0), the Upload File Path on Server option is unavailable for uploading a plug-in. To work around this limitation: |
IBM Business Automation Insights
- Limitations that apply to Kubernetes deployment only

Table 6. Limitations to Kubernetes deployment

| Limitation or issue | Description |
|---|---|
| Security of communications to Kafka | Salted Challenge Response Authentication Mechanism (SCRAM) is not supported. Plain SSL, SSL with username and password, SSL with Kerberos authentication, and Kerberos authentication are supported. |

- Limitations that apply to both Kubernetes and single node deployments

Table 7. Limitations common to Kubernetes and single node deployment

| Limitation or issue | Description |
|---|---|
| Elasticsearch docker images | Elasticsearch and Kibana docker images with X-Pack installed are not supported. IBM Business Automation Insights 19.0.2 and later supports the capability to take snapshots and restore the internal user database, action groups, roles, and role mappings. However, alternative authentication methods through SAML or OpenID Connect are provided as a technology preview, that is, without any support from IBM. |
| Elasticsearch indexes | Defining a high number of fields in an Elasticsearch index might lead to a so-called mappings explosion, which might cause out-of-memory errors and situations that are difficult to recover from. The maximum number of fields in Elasticsearch indexes that are created by IBM Business Automation Insights is set to 1000. Field and object mappings, and field aliases, count towards this limit. Ensure that the various documents that are stored in Elasticsearch indexes do not lead to reaching this limit. For more information, see BPMN summary event formats, Case and activity summary event formats, and the Settings to prevent mappings explosion page of the Elasticsearch documentation. |
| Case dashboard elapsed time calculations do not include late events: Average elapsed time of completed activities and Average elapsed time of completed cases widgets. | New in 18.0.2: Events that are emitted after a case or activity completes are ignored. |
| New in 18.0.2 Case activity charts | If you installed IBM Business Automation Insights with the Case event emitter from IBM Business Automation Workflow 18.0.0.2 or earlier, or from IBM Case Manager 5.3.3 interim fix IF003 or earlier, case activity charts might not reflect correct data. To avoid the issue, use the Case event emitter from IBM Business Automation Workflow 18.0.0.2 interim fix 8.6.10018002-WS-BPM-IFPJ45625, or IBM Business Automation Workflow 19.0.0.1 or higher. If some older Elasticsearch or HDFS data is pushed by the older Case event emitter, follow the replay procedure to clear the old data. Use the new Case event emitter that is released with IBM Business Automation Workflow 18.0.0.2 interim fix 8.6.10018002-WS-BPM-IFPJ45625, or IBM Business Automation Workflow 19.0.0.1 or higher fix pack to push the events. For more information about the replay procedure, see Replaying Case events. |
| The User tasks currently not completed widget in the BPEL Tasks dashboard does not display any results. | The search that is used by the widget does not return any results because it uses an incorrect filter for the task state. To avoid this issue, edit the filter in the User tasks currently waiting to be processed search. Set the state filter to accept one of the following values: TASK_CREATED, TASK_STARTED, TASK_CLAIM_CANCELLED, TASK_CLAIMED. |
| Historical Data Playback REST API | The API plays back data only from closed processes (completed or terminated). Active processes are not handled. |
| New in 19.0.1 Alerting feature on the Kibana graphical user interface not usable | In the Kibana interface, the Alerting menu item for monitoring your data and automatically sending alert notifications is present, but the underlying feature is not enabled. |

- New in 19.0.3 Limitations that apply to single node deployment only

Table 8. Limitations to single node deployment

| Limitation or issue | Description |
|---|---|
| Elasticsearch and Kafka | You can use only embedded Elasticsearch and Confluent Kafka. |
| Security of communications to Kafka | Salted Challenge Response Authentication Mechanism (SCRAM), plain SSL, SSL with Kerberos authentication, and Kerberos authentication are not supported. Only SSL with user name and password is supported. |
| Apache ZooKeeper | The version of ZooKeeper that is bundled with Confluent Kafka does not support SSL. For more information, see the ZooKeeper page of the Confluent Kafka documentation. |
| Apache Flink | The Flink web interface is not accessible from Chrome on MacOS Catalina. The Flink job manager and the task manager are not automatically restarted after a machine reboot. Therefore, you must restart IBM Business Automation Insights. |
| Event emitters | The event emitters for FileNet® Content Manager (content) and for IBM Automation Digital Worker (adw) are not supported. |
| HDFS | Injection of events to an HDFS data lake is supported but with no authentication mechanisms. |
IBM Business Automation Content Analyzer
New in 19.0.1

| Limitation or issue | Description |
|---|---|
| New in 19.0.1 Document classification issue for PDF | If you upload a searchable PDF, the document cannot be classified. |
IBM Business Automation Configuration Container
The container is removed in 19.0.3.
| Limitation | Description |
|---|---|
| LDAP user cannot log in to the FileNet P8 domain created by IBM FileNet Content Manager initialization. | Only the admin user and group that are set in the initialization can log in to Content Navigator. Other LDAP users cannot log in to the domain. The security setting is different from that of a domain that is created when you first log in to the Administration Console for Content Platform Engine. If the domain is created manually in the administration console, the problem does not exist. In the Administration Console for Content Platform Engine, update the security settings on the Content Platform Engine domain to add AUTHENTICATED-USERS. |
| Kubernetes kubectl known issue https://github.com/kubernetes/kubernetes/pull/67817. | When Business Automation Configuration Container checks a deployment status by using kubectl rollout status, it might return timeout errors due to a kubectl known issue. Product provisioning fails as a result. Redeploy the configuration in Business Automation Configuration Container to try to complete the installation. A sketch for checking the status manually follows this table. |
| Business Automation Configuration Container cannot be used to add products that are not in the initial installation of a deployment. | Business Automation Configuration Container speeds up the deployment of product capabilities into IBM Cloud Private. It is not a replacement for IBM Cloud Private and Helm management. If you want to add a product or component to the workload on IBM Cloud Private, you must create a new deployment. You have three options to do the deployment. For more information, see Naming the deployment. |
| When you start the Business Automation Configuration Container service from within IBM Cloud Private, the welcome page can be blocked by Symantec services, like Blue Coat. | To resolve the issue, add the URL to your white list or filter list. |
| When you use Internet Explorer or Edge browser to access Business Automation Configuration Container, the log output might not display as expected. | On the deployment result page, the job container displays the log output in real time. When you use an Internet Explorer or Edge browser, sometimes no log output displays and the deployment progress continues to show as not started. The EventSource API that is used to fetch the server log information is not supported by Internet Explorer and Edge browsers, as explained in the following information: https://developer.mozilla.org/en-US/docs/Web/API/EventSource#Browser_compatibility To resolve the issue, use other browsers like Chrome, Safari, and Firefox. |
| A configuration file that is exported as part of an edit and redeploy operation must not be used as an imported configuration for a subsequent deployment. | When a configuration file is exported during an edit and redeploy, the export process excludes certain settings. The missing settings can cause the new deployment to fail. |
| If the value for the Object store Data table space in the Enable workflow settings is not specified in all uppercase letters, the initialization fails. | Table space names that are used by Content Platform Engine must contain only alphanumeric and underscore characters. Names must start with a letter of the alphabet and must be at most 18 characters long. Any table space that is used by the workflow system must have an all uppercase name. If you plan to use the Content Platform Engine object store table space for the workflow system, use an uppercase name for that table space. |
| The product name in metering reports is incorrect. | If IBM Business Automation Workflow is the first product you install on IBM Cloud Private, metering reporting items from all IBM Cloud Pak for Automation deployed products are assigned to IBM Business Automation Workflow Enterprise. To prevent metering from using the incorrect name, install one of the other IBM Cloud Private products before you install IBM Business Automation Workflow. |
| New in 19.0.1 The qualification in the catalog is incorrectly displayed as 1 months. The duration must match the duration value in the qualification.yaml file. | Obtain the patch details for IBM Cloud Private 3.1.2 from Fix Central. Select interim fix ID: icp-3.1.2-build522133-26442. |
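For the kubectl rollout status limitation above, the deployment status can be checked manually while Business Automation Configuration Container reports a timeout. This is a sketch only; the deployment and namespace names are placeholders.

```bash
# Sketch only: check the rollout manually when Business Automation Configuration Container
# reports a timeout. <deployment-name> and <namespace> are placeholders.
kubectl rollout status deployment/<deployment-name> -n <namespace>
kubectl get pods -n <namespace>
```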