IBM Support

IBM Cloud Pak for Business Automation 24.0.x Known Limitations


This web page provides a list of known limitations in IBM Cloud Pak® for Business Automation 24.0.x. Workarounds are provided where possible.


IBM Cloud Pak for Business Automation 24.0.x

The following limitations pertain to IBM Cloud Pak for Business Automation 24.0.x releases. They are subject to change between releases.

Issue | Description | Version
The steps and links for backing up and restoring IBM Cloud Pak foundational services are not available in the IBM Cloud Pak for Business Automation documentation. The IBM Cloud Pak for Business Automation disaster recovery solution requires backing up and restoring both IBM Cloud Pak for Business Automation components and IBM Cloud Pak foundational services.

Symptom: You cannot back up and restore IBM Cloud Pak foundational services by using IBM Documentation.

Cause: No further information is available.

Solution: For assistance in backing up and restoring IBM Cloud Pak foundational services, contact the IBM Cloud Pak foundational services team: https://www.ibm.com/docs/en/cloud-paks/foundational-services/4.6?topic=about-support#support_case
 
24.0.0
 
Issue | Description | Version
When upgrading CP4BA 21.0.3 IF031 or later (cluster-scope with all namespaces) to CP4BA 24.0.0 (cluster-scope with all namespaces), the ibm-events-operator sometimes fails to migrate to the new version.
Symptom: You will see an "Operator failed" message in the OpenShift console for ibm-events-operator in the openshift-operators namespace.

Cause: During the migration from CPFS v3 to v4, all operators are moved from the ibm-common-services namespace to openshift-operators. The existing ibm-events-operator in the ibm-common-services namespace was not deleted by the ODLM pod.

Solution:
  • Log in to the OpenShift CLI as an administrator.
  • Delete the stale CSV:

    export ns=ibm-common-services
    oc delete csv -l operators.coreos.com/ibm-events-operator.$ns='' -n $ns

  • ODLM automatically reconciles and re-creates ibm-events-operator in the ibm-common-services namespace.


 

24.0.0
When upgrading CP4BA 21.0.3 IF031 or later (cluster-scope) to CP4BA 24.0.0 (cluster-scope), sometimes the operandrequest kafka-iaf-system has no status and the ibm-events-operator subscription and CSV are not upgraded.
Symptom: After applying the new CR during the upgrade, with the CP4BA operator in Running state, the operandrequest kafka-iaf-system has no status and ibm-events-operator is not upgraded in the ibm-common-services namespace.

Cause: During the upgrade from CPFS v3 to v4, the ODLM pod got stuck while reconciling the operandRequests.

Solution:
  • Log in to the OpenShift CLI as an administrator.
  • Restart the ODLM pod:

    export ns=ibm-common-services
    oc get pods -n $ns | grep -i operand-deployment-lifecycle-manager
    oc delete pod <podname> -n $ns # Replace <podname> with the name from the previous output.

  • ODLM automatically reconciles and re-creates the kafka-iaf-system operandrequest and the ibm-events-operator subscription and CSV in the ibm-common-services namespace.
24.0.0
When upgrading CP4BA 21.0.3 IF031 or later (cluster-scope) to CP4BA 24.0.0 (cluster-scope), the zenService progress sometimes gets stuck at 4%.
Symptom: After applying the new CR during the upgrade, with the CP4BA operator in Running state, the operandrequest for Zen is applied and the operator is upgraded, but the zenService progress is stuck at 4%.

Cause: During the upgrade from CPFS v3 to v4, the postgresql-operator might not be ready to process new updates from the Zen operator.

Solution:
  • Log in to the OpenShift CLI as an administrator.
  • Restart the PostgreSQL operator pod:

    export ns=ibm-common-services
    oc get pods -n $ns | grep -i postgresql-operator-controller-manager
    oc delete pod <podname> -n $ns # Replace <podname> with the name from the previous output.

  • The Zen operator automatically reconciles and creates the required labels and annotations for the Zen PostgreSQL database.
24.0.0
When deploying CP4BA 24.0.0, the deployment may fail if the namespace name contains only numeric characters.

Symptom: Deployment fails when deploying into a namespace whose name contains only numbers.

Cause: The deployment cannot proceed because purely numeric namespace names (such as 2400, 1234, or 2103) are interpreted as numbers instead of strings. This will be resolved in a future iFix.
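Until an iFix is available, the namespace name can be validated up front. A minimal sketch (this check is an illustration, not part of the CP4BA scripts):

```shell
# Guard against numeric-only namespace names (e.g. 2400, 1234, 2103),
# which the 24.0.0 deployment cannot interpret as strings.
check_ns() {
  case "$1" in
    *[!0-9]*) echo "ok: '$1'" ;;                 # contains a non-digit
    *)        echo "invalid: '$1' is numeric-only" ;;
  esac
}
check_ns 2400        # prints: invalid: '2400' is numeric-only
check_ns cp4ba-2400  # prints: ok: 'cp4ba-2400'
```

Adding at least one letter to the namespace name (for example, cp4ba-2400) avoids the failure.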
 
 
24.0.0
When upgrading CP4BA 23.0.2 IF004 or later (namespace-scope) to CP4BA 24.0.0 (namespace-scope), the "usermgmt-ensure-tables-job" and "zen-minio-create-buckets-job" jobs remain at version 5.1.2 instead of version 5.1.4.

Symptom: After applying the new CR during the upgrade, with the CP4BA operator in Running state, the zenService is upgraded to the new version 5.1.4, but the annotations of the "usermgmt-ensure-tables-job" and "zen-minio-create-buckets-job" jobs still show version 5.1.2. Having these jobs at the old version does not cause any issues with the zenService.

Cause: There are no schema changes between these two versions, which is why the two jobs still point to the old version instead of the new one.
24.0.0
When installing CP4BA 24.0.0 on a hugepages-enabled OpenShift cluster with the hugepages configuration enabled, on some OpenShift clusters the zenService does not progress to completion.
Symptom: The zenService does not progress to completion, and the EDB Cluster CR zen-metastore-edb is in a Failed state.

Cause: On some hugepages-enabled OpenShift clusters, the EDB PostgreSQL pods require a hugepages resource to be specified explicitly.

Solution:
  • Update the EDB Cluster configuration for zen-metastore-edb under the resources limits with an entry for the hugepages type and its size value (for example: hugepages-2Mi: "2Gi").
  • The Zen operator automatically reconciles and completes the deployment.
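For reference, the change lands in the zen-metastore-edb Cluster resource roughly as follows (a sketch; the hugepages size and value must match your cluster's hugepages configuration):

```yaml
# oc edit cluster zen-metastore-edb -n <cp4ba-namespace>
spec:
  resources:
    limits:
      hugepages-2Mi: "2Gi"   # example from the step above; adjust to your cluster
```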
24.0.0

Default self-signed certificates for the CPD route are not renewed automatically when they expire.

Symptom: An expired certificate is presented when connecting through the CPD route. This also causes issues within CP4BA when services that call the CPD route URL encounter the expired certificate.

Cause: In CP4BA, the default CPD route certificate expires after 90 days. The certificate stored in the iaf-system-automationui-aui-zen-cert secret is renewed automatically by the cert manager, but the route is not updated when that secret changes.

Solution:
Manually editing the ZenService CR triggers the route to be updated with the secret. Make two quick updates to change the secret in the ZenService:
  •  oc patch ZenService iaf-zen-cpdservice --type='json' -p='[{"op": "add", "path": "/spec/zenCustomRoute","value":{"route_secret":"dummy-value","route_reencrypt":true}}]' -n ${CP4BA_AUTO_NAMESPACE}
  •  oc patch ZenService iaf-zen-cpdservice --type='json' -p='[{"op": "add", "path": "/spec/zenCustomRoute","value":{"route_secret":"iaf-system-automationui-aui-zen-cert","route_reencrypt":true}}]' -n ${CP4BA_AUTO_NAMESPACE}
The CPD route certificate is then updated.
24.0.0
When upgrading CP4BA 21.0.3 IF031 or later (cluster-scope with all namespaces) to CP4BA 24.0.0 (cluster-scope with all namespaces), the cp-console-iam-provider route creation sometimes fails in the ibm-common-services namespace.
Symptom: The common-UI operator fails to come up, with an OOMKilled error in the openshift-operators namespace.

Cause: The common-UI operator pod could not start because of insufficient memory.

Solution:
  • Log in to the OpenShift CLI as an administrator.
  • Patch the operator CSV memory limit:

    export operatorNamespace=<operator-namespace> # for example, openshift-operators
    export commonuiCSV=$(oc -n $operatorNamespace get csv | grep ibm-commonui-operator | awk '{print $1}')
    oc -n $operatorNamespace patch csv $commonuiCSV --type="json" -p '[{"op": "replace", "path":"/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory", "value":"200Mi"}]'

  • The common-UI operator pod restarts automatically with the increased memory limit.
24.0.0
When installing CP4BA 24.0.0 with the FNCM, ODM, or Workflow Runtime capabilities and without BAI as an optional component, do not answer "Yes" when the cp4a-prerequisites.sh script asks whether to use an external PostgreSQL database for BTS.
Symptom: The cp4a-prerequisites.sh script asks a question about BTS that is not applicable when installing only the FNCM, ODM, or Workflow Runtime capability in CP4BA.

Cause: This is a script issue.

Solution:
  • When cp4a-prerequisites.sh asks "Do you want to use an external Postgres DB [YOU NEED TO CREATE THIS POSTGRESQL DB BY YOURSELF FIRST BEFORE APPLY CP4BA CUSTOM RESOURCE] as BTS metastore DB for this CP4BA deployment?" and you are installing FNCM or ODM without the BAI optional component, select "No".
  • This is fixed in 24.0.0-IF001.
24.0.0
When installing CP4BA 24.0.0 with BAA, ADS, ADP, Workflow Authoring, or BAI as an optional component, and answering "Yes" when the cp4a-prerequisites.sh script asks whether to use an external PostgreSQL database for BTS: if you share the same PostgreSQL database server as the external PostgreSQL for IM/Zen, do not enable password authentication on PostgreSQL.
Symptom: The cp4a-prerequisites.sh script generates "userSecretName: bts-datastore-edb-user" in the ibm-bts-config-extension configMap.

Cause: IM/Zen supports only client-certificate-based authentication on PostgreSQL, while BTS supports both password-based and client-certificate-based authentication.

Solution:
* After cp4a-prerequisites.sh generates the <cert-kubernetes>/scripts/cp4ba-prerequisites/secret_template/bts_external_db/ibm-bts-metastore-edb-cm.yaml file, remove "userSecretName: bts-datastore-edb-user" from the ibm-bts-config-extension configMap definition.
* This is fixed in 24.0.0-IF001.
 
24.0.0
When running the cp4a-deployment.sh script in [upgradeOperator mode] for the migration from Elasticsearch to OpenSearch, the script shows two commands to scale down Business Performance Center (BPC) to prevent dirty data from being generated during the migration. If CP4BA is installed in all namespaces, one of the two commands is not shown, so BPC cannot be scaled down as expected.
Symptom: One of the two commands for scaling down Business Performance Center (BPC) is not shown when running cp4a-deployment.sh in upgradeOperator mode for the migration from Elasticsearch to OpenSearch.

Cause: When CP4BA is installed in all namespaces, the cp4a-deployment.sh script fails to fetch the BAI insights engine operator name from the all-namespaces (openshift-operators) project.

Solution:
* When migrating from Elasticsearch to OpenSearch before upgrading from 21.0.3 (all namespaces):
 # oc scale --replicas=0 deployment.apps/iaf-insights-engine-operator-controller-manager -n openshift-operators
 # oc scale --replicas=0 deployment.apps/iaf-insights-engine-cockpit -n <cp4ba-ns>

* When migrating from Elasticsearch to OpenSearch before upgrading from 23.0.2 (all namespaces):
 # oc scale --replicas=0 deployment.apps/ibm-insights-engine-operator -n openshift-operators
 # oc scale --replicas=0 deployment.apps/<cp4ba-cr-name>-insights-engine-cockpit -n <cp4ba-ns>

* This is fixed in 24.0.0-IF001.
24.0.0
When installing CP4BA 24.0.0, running the scripts on macOS may fail.
Symptom: Deployment fails while creating the subscription when the deployment scripts are run on macOS.

Cause: The failure occurs with the error "invalid command code" when the deployment scripts are run on macOS. This will be resolved in a future iFix.
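The "invalid command code" message is characteristic of the BSD sed shipped with macOS, which rejects the GNU sed `-i` syntax used by the scripts. Until an iFix is available, one unofficial workaround (an assumption, not an IBM-provided fix) is to install GNU sed via Homebrew and put it first on the PATH before running the scripts:

```shell
# Install GNU sed and prefer it over the BSD sed bundled with macOS:
#   brew install gnu-sed
#   export PATH="$(brew --prefix)/opt/gnu-sed/libexec/gnubin:$PATH"
# The incompatibility in a nutshell: GNU sed accepts `sed -i "s/old/new/" file`,
# while BSD sed requires a backup suffix after -i (`sed -i '' "s/old/new/" file`).
# A form that works with both seds writes to a new file instead of editing in place:
printf 'release: 23.0.2\n' > /tmp/cr-snippet.txt
sed "s/23.0.2/24.0.0/" /tmp/cr-snippet.txt > /tmp/cr-snippet.new
cat /tmp/cr-snippet.new   # prints: release: 24.0.0
```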
 
During an upgrade from 22.0.2-IF006 to 24.0.0-IF001, the Identity Management (IM) service can get stuck when the Cloud Pak foundational services upgrade is from cluster-scoped to namespace-scoped.
Symptom: The zenService gets stuck at 71%, and the Zen operator log reports that the ibm-iam-request operandrequest is not ready.

Cause: The common-service-db EDB Cluster CR has not been created yet because an annotation is missing. The annotation is added to the secret by the create-postgres-license-config job, but at the same time the BTS operator reconciles the same secret and removes the annotation.

Solution:
  • Switch to the namespace of the CP4BA deployment that you are upgrading.

    export NAMESPACE=<namespace>
    oc project $NAMESPACE
  • Scale down the Business Teams Service (BTS) operator to 0.

    oc scale deployment ibm-bts-operator-controller-manager -n $NAMESPACE --replicas=0
  • Delete the Postgres batch job by running the following command.

    oc delete job create-postgres-license-config -n $NAMESPACE
  • Delete the ODLM operand pod by running the following command.

    oc delete pod -l name=operand-deployment-lifecycle-manager -n $NAMESPACE
  • Check that the Postgres job is restarted by running the following command.

    oc get pod -l job-name=create-postgres-license-config -n $NAMESPACE
  • Check the status of the Cloud Pak foundational services databases by running the following command.

    oc get cluster -n $NAMESPACE
  • Scale the Business Teams Service (BTS) operator back up to 1.

    oc scale deployment ibm-bts-operator-controller-manager -n $NAMESPACE --replicas=1
24.0.0
After a fresh installation of, or upgrade to, CP4BA 24.0.0.x in an All namespaces deployment with multiple CP4BA instances, BAI in the instances deployed after the first one may require a restart to function normally.

Symptom:

After a fresh installation of, or upgrade to, CP4BA 24.0.0.x in an All namespaces deployment with multiple CP4BA instances that include BAI, the BAI instances deployed after the first one may not work as expected. The error in the "xxx-bai-content-xxx" pod resembles:

ERROR    Run request failed.
{
  "message": "Cannot read properties of undefined (reading 'get')",
  "status": 500,
  "name": "UndefinedError",

Cause:

The BAI pods need a restart to read properties in an All namespaces deployment.

Solution:

  1. Restart (that is, delete) the "xxx-insights-engine-management-xxx" pod in the namespace of the failing BAI instance.
  2. Restart (that is, delete) the "xxx-bai-navigator-xxx" and "xxx-bai-content-xxx" jobs in the namespace of the failing BAI instance.

Only perform this workaround for the namespaces where the BAI is not working as expected.

The restart steps can be performed in the OCP console or on the command-line using commands like the following:

(Replace <ns> below with the namespace of the CP4BA instance where BAI is not working as expected)

oc -n <ns> get pods | grep insights-engine-management | awk '{print $1}' | xargs oc -n <ns> delete pod --force --grace-period=0
oc -n <ns> get job | grep bai-navigator | awk '{print $1}' | xargs oc -n <ns> delete job
oc -n <ns> get job | grep bai-content | awk '{print $1}' | xargs oc -n <ns> delete job

 

24.0.0
When upgrading CP4BA 21.0.3 IF031 or later to CP4BA 24.0.0-IF004 (direct upgrade), the zenService upgrade sometimes fails and gets stuck at 71%.
Symptom: The command ./cp4a-deployment.sh -m upgradeDeploymentStatus -n <namespace> shows the zenService upgrade failing at 71%.

Cause: On some OpenShift clusters, a timing issue causes the Common Services operator to fail to update the ODLM operandRegistry with fallbackChannels for the Operand Deployment Lifecycle Manager (ODLM) operator.

Solution:
  • Log in to the OpenShift administration console.
  • Expand Home -> Search -> Resources and select "OperandRegistry".
  • Change the project to the CP4BA namespace when upgrading "shared2dedicated", or to "openshift-operators" when upgrading "clusterscope2clusterscope".
  • Delete the operandRegistry resource "common-service".
  • Expand Workloads -> Pods.
  • Delete the ibm-common-service-operator pod.
  • Wait for the ibm-common-service-operator pod to restart; it automatically re-creates the deleted operandRegistry for ODLM.
  • Check the generated operandRegistry for the fallbackChannels metadata.
  • The zenService upgrade then progresses with no issues.
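The console steps can also be sketched as CLI commands (an unofficial equivalent; the pod label selector is an assumption, so review each command before running it):

```shell
# Substitute <ns>: the CP4BA namespace for "shared2dedicated", or
# openshift-operators for "clusterscope2clusterscope".
cat <<'EOF'
oc delete operandregistry common-service -n <ns>
oc delete pod -l name=ibm-common-service-operator -n <ns>
oc get operandregistry common-service -n <ns> -o yaml
EOF
```

The last command is for checking that the re-created registry contains the fallbackChannels metadata.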
24.0.0
When upgrading from any version of CP4BA that includes an upgrade of BTS in the 3.35 stream.
Symptom:
During an upgrade of BTS in the 3.35 channel, the common service operator may not get upgraded, with an error similar to:
constraints not satisfiable: no operators found from catalog ibm-bts-operator-catalog-v3-35-2 in namespace production referenced by subscription ibm-bts-operator, subscription ibm-bts-operator exists
This error is intermittent and not always encountered.
Solution:
Restarting the ODLM pod automatically unblocks the upgrade process of the Common Services operators.
24.0.0/24.0.1
When upgrading CP4BA 21.0.3 IF031 to CP4BA 24.0.0-IF006 or later (direct upgrade), the zenService upgrade fails and gets stuck at 4%.
Symptom: The command ./cp4a-deployment.sh -m upgradeDeploymentStatus -n <namespace> shows the zenService upgrade failing at 4%.

Cause: This is caused by an issue with the existing PostgreSQL 1.18.1 version that was part of 21.0.3-IF031, which prevents the Operand Deployment Lifecycle Manager (ODLM) from switching the PostgreSQL channel automatically. This is not observed when upgrading from 21.0.3-IF039 to 24.0.0-IF006.

Solution:
  • Log in to the OpenShift administration console.
  • Expand Home -> Search -> Resources and select "Subscriptions".
  • Change the project to the CP4BA namespace when upgrading "shared2dedicated".
  • Locate the PostgreSQL subscription.
  • Click the pencil icon next to "Update Channel".
  • Select the channel stable-v1.25.
  • Wait for the operator to switch the channel and become available. If this takes a long time, restart the Operand Deployment Lifecycle Manager (ODLM).
  • Wait for the zenService upgrade to finish.
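From the CLI, the same channel switch can be sketched as a subscription patch (an unofficial equivalent of the console steps; the subscription name varies by installation, so it is listed first):

```shell
# Substitute <ns> with the project used in the console steps above.
cat <<'EOF'
oc get subscription -n <ns> | grep -i postgresql
oc patch subscription <postgresql-subscription-name> -n <ns> --type merge -p '{"spec":{"channel":"stable-v1.25"}}'
EOF
```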
24.0.0
When upgrading CP4BA 22.0.0 IF006 to CP4BA 24.0.0-IF006 or later (direct upgrade), the zenService upgrade fails and gets stuck at either 4% or 37%.
Symptom: The command ./cp4a-deployment.sh -m upgradeDeploymentStatus -n <namespace> shows the zenService upgrade failing at 4% or 37%, and the zenService status shows:
"supported values: \\"velero\\"","reason":"Invalid","details":{"name":"zen-metastore-edb","group":"postgresql.k8s.enterprisedb.io","kind":"Cluster","causes":[{"reason":"FieldValueNotSupported","message":"Unsupported value: \\"external-backup-adapter-cluster\\": supported values: \\"velero\\"","field":"metadata.annotations.k8s.enterprisedb.io/addons"}]},"code":422"

Cause: This is caused by an issue with the existing PostgreSQL 1.18.1 version, which prevents the Operand Deployment Lifecycle Manager (ODLM) from switching the PostgreSQL channel automatically.
Solution:
  • Log in to the OpenShift administration console.
  • Expand Home -> Search -> Resources and select "Subscriptions".
  • Change the project to the CP4BA namespace.
  • Locate the PostgreSQL subscription.
  • Click the pencil icon next to "Update Channel".
  • Select the channel stable-v1.25.
  • Wait for the operator to switch the channel and become available. If this takes a long time, restart the Operand Deployment Lifecycle Manager (ODLM).
  • Wait for the zenService upgrade to finish.
 
When upgrading CP4BA 21.0.3 IF031 to CP4BA 24.0.0-IF006 or later (direct upgrade), the zenService fails to upgrade to the expected version 5.1.16.
Symptom: The command ./cp4a-deployment.sh -m upgradeDeploymentStatus -n <namespace> shows the zenService stuck during its upgrade to 5.1.16. The operand request operandrequest-kafka-iaf-system is also stuck in Installing state, along with other failed operandrequests.
Cause: The ibm-events-operator subscription is stuck in an unhealthy state because its referenced InstallPlan is missing.
Solution:
  • Log in to the OpenShift administration console.
  • Expand Home -> Search -> Resources and select "Subscriptions".
  • Change the project to the CP4BA namespace.
  • Locate the ibm-events-operator subscription.
  • Find the ClusterServiceVersion that is associated with this subscription; it is listed under the installed version.
  • Delete the ClusterServiceVersion associated with the ibm-events-operator subscription.
  • Navigate back to the ibm-events-operator subscription page, click the Actions dropdown on the right side, and click Delete.
  • Wait for a new ibm-events-operator subscription to be created; shortly afterward, the zenService upgrade finishes successfully. If this takes a long time, restart the Operand Deployment Lifecycle Manager (ODLM).
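A CLI sketch of the same sequence (unofficial; the CSV name varies by version, so it is looked up from the subscription first; review before running):

```shell
# Substitute <ns> with the CP4BA namespace.
cat <<'EOF'
oc get subscription ibm-events-operator -n <ns> -o jsonpath='{.status.installedCSV}'
oc delete csv <installed-csv-name> -n <ns>
oc delete subscription ibm-events-operator -n <ns>
EOF
```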
 
 
Limitation | Description | Version
[24.0.1 IF005] CPDS initialization fails with an ADP Starter deployment.
Symptom: When deploying a Starter deployment with Document Processing selected, the user is blocked from completing ADP functionality.

The following errors can be observed in the Content Operator logs:

TASK [FNCM : Initialize Object Store for Capture.] ************ {"code": 500, "message": "Internal Runtime Server Error::com.filenet.api.exception.EngineRuntimeException FNRCE0023 ", "messageId": "FNRDD0501E"}}

TASK [FNCM : Upgrade ADP Teams] ************* {"code": 403, "message": "You have insufficient permission to complete this operation.", "messageId": "FNRDD1005E"}}

Cause: Initialization of the ADP object store (DEVOS1) fails.

Solution: A fix will come in a future iFix.
 
24.0.1-IF005
[24.0.1 IF005] When adding ADP users through the BTS Teamserver UI with a TDS/SDS LDAP, a UUID is displayed.
Symptom: Starting with 24.0.1 IF005, when adding users from the BTS Teamserver UI using a TDS/SDS LDAP type, the username is displayed as a UUID instead of the display name.

Cause: BTS 3.35.5 (part of the 24.0.1 IF005 release) updates the UI to display the SCIM ID instead of the display name.

Solution: The BTS team will resolve this in a future fix. You can use the "Search" option in Teams to search for your users by username; the corresponding UUID is displayed. This is a display-only issue; ADP functionality remains intact for users assigned to their teams.
 
24.0.1-IF005
If you use an IPv6 address for the Db2 database with IBM Automation Document Processing, the CP4BA configuration (CR) must specify a hostname under the "dc_ca_datasource" section.
Symptom: If the CP4BA configuration uses an IPv6 address for the Db2 database, business automations for document processing cannot be created.

Cause: Document processing cannot connect to the Db2 server by using the IPv6 address.

Solution: In the CP4BA configuration (CR), under the section "spec.datasource_configuration.dc_ca_datasource", specify the hostname (instead of the IPv6 address) for the Db2 server in the property "database_servername".

If the Db2 server is not resolvable by a hostname, specify a placeholder URL-safe name in "database_servername" (such as database_servername: "db2-ipv6.myhost"), and specify the IPv6 address in the property "database_ip".

Important: This applies only to the section "spec.datasource_configuration.dc_ca_datasource" of the CR. After applying the change, wait at least 20 minutes for the operator to apply the changes.
 
For example:
datasource_configuration:
  dc_ca_datasource: 
    database_servername: "db2-ipv6.myhost" 
    database_ip: "[2620:1f7:853:a00f:2022:aff:fe16:ca59]"
24.0.0
Limitation | Description | Version
Spurious error in the IBM Automation Document Processing daily cronjob.
Symptom: You may see the following errors in the ADP daily cleanup cronjob pods:
============
2024-06-14 01:30:05.301 |   ERROR | proj2 | ont1 | <transID> | cronjob | ibm_ca.common.database.db | Exception happened when executing the query: CronjobQueries.InsertUsageData
Traceback (most recent call last):
  File "/app/ibm_ca/common/database/db.py", line 59, in execute_single
  File "/app/ibm_ca/common/database/handlers/pg.py", line 99, in execute
  File "/app/.venv/lib/python3.11/site-packages/psycopg/cursor.py", line 732, in execute
    raise ex.with_traceback(None)
psycopg.ProgrammingError: the query has 14 placeholders but 17 parameters were passed
2024-06-14 01:30:05.303 |   ERROR | proj2 | ont1 | <transID> | cronjob | ibm_ca.common.database.db | Unexpected error occurred when running query.
Traceback (most recent call last):
  File "/app/ibm_ca/common/database/db.py", line 197, in db_query
  File "/app/ibm_ca/cronjob/handlers/aggregate_usage.py", line 45, in handler
  File "/app/ibm_ca/common/database/db.py", line 142, in query_runner_wrapper
  File "/app/ibm_ca/common/database/db.py", line 59, in execute_single
  File "/app/ibm_ca/common/database/handlers/pg.py", line 99, in execute
  File "/app/.venv/lib/python3.11/site-packages/psycopg/cursor.py", line 732, in execute
    raise ex.with_traceback(None)
psycopg.ProgrammingError: the query has 14 placeholders but 17 parameters were passed
2024-06-14 01:30:05.304 |   ERROR | proj2 | ont1 | <transID> | cronjob | ibm_ca.cronjob.jobs_pool | Error occurred in process pid=43 when running handler aggregate_usage on project project_guid='85163afd-f351-45b7-8688-2e51b39d4096', bas_id='AD24000'
Traceback (most recent call last):
  File "/app/ibm_ca/cronjob/jobs_pool.py", line 36, in _runner
  File "/app/ibm_ca/cronjob/handlers/aggregate_usage.py", line 45, in handler
  File "/app/ibm_ca/common/database/db.py", line 142, in query_runner_wrapper
  File "/app/ibm_ca/common/database/db.py", line 59, in execute_single
  File "/app/ibm_ca/common/database/handlers/pg.py", line 99, in execute
  File "/app/.venv/lib/python3.11/site-packages/psycopg/cursor.py", line 732, in execute
    raise ex.with_traceback(None)
psycopg.ProgrammingError: the query has 14 placeholders but 17 parameters were passed
2024-06-14 01:30:05.297 |   ERROR | proj1 | ont1 | <transID> | cronjob | ibm_ca.common.database.db | Exception happened when executing the query: CronjobQueries.InsertUsageData
Traceback (most recent call last):
  File "/app/ibm_ca/common/database/db.py", line 59, in execute_single
  File "/app/ibm_ca/common/database/handlers/pg.py", line 99, in execute
  File "/app/.venv/lib/python3.11/site-packages/psycopg/cursor.py", line 732, in execute
    raise ex.with_traceback(None)
psycopg.ProgrammingError: the query has 14 placeholders but 17 parameters were passed
2024-06-14 01:30:05.306 |   ERROR | proj1 | ont1 | <transID> | cronjob | ibm_ca.common.database.db | Unexpected error occurred when running query.
Traceback (most recent call last):
  File "/app/ibm_ca/common/database/db.py", line 197, in db_query
  File "/app/ibm_ca/cronjob/handlers/aggregate_usage.py", line 45, in handler
  File "/app/ibm_ca/common/database/db.py", line 142, in query_runner_wrapper
  File "/app/ibm_ca/common/database/db.py", line 59, in execute_single
  File "/app/ibm_ca/common/database/handlers/pg.py", line 99, in execute
  File "/app/.venv/lib/python3.11/site-packages/psycopg/cursor.py", line 732, in execute
    raise ex.with_traceback(None)
psycopg.ProgrammingError: the query has 14 placeholders but 17 parameters were passed
2024-06-14 01:30:05.307 |   ERROR | proj1 | ont1 | <transID> | cronjob | ibm_ca.cronjob.jobs_pool | Error occurred in process pid=40 when running handler aggregate_usage on project project_guid='06db3022-c31d-450b-bca6-67ed033a8219', bas_id='AD24000'
Traceback (most recent call last):
  File "/app/ibm_ca/cronjob/jobs_pool.py", line 36, in _runner
  File "/app/ibm_ca/cronjob/handlers/aggregate_usage.py", line 45, in handler
  File "/app/ibm_ca/common/database/db.py", line 142, in query_runner_wrapper
  File "/app/ibm_ca/common/database/db.py", line 59, in execute_single
  File "/app/ibm_ca/common/database/handlers/pg.py", line 99, in execute
  File "/app/.venv/lib/python3.11/site-packages/psycopg/cursor.py", line 732, in execute
    raise ex.with_traceback(None)
psycopg.ProgrammingError: the query has 14 placeholders but 17 parameters were passed
2024-06-14 01:30:06.415 |    INFO | <tenantId> | <ontology> | <transID> | <task> | ibm_ca.cronjob.jobs_pool | Jobs on 4 projects have completed. Time elapsed: 1.300
============

Cause: A mismatch between the number of query placeholders and parameters in the usage-aggregation query. As long as the cron job has the status Completed, this spurious error can be ignored.

Solution: This will be fixed in 24.0.0-IF001.
24.0.0
 
Limitation | Description | Version
None reported
Symptom:
 

Cause:


Solution:
24.0.0

IBM Business Automation Navigator

Limitation | Description | Version
The Navigator legacy route URL doesn't work.
Symptom: A "The repository is not available" error dialog appears when trying to log in to Navigator through the legacy route.

Cause: Navigator can't connect to its FileNet Content Manager repository through the legacy route, and the desktop can't load.

Solution:
This is fixed in 24.0.0-IF001.
For 24.0.0, as a workaround, set the CR parameter "shared_configuration.sc_skip_ldap_config: false", or alternatively use the Zen front-door route instead.
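In the CR, the workaround parameter sits under shared_configuration, for example:

```yaml
spec:
  shared_configuration:
    sc_skip_ldap_config: false
```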
24.0.0
The Content Manager OnDemand line data HTML viewer cannot display a document if CSP is enabled.
Symptom: The following error appears in the browser console when a document is opened in the line data HTML viewer:
"Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'sha256-HzHn89hNSyeDbBa3HESt9zJEhKVtyNOj1ajlidQoh0I=' 'nonce-aL8Yn6omzDTVHQLLCDbskA==' 'self' https:". Either the 'unsafe-inline' keyword, a hash ('sha256-wHZ2dTuzXS5PxiAMtkbNDB+uw7UH4a6AuKZuVZugiTE='), or a nonce ('nonce-...') is required to enable inline execution."

Cause: The line data HTML viewer HTML file contains JavaScript code.

Solution:
This will be fixed in 24.0.0-IF001.
 

IBM Business Automation Workflow 

Limitation | Description | Version
If you use the case feature of Business Automation Workflow, BAW and CPE "replica" sizes must be set to 1.

Symptom: There is a potential risk of a database deadlock on the Content Platform Engine side, and this deadlock causes the Business Automation Workflow and Content Platform Engine integration code to fail.

Solution:  Do not set the value for the BAW and CPE "replica" size to more than 1 in the CP4BA custom resource (CR) file. This issue is fixed in CP4BA 24.0.0 IF007.

24.0.0
When deploying CP4BA 24.0.0 using EDB, the deployment may fail if the database server name and database name are not set in the CR.

Symptom: The workflow operator fails with the error:

spec.database.server_name and spec.database.database_name must be specified, or spec.database.jdbc_url must be specified.

Cause: The workflow operator validates the database input even when EDB Postgres is used.

Solution: Set non-empty values for baw_configuration[x].database.server_name and baw_configuration[x].database.database_name.
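For example, in the CR (the instance name and database values are placeholders; per the solution above, any non-empty values satisfy the validation when EDB Postgres is used):

```yaml
spec:
  baw_configuration:
    - name: bawins1                  # placeholder instance name
      database:
        server_name: "postgres-edb"  # placeholder; must be non-empty
        database_name: "bawdb"       # placeholder; must be non-empty
```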

24.0.0
After upgrading CP4BA 23.0.2 to CP4BA 24.0.0, the nginx pods are in a crash status.
Symptom: The nginx pods are in a crash status, for example:
ibm-nginx-686b844d57-fpxcw         1/2  CrashLoopBackOff  15 (77s ago)     57m
ibm-nginx-686b844d57-hcr5z         1/2  CrashLoopBackOff  15 (62s ago)     57m
ibm-nginx-686b844d57-nhd54         1/2  CrashLoopBackOff  15 (119s ago)    56m
ibm-nginx-tester-c648f79c4-4lg69   1/2  CrashLoopBackOff  14 (4m44s ago)   56m

Cause: Duplicate ZenExtension CRs are created for the same workflow instance.

Solution:
  1. Get the ZenExtensions: oc get zenextension | grep baw-server-zen-extension
  2. Delete the ZenExtension named <Namespace>-<CRName>-<BAWInstanceName>-baw-server-zen-extension
  3. Restart the zen-watcher pod.
  4. Restart the nginx pods.
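The steps above can be sketched as commands (the ZenExtension name follows the pattern from step 2; the pod grep patterns are assumptions based on the pod names shown in the symptom, so review before running):

```shell
# Substitute <ns> and the <...> name segments for your deployment.
cat <<'EOF'
oc -n <ns> get zenextension | grep baw-server-zen-extension
oc -n <ns> delete zenextension <Namespace>-<CRName>-<BAWInstanceName>-baw-server-zen-extension
oc -n <ns> get pods | grep zen-watcher | awk '{print $1}' | xargs oc -n <ns> delete pod
oc -n <ns> get pods | grep ibm-nginx | awk '{print $1}' | xargs oc -n <ns> delete pod
EOF
```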
 
 
During an upgrade from 22.0.2 to 24.0.0-IF001, the Process Federation Server is not automatically removed for a Workflow Authoring deployment.

Symptom: Process Federation Server pods still exist after upgrading from 22.0.2 to 24.0.0-IF001 for a Workflow Authoring deployment.

Cause: Workflow Authoring uses an embedded Process Federation Server, so the independent Process Federation Server deployment should be removed.

Solution: Run the following command to remove the Process Federation Server:

oc delete PFS <pfs-cr-name>

Where <pfs-cr-name> is the <meta.name> of your ICP4ACluster deployment.

 
After upgrading from 21.0.3 IF034 to 24.0.0 IF001, the Business Automation Workflow instance throws exceptions in the logs.

Symptom: In Business Automation Workflow Runtime/Authoring instances, the instance pod throws exceptions such as:
============
17T08:09:14.194+0200","module":"com.ibm.oi.icm.event.emitter.IcmOiEmitter","loglevel":"SEVERE","ibm_sequence":"1726553354194_00000000003AF","ibm_exceptionName":"java.lang.RuntimeException","ibm_stackTrace":"java.lang.RuntimeException: com.filenet.api.exception.EngineRuntimeException: FNRCE0051E: E_OBJECT_NOT_FOUND: The requested item was not found. Object store FNTOSDS not found. errorStack={\n\tat com.filenet.engine.gcd.GCDHelper.getObjectStoreId(GCDHelper.java:329)\n\tat com.filenet.engine.retrieve.IndependentClassRetriever.getObject(IndependentClassRetriever.java:558)\n\tat
============

Cause: The upgrade script sets an incorrect value in the CR field spec.workflow_authoring_configuration.case.event_emitter.[0].tos_name: it sets the data source name instead of the Target Object Store name.

Solution: In the custom resource (CR), set spec.workflow_authoring_configuration.case.event_emitter.[0].tos_name to the correct Target Object Store name.
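The field can be updated non-interactively with a JSON patch; the sketch below only prints the command. The namespace, CR name, and Target Object Store name are hypothetical placeholders.

```shell
#!/bin/sh
# Dry-run sketch: build and print an oc patch command that replaces the
# tos_name entry with the Target Object Store name. All names are placeholders.
NS="cp4ba"
CR_NAME="icp4adeploy"
TOS_NAME="TARGET1"   # placeholder Target Object Store name

PATCH='[{"op":"replace","path":"/spec/workflow_authoring_configuration/case/event_emitter/0/tos_name","value":"'"${TOS_NAME}"'"}]'
echo "oc patch icp4acluster ${CR_NAME} -n ${NS} --type=json -p '${PATCH}'"
```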
24.0.0 IF001

IBM Business Automation Workflow Process Service

LimitationDescription
Version
When upgrading CP4BA 21.0.3 IF033 or later to CP4BA 24.0.0 with embedded EDB, IBM Business Automation Workflow Process Service cannot be upgraded successfully.

Symptom:  The WfPS operator fails with error:
Can't change image name and configuration at the same time. There are differences in PostgreSQL configuration parameters: {\"ssl_min_protocol_version\":\"TLSv1.2\",\"TLSv1.3\"]}"}.

 

Cause:  WfPS creates the embedded EDB with the new parameter "ssl_min_protocol_version: TLSv1.2" as of 21.0.3-IF033.

Solution:  Run the command "oc edit cluster <edb-instance-name>" and change "ssl_min_protocol_version: TLSv1.2" to "ssl_min_protocol_version: TLSv1.3".
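As an alternative to an interactive edit, the same change can be sketched as a merge patch, assuming the parameter lives under spec.postgresql.parameters as in EDB Cluster resources. The script only prints the command; the instance name and namespace are hypothetical placeholders.

```shell
#!/bin/sh
# Dry-run sketch: print a merge patch that raises ssl_min_protocol_version to
# TLSv1.3 on the embedded EDB Cluster. The field path is an assumption based
# on the EDB Cluster CRD; verify it against your cluster spec before use.
NS="cp4ba"
EDB_CLUSTER="postgres-cp4ba"   # placeholder <edb-instance-name>

PATCH='{"spec":{"postgresql":{"parameters":{"ssl_min_protocol_version":"TLSv1.3"}}}}'
echo "oc patch cluster ${EDB_CLUSTER} -n ${NS} --type=merge -p '${PATCH}'"
```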

24.0.0
LimitationDescription
Version

None reported

24.0.0
LimitationDescription
Version
The Business Teams Service/Operator pods do not start after upgrading from 21.0.3 to 24.0.0
Symptom:
After upgrading from 21.0.3 to 24.0.0, the Business Teams Service/Operator pods do not start.

Cause:
  • In some cases, if the Business Teams Service operator version 3.24 or earlier is installed in several namespaces in parallel, the Business Teams Service does not start.
  • The upgrade script (cp4a-deployment.sh) scales down the ibm-bts-operator-controller-manager operator in the ibm-common-services project when it runs in [-m upgradeOperator] mode to upgrade CP4BA from cluster-scoped to namespace-scoped or from all-namespaces to all-namespaces.

Solution:
  • Install the latest interim fix for CP4BA 21.0.3. The Business Teams Service version must be 3.33.1 before upgrading to 24.0.0.
  • Start the ibm-bts-operator-controller-manager operator in the ibm-common-services project manually after running ./cp4a-deployment.sh -m upgradeDeploymentStatus, using the following command:
       oc scale --replicas=1 deployment ibm-bts-operator-controller-manager -n ibm-common-services
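The restart can be sketched as a dry-run script that prints the scale-up command followed by a rollout wait. The namespace and deployment name come from the article; nothing is executed against a cluster.

```shell
#!/bin/sh
# Dry-run sketch: scale the BTS operator back up, then wait for the rollout.
NS="ibm-common-services"
DEPLOY="ibm-bts-operator-controller-manager"

echo "oc scale --replicas=1 deployment ${DEPLOY} -n ${NS}"
echo "oc rollout status deployment/${DEPLOY} -n ${NS} --timeout=300s"
```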
 
24.0.0
 

IBM Automation Workstream Services

LimitationDescription
Version
When deploying CP4BA 24.0.0 using EDB, the deployment may fail if the database server name and database name are not set in the CR.

Symptom: The workflow operator fails with error:

spec.database.server_name and spec.database.database_name must be specified, or spec.database.jdbc_url must be specified.
 

Cause:  The workflow operator validates the database input even when EDB Postgres is used.

Solution:  Set non-empty values for baw_configuration[x].database.server_name and baw_configuration[x].database.database_name.
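A minimal sketch of the relevant CR fragment, with hypothetical placeholder names; both fields must be non-empty (or spec.database.jdbc_url must be provided instead):

```yaml
# Hypothetical excerpt of an ICP4ACluster CR; all names are placeholders.
spec:
  baw_configuration:
    - name: bawins1
      database:
        server_name: "postgres-cp4ba-rw"   # placeholder database server/service name
        database_name: "bawdb"             # placeholder database name
```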

24.0.0
After upgrading CP4BA 23.0.2 to CP4BA 24.0.0, nginx pods are in CrashLoopBackOff status.

Symptom: Nginx pods are in CrashLoopBackOff status, for example:
ibm-nginx-686b844d57-fpxcw         1/2  CrashLoopBackOff  15 (77s ago)     57m
ibm-nginx-686b844d57-hcr5z         1/2  CrashLoopBackOff  15 (62s ago)     57m
ibm-nginx-686b844d57-nhd54         1/2  CrashLoopBackOff  15 (119s ago)    56m
ibm-nginx-tester-c648f79c4-4lg69   1/2  CrashLoopBackOff  14 (4m44s ago)   56m
 
Cause:  Duplicate ZenExtension CRs are created for the same workflow instance.
 
Solution: 
  1. List the ZenExtension resources: oc get zenextension | grep baw-server-zen-extension
  2. Delete the duplicate ZenExtension named <Namespace>-<CRName>-<BAWInstanceName>-baw-server-zen-extension
  3. Restart zen-watcher pod
  4. Restart nginx pods
 
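The four steps above can be sketched as a dry-run script that only prints the commands. The namespace, CR name, BAW instance name, and the pod label selectors are hypothetical placeholders; check the actual pod labels in your cluster before deleting pods.

```shell
#!/bin/sh
# Dry-run sketch of the documented steps; nothing here touches a cluster.
NS="cp4ba"               # placeholder namespace
CR_NAME="icp4adeploy"    # placeholder CR name
BAW_INSTANCE="bawins1"   # placeholder BAW instance name

# Name of the duplicate ZenExtension, following the documented pattern.
ZEN_EXT="${NS}-${CR_NAME}-${BAW_INSTANCE}-baw-server-zen-extension"

echo "oc get zenextension -n ${NS} | grep baw-server-zen-extension"
echo "oc delete zenextension ${ZEN_EXT} -n ${NS}"
# Label selectors below are assumptions, not documented values.
echo "oc delete pod -l component=zen-watcher -n ${NS}"
echo "oc delete pod -l app=ibm-nginx -n ${NS}"
```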
 

IBM Automation Decision Services

LimitationDescription
Version
 
None reported
  
 

IBM Content Collector for SAP Applications

LimitationDescription
Version
None reported  

 

IBM Enterprise Records

LimitationDescription
Version
 
You cannot connect to the IBM Enterprise Records (IER) desktop.
Symptom:
When users try to access the IER desktop, they get the following error:
Cannot connect to the web client
The logs show the following exception:
java.lang.NoClassDefFoundError: org.apache.xerces.xni.parser.XMLEntityResolver


Cause:
After IERApplicationPlugin.jar is moved to Java 17, it cannot resolve the path to XercesImpl.jar in the IBM Content Navigator (ICN) container.


Solution:
  1. Download the ier-library.xml file from https://github.com/ibm-ecm/ier-samples. 
  2. Put the file into the ICN container in the following path:
    /opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides
  3. In the same path, create a folder named ier-jars.
  4. Copy and paste xercesImpl.jar from <ier_on-prem_installation>\API\JARM to the ier-jars folder.
  5. Restart the ICN pod and open the IER desktop.
You should now have access to the desktop.
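Steps 2-5 can be sketched with oc cp and oc exec against the ICN pod; the script below only prints the commands. The pod name, namespace, and local file locations are hypothetical placeholders, and the ier-library.xml and xercesImpl.jar files are assumed to be in the current directory.

```shell
#!/bin/sh
# Dry-run sketch of copying the IER files into the ICN container.
NS="cp4ba"                    # placeholder namespace
ICN_POD="navigator-deploy-0"  # placeholder ICN pod name
OVERRIDES="/opt/ibm/wlp/usr/servers/defaultServer/configDropins/overrides"

echo "oc cp ier-library.xml ${NS}/${ICN_POD}:${OVERRIDES}/ier-library.xml"
echo "oc exec -n ${NS} ${ICN_POD} -- mkdir -p ${OVERRIDES}/ier-jars"
echo "oc cp xercesImpl.jar ${NS}/${ICN_POD}:${OVERRIDES}/ier-jars/xercesImpl.jar"
echo "oc delete pod ${ICN_POD} -n ${NS}"   # restart the ICN pod, per step 5
```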
24.0.0
 
 


Document Information

Modified date:
03 November 2025

UID

ibm17138089