IBM Support

IBM Cloud Pak for Business Automation CP4BA-22.0.1 Known Limitations

General Page

This web page provides a list of known limitations in IBM Cloud Pak for Business Automation CP4BA-22.0.1. Workarounds are provided where possible.

The limitations shown on this page are subject to updates between releases. You can also find limitations listed in IBM Documentation:
* IBM Cloud Pak for Business Automation CP4BA-22.0.1 Known Limitations.

You can find the download document for CP4BA-22.0.1 at:
* IBM Cloud Pak for Business Automation CP4BA-22.0.1 download document.

You can find the fix list document for CP4BA-22.0.1 at:
* IBM Cloud Pak for Business Automation CP4BA-22.0.1 fix list document.

The following table shows the relevant versions of the product in the Version column.

OPER-Operator

Limitation Description
Version
Network Policies from 20.0.3 do not work with 21.0.3 (blocking upgrade at Commerzbank)

Network Policies from 20.0.3 do not work with 21.0.3 (blocking an upgrade at Commerzbank). The behaviour changed due to architectural changes: the introduction of Zen, PostgreSQL, and BTS.

We were able to proceed with the upgrade by applying a wide network policy (allow-all, enabling all traffic again), but we need to investigate further and ensure that the Network Policies required by the customer will be supported in 21.0.3.

A few considerations:
Network policies referenced by the OpenShift docs:
allow-same-namespace
allow-from-openshift-ingress

A deny-all policy was supported in 20.0.3. For 21.0.x and later, if a deny-all policy is in place, the required set of Network Policies is documented here:
https://www.ibm.com/docs/en/cpfs?topic=operator-installing-network-policies
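For illustration, the two policies referenced above typically look like the following. This is a minimal sketch based on the standard Kubernetes NetworkPolicy API and the OpenShift policy-group label; the policy names match the OpenShift docs, but the target namespace and any additional selectors must be adapted to the actual CP4BA/Zen/BTS namespaces:

```yaml
# allow-same-namespace: permit traffic between pods within the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
---
# allow-from-openshift-ingress: permit traffic from the OpenShift ingress controller
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
```

With a deny-all policy in place, policies like these must be created in every namespace that hosts components of the stack.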

The right Network Policies need to be identified holistically, covering Cloud Pak, Common Services (CS), Zen, BTS, PostgreSQL, and OCP (the entire stack used by the customer).

Matching CS defect:

https://github.ibm.com/IBMPrivateCloud/roadmap/issues/54033

The required network policies need to be documented in CP4BA.

CP4BA-22.0.1, CP4BA-21.0.3-Maintenance

ADP-ContentAnalyzer

Limitation Description
Version
Deploying a second ADP project fails with an expired ACA certificate in the all-namespaces scenario

We are using an environment with a shared cp4a operator (openshift-operators). When we try to create a second ADP project in a second namespace, the operator fails at the CA Deployment step due to an ACA certificate issue. The first ADP namespace is created without problems.

We reproduced this issue twice. Derek tried to help us work around it (DBACLD-50638) by increasing CPU and memory, which did not work.


=== Operator error


"deploy_failed": "---\nFAIL - CA Deployment failed\nThe conditional check 'aca_root_ca_content.data|length != 0 and \n(aca_ca_crt.stat.exists == false or\n(check_aca_ca_crt is defined and (check_aca_ca_crt.msg == \"The certificate file is expired\"))) \n' failed. The error was: error while evaluating conditional (aca_root_ca_content.data|length != 0 and \n(aca_ca_crt.stat.exists == false or\n(check_aca_ca_crt is defined and (check_aca_ca_crt.msg == \"The certificate file is expired\"))) \n): 'dict object' has no attribute 'msg'\n\nThe error appears to be in '/opt/ansible/roles/CA/tasks/pres/create-certs.yml': line 106, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n block:\n - name: \"Write ACA RootCA key to file in '{{ ca_certs_dir }}' when it is non-exists or expired\"\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n\n"


=== CR status:

FAIL - CA Deployment failed  The conditional check 'aca_root_ca_content.data|length != 0 and   (aca_ca_crt.stat.exists == false or  (check_aca_ca_crt is defined and (check_aca_ca_crt.msg == "The
certificate file is expired")))   ' failed. The error was: error while evaluating conditional
(aca_root_ca_content.data|length != 0 and   (aca_ca_crt.stat.exists == false or  (check_aca_ca_crt is defined and (check_aca_ca_crt.msg == "The
certificate file is expired")))   ): 'dict object' has no attribute 'msg'
The error appears to be in
'/opt/ansible/roles/CA/tasks/pres/create-certs.yml': line 106, column 7,
but may  be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:

block:
- name: "Write ACA RootCA key to file in '{{ ca_certs_dir }}' when it is non-exists or expired"
^ here
We could be wrong, but this one looks like it might be an issue with missing quotes. Always quote template expression brackets when they start a value. For instance:

with_items:
- {{ foo }}

Should be written as:

with_items:
- "{{ foo }}"

reason: Failed
status: 'False'
type: Ready
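The conditional fails because check_aca_ca_crt is defined but has no msg attribute in this code path (for example, when the certificate check was skipped or succeeded). A more defensive form of the condition would test for the attribute itself; the following is a sketch only, not the actual operator fix:

```yaml
# Illustrative rewrite of the failing Ansible conditional: guard the
# .msg access so a skipped/successful check result cannot break evaluation.
when: >
  aca_root_ca_content.data | length != 0 and
  (not aca_ca_crt.stat.exists or
   (check_aca_ca_crt.msg is defined and
    check_aca_ca_crt.msg == "The certificate file is expired"))
```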

CP4BA-22.0.1, CP4BA-21.0.3-Maintenance

BTS

Limitation Description
Version
BTS did not recover successfully: errors "table META_DATA does not exist" in the PostgreSQL logs after recovery from AWS S3

BTS was backed up and restored according to the documentation (https://www.ibm.com/docs/en/cloud-paks/1.0?topic=service-backup-restore), but BTS recovery did not work well.

The procedure is below:

1. On Source Environment:

a. Back up the secret for the BTS database:


kubectl get secret ibm-bts-cnpg-bawent-cp4ba-bts-app -o yaml \
| yq eval 'del(.metadata.annotations, .metadata.creationTimestamp, .metadata.ownerReferences, .metadata.resourceVersion, .metadata.uid)' - > ibm-bts-cnpg-bawent-cp4ba-bts-app.yaml

The secret would look like:

apiVersion: v1
data:
  password: a1Q5b0VYTjBBQlo1Y0M3dUxuWUg4NU5Lb0hQQ2dnVmxuaVlJRVRpaEg5djF3V2JPRGJ5bDB4MHBGWXNabUhUUQ==
  pgpass: aWJtLWJ0cy1jbnBnLWJhd2VudC1jcDRiYS1idHMtcnc6NTQzMjpCVFNEQjpwb3N0Z3Jlc2FkbWluOmtUOW9FWE4wQUJaNWNDN3VMbllIODVOS29IUENnZ1ZsbmlZSUVUaWhIOXYxd1diT0RieWwweDBwRllzWm1IVFEK
  username: cG9zdGdyZXNhZG1pbg==
kind: Secret
metadata:
  labels:
    app.kubernetes.io/component: ibm-bts
    app.kubernetes.io/instance: cp4ba-bts
    app.kubernetes.io/name: ibm-bts-cp4ba-bts
    k8s.enterprisedb.io/cluster: ibm-bts-cnpg-bawent-cp4ba-bts
    k8s.enterprisedb.io/reload: "true"
  name: ibm-bts-cnpg-bawent-cp4ba-bts-app
  namespace: bawent
type: kubernetes.io/basic-auth
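Before re-applying this secret on the target environment, its encoded fields can be sanity-checked locally. A minimal sketch, using the username field from the secret above:

```shell
# Decode the 'username' field of the backed-up BTS secret to verify its content.
echo 'cG9zdGdyZXNhZG1pbg==' | base64 -d
# prints: postgresadmin
```

The decoded value matches the PostgreSQL admin user that appears in the pgpass entry, which confirms the backup captured the right credentials.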

b. Back up the EDB database to an AWS S3 bucket:

Create a secret for the S3 bucket:

kubectl create secret generic s3-credentials \
--from-literal=ACCESS_KEY_ID= \
--from-literal=ACCESS_SECRET_KEY=

Run oc edit bts cp4ba-bts and add the following backup section:

spec:
  backup:
    barmanObjectStore:
      destinationPath: s3://bucket/bts/
      s3Credentials:
        accessKeyId:
          key: ACCESS_KEY_ID
          name: s3-credentials
        secretAccessKey:
          key: ACCESS_SECRET_KEY
          name: s3-credentials

c. Create a Backup resource and apply it:

apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: ibm-bts-cnpg-cp4ba-bts-backup
spec:
  cluster:
    name: ibm-bts-cnpg-cp4ba-bts

d. Check the AWS bucket; the backup files have been uploaded. See backup_on_AWS_s3_01.png and backup_on_AWS_s3_02.png.



2. On Target Environment:

a. Create a secret for the S3 bucket:

kubectl create secret generic s3-credentials \
--from-literal=ACCESS_KEY_ID= \
--from-literal=ACCESS_SECRET_KEY=

b. Deploy the CP4BA environment.

c. Check the status of the CP4BA CR deployment, wait for BTS to become ready, and then add the recovery section:

spec:
  recovery:
    barmanObjectStore:
      destinationPath: s3://rtpsec/test/
      s3Credentials:
        accessKeyId:
          key: ACCESS_KEY_ID
          name: s3-credentials
        secretAccessKey:
          key: ACCESS_SECRET_KEY
          name: s3-credentials

d. Check the BTS pods:

[root@api.rtpocp.cp.fyre.ibm.com backup]# oc get pods |grep bts
ibm-bts-cnpg-bawent-cp4ba-bts-1 1/1 Running 0 3h4m
ibm-bts-cp4ba-bts-316-deployment-6df76c57c4-4q4l5 1/1 Running 0 3h3m
[root@api.rtpocp.cp.fyre.ibm.com backup]#

e. Check the log for pod ibm-bts-cnpg-bawent-cp4ba-bts-1; something looks wrong in PostgreSQL (see ibm-bts-cnpg-bawent-cp4ba-bts-1-postgres.txt):

{"level":"info","ts":1654253559.9495623,"logger":"postgres","msg":"record","logging_pod":"ibm-bts-cnpg-bawent-cp4ba-bts-1","record":{"log_time":"2022-06-03 10:52:39.949 UTC","user_name":"postgresadmin","database_name":"BTSDB","process_id":"317","connection_from":"10.254.19.11:56022","session_id":"6299e7f4.13d","session_line_num":"1","command_tag":"PARSE","session_start_time":"2022-06-03 10:52:36 UTC","virtual_transaction_id":"3/370","transaction_id":"0","error_severity":"ERROR","sql_state_code":"42P01","message":"relation \"meta_data\" does not exist","query":"SELECT VALUE FROM META_DATA WHERE PKEY = $1","query_pos":"19","application_name":"PostgreSQL JDBC Driver","backend_type":"client backend"}}





3. Environment information:

Target environment:
https://console-openshift-console.apps.rtpocp.cp.fyre.ibm.com kubeadmin / Gngt7-U9tV3-6cRSx-vtAHr , project: bawent

Source environment:
https://console-openshift-console.apps.rtpsec.cp.fyre.ibm.com kubeadmin / sExwr-767fi-LsMdm-U4Gma , project: bawent



Customer impact statement

The new feature of backup/recovery configured through the BTS operator is hard to use when following the documentation. Customers will frequently run into failures when trying to use it due to slight incompatibilities between different S3 storage providers.

Remediation plan
* Improve the documentation as soon as possible.
* Improve the feature itself in the next iFix.

CP4BA-22.0.1
One of two BTS-CNPG pods in CrashLoopBackOff under load

While doing load performance testing with 250 concurrent users and a 30-second think time, using the large profile for BTS, 2 of the 4 BTS pods were initially either in CrashLoopBackOff state or in 0/1 state. With the workaround https://jsw.ibm.com/browse/DBACLD-49622?focusedCommentId=8384992&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-8384992, all 4 BTS pods stay live (1/1) under load. However, one of the two BTS-CNPG pods goes into CrashLoopBackOff state under load, and after the load stops, this pod remains in 0/1 state.

image-2022-05-31-08-29-06-283.png

image-2022-05-31-08-30-16-326.png

When monitoring the BTS query request with the longest response time, I found it tried to connect to the database twice and failed, which caused the long response time. How can we resolve this BTS-CNPG issue under load? This workload reflects our ADP large-configuration requirement; with such an unstable BTS-CNPG, customers with the large profile will see degraded performance. We need a fix or workaround for this issue. Thanks.

image-2022-05-31-08-32-45-196.png



Customer impact statement

BTS continues to work correctly as long as at least one BTS-CNPG PostgreSQL DB pod runs. When other BTS-CNPG PostgreSQL DB pods remain in a crashed state for a long time, the overall performance of the Cloud Pak installation may be impacted due to wasted resources.

Remediation plan

The problem was reported to the IBM PostgreSQL team and will be discussed with the external EDB PostgreSQL team at the next EDB PostgreSQL interlock. We will document the steps to recover a crashed PostgreSQL pod when the EDB PostgreSQL team provides such instructions.

CP4BA-22.0.1
IBM Business Automation Workflow
Limitation Description
Version
If you use the case feature of Business Automation Workflow, BAW and CPE "replica" sizes must be set to 1.
Symptom:
There is a potential risk of a database deadlock on the Content Platform Engine side, and this deadlock causes the Business Automation Workflow and Content Platform Engine integration code to fail.
Solution:
Do not set the BAW and CPE "replica" size to more than 1 in the CP4BA custom resource (CR) file. The issue is fixed in CP4BA 24.0.0 IF007.
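In the CP4BA CR, that corresponds to fragments along these lines. This is a sketch only; the field names baw_configuration, replicas, and replica_count are assumptions based on typical CP4BA CRs and must be checked against the actual CR schema for your release:

```yaml
spec:
  baw_configuration:
    - name: instance1   # illustrative instance name
      replicas: 1       # keep BAW at a single replica when using the case feature
  ecm_configuration:
    cpe:
      replica_count: 1  # keep CPE at a single replica as well
```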
CP4BA-22.0.1

[{"Type":"MASTER","Line of Business":{"code":"LOB76","label":"Data Platform"},"Business Unit":{"code":"BU048","label":"IBM Software"},"Product":{"code":"SSBYVB","label":"IBM Cloud Pak for Business Automation"},"ARM Category":[{"code":"a8m0z0000001iUCAAY","label":"Operate-\u003EUMS Install\\Upgrade\\Setup"}],"ARM Case Number":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"CP4BA-22.0.1"}]

Document Information

Modified date:
03 November 2025

UID

ibm16837187