Logging considerations

By default, logging of the capabilities is enabled in your cluster and the logs are stored in a dedicated persistent data store. You can also collect and forward the standard output (stdout) logs from the containers to help you troubleshoot issues and improve the health and performance of your deployment.

The following information must be considered before you install Cloud Pak for Business Automation.

Logs and log persistence

All containerized applications write to the standard output and standard error streams, which can be viewed at the pod level by using oc logs on the command line. To view them in the Red Hat OpenShift Container Platform (OCP) web console, click Workloads > Pods > <pod name> > Logs.
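For example, the following commands show common ways to retrieve container logs with the oc CLI. The pod and container names are placeholders; these are standard oc logs options, not specific to any one capability.

```shell
# Stream the logs of a pod and follow new output
oc logs -f <pod name>

# View the logs of a specific container in a multi-container pod
oc logs <pod name> -c <container name>

# View the logs of the previous container instance after a crash
oc logs --previous <pod name>

# Show only the last 100 lines, with timestamps
oc logs --tail=100 --timestamps <pod name>
```

The --previous flag is particularly useful when a container is crash-looping, because the current container's log starts fresh after each restart.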

However, if a container crashes, a pod is evicted, or a node dies, you probably still want to access the application logs. Therefore, logs need a separate storage location and a lifecycle that is independent of nodes, pods, or containers. OCP provides a logging solution that is based on the EFK stack: Elasticsearch, Fluentd, and Kibana. Fluentd collects all the node and container logs and stores them in dedicated project indexes. Kibana is the centralized user interface where you can create visualizations and dashboards with the aggregated data.

Most components can be configured to write logs to a persistent volume (PV). Logs can be stored under a /logs/application directory on the PV.
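Assuming a component, such as cpe, is configured with a log PV, you can verify that logs are being persisted by listing the directory inside the pod. The pod name and log file name are placeholders; the /logs/application path is the default described above.

```shell
# List persisted application logs on the mounted log volume
oc exec -it <cpe pod name> -- ls -ltr /logs/application

# Copy a persisted log file to the local workstation for analysis
oc cp <cpe pod name>:/logs/application/<log file name> ./<log file name>
```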

The following table shows which containers produce and persist logs.

Table 1. Logging information for capability components
Component Does it produce logs Can it persist logs
FileNet Content Manager
cpe Yes Yes (dedicated log PVC cpe-logstore-pvc).

For more information, see Must gather.

cmis Yes Yes (dedicated log PVC cmis-logstore-pvc).

For more information, see Must gather.

graphql Yes Yes (dedicated log PVC graphql-logstore-pvc).

For more information, see Must gather.

Task Manager (tm) Yes Yes (dedicated log PVC tm-logstore-pvc).

For more information, see Must gather.

ier Yes Yes (dedicated log PVC ier-pvc).

For more information, see Must gather: Collecting data to diagnose issues with Enterprise Records.

css Yes Yes (dedicated log PVC css-logstore-pvc).

For more information, see Must gather.

iccsap Yes Yes (dedicated log PVC iccsap-logstore-pvc).

For more information, see Must gather.

Navigator
icn Yes For more information, see Must gather.
Automation Document Processing
cds Yes Yes (dedicated log PVC cds-logstore-pvc).
cpds Yes Yes (dedicated log PVC cpds-logstore-pvc).
cdra Yes Yes (dedicated log PVC cdra-logstore-pvc).
viewone Yes Yes (dedicated log PVC viewone-logstore-pvc).
gitgateway Yes Not persisted.

For more information, see Must gather.

Document Processing engine Yes Yes (By default, logs go to stdout. A dedicated log PVC is used if a PVC name is configured in the CR parameter ca_configuration.global.logs.claimname. For more information, see Document Processing engine parameters).
Business Automation Studio
bastudio Yes Yes (dedicated log PVC bastudio-logstore-pvc).

For more information, see Must gather.

jms Yes
job - db init Yes

For more information, see Must gather.

job - oidc registration No Not persisted.
job - ipta creation No Not persisted.
Business Automation Application
application engine (ae) Yes For more information, see Must gather.
job - ae db Yes
resource registry (rr) Yes For more information, see Must gather.
rr setup pod Yes
job - ae oidc registration No Not persisted.
Automation Decision Services
credentials service Yes Yes (dedicated log PVC). The PVC logs are stored in the "ADS/<pod name>" folder.
git service Yes
parsing service Yes
rest api Yes
run service Yes
runtime service Yes
embedded mongo No Not persisted.
job - resource registry registration No Not persisted.
job - bai registration No Not persisted.
ads-designer-zen-translation-job No Not persisted.
ads-runtime-zen-translation-job No Not persisted.
Business Automation Workflow
Workflow Authoring/Runtime Yes Yes (dedicated log PVC baw-logstore-pvc).

For more information, see Must gather.

Process Federation Server Yes Yes (dedicated log PVC pfs-logs-pvc).
JMS Yes For more information, see Customizing Liberty server trace setting.
Elastic Search (embedded) No Not persisted.
Machine Learning - workforce insights Yes Yes (dedicated log PVC baml-wfi-logstore-pvc).
Machine Learning - intelligent task priority Yes Yes (dedicated log PVC baml-itp-logstore-pvc).
job - baw db init Yes Not persisted.
job - pfs db init Yes Not persisted.
job - content init Yes Not persisted.
job - case init Yes Not persisted.
job - workplace init Yes Not persisted.
job - bas auto import Yes Not persisted.
job - oidc job Yes Not persisted.
job - oidc job for webpd Yes Not persisted.
job - ltpa init Yes Not persisted.
Business Automation Insights
BPC Yes Not persisted.

For more information, see Must gather.

Flink job/task managers Yes Not persisted.

For more information, see Must gather.

Other components Yes Not persisted.

For more information, see Must gather.

Operational Decision Manager
decisionserver-runtime Yes

For more information, see Configuring logging.

decisionserver-console
decisioncenter
decisionrunner
Workflow Process Service Authoring
Workflow Process Service Authoring Yes Yes (dedicated log PVC bastudio-logstore-pvc).

For more information, see Must gather.

jms Yes Yes
job - db init Yes Yes

For more information, see Must gather.

job - ltpa creation No Not persisted.
Workflow Process Service Runtime
Workflow Process Service Runtime Yes Optional. For more information, see Must gather.

Content Manager and Document Processing can also use the following logging configuration parameters to further customize how the logs are stored.

  # # Logging setting
  # logging_configuration:
  #   mon_log_parse: false
  #   mon_log_service_endpoint: localhost:5044
  #   private_logging_enabled: false
  #   logging_type: default
  #   mon_log_path: /path_to_extra_log

For more information, see Logging parameters.

You can customize the logs for all of the other capabilities that are written to stdout by referring to the configuration parameters for the capability. For more information, see Certified Kubernetes configuration parameters.

Automation Decision Services, for example, can use the following parameters to change which logs are stored on the shared PV.
existing_pvc_for_logstore
If a value is specified, the logs are sent to the PV that is defined by the specified PVC name. If no value is specified, the default shared PVC is used.
decision_designer.mount_pvc_for_logstore
If set to false, the logs of the designer (including the embedded runtime) are not stored in the PVC and are sent only to stdout.
decision_runtime.mount_pvc_for_logstore
If set to false, the logs of the runtime are not stored in the PVC and are sent only to stdout.
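For illustration, these parameters might appear in the custom resource as in the following sketch. The PVC name and the exact nesting under the Automation Decision Services section are assumptions that can differ by release, so check the parameter reference for your version before using them.

```yaml
  ads_configuration:
    logs:
      existing_pvc_for_logstore: ads-logstore-pvc   # hypothetical PVC name
    decision_designer:
      mount_pvc_for_logstore: true                  # designer logs go to the PVC
    decision_runtime:
      mount_pvc_for_logstore: false                 # runtime logs go to stdout only
```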

Java dumps

Remember: Java dumps cannot be generated for Node.js based applications.

For more information about collecting data in Cloud Pak for Business Automation, see Must gather.

  1. Get a shell to a running pod where you want to generate a dump:
    oc exec -it <running pod> -- bash

    Dumps are created in the "/config/dumps" directory or the /opt/ibm/wlp/output/defaultServer/ directory.

  2. Change to the /opt/ibm/wlp/bin directory:
    cd /opt/ibm/wlp/bin

    For components that have no PVC to store dumps, copy the dumps locally by using the following command:

    oc cp <pod name>:/config/dumps/<javacore or heapdump file name> <javacore or heapdump file name>
  3. Determine whether the dump storage is attached to the pod by running the following command:
    df | grep /opt/ibm/wlp/output/defaultServer/dump
    Note: If a BAW and PFS Liberty server JVM crashes, then you also see dumps generated.

    Get the dumps by creating a tar archive of the files in the dump store persistent volume (baw/pfs-dumpstore-pvc). You can find the path by reviewing the output of oc get pvc. All server pods send dumps to the same persistent volume. A subdirectory for each pod is created by using the pod name.

  4. Determine the log volume where logs are being persisted for the current pod:
    ls /opt/ibm/wlp/usr/servers/defaultServer/logs/<pod name>
  5. Verify that the directory is writable by creating a test file:
    touch /opt/ibm/wlp/usr/servers/defaultServer/logs/<pod name>/<test file name>
  6. Run the server dump command and specify the log path for the server dump zip file:
    server dump defaultServer --archive=/opt/ibm/wlp/usr/servers/defaultServer/logs/<pod name>/package_file_name.dump.zip --include=heap
    Expect the following output:
    Dumping server defaultServer.
    Server defaultServer dump complete in /opt/ibm/wlp/usr/servers/defaultServer/logs/<pod_name>/package_file_name.dump.zip.
  7. For Flink job/task managers, you can retrieve the PID of the Java process by running the following command:
    oc exec -it <CR_NAME>-bai-event-xxx-eve-yyy-ep-job/taskmanager-xxx -- bash -c "ps -ef | grep java"
    Create the thread dump:
    oc exec -it <CR_NAME>-bai-event-xxx-eve-yyy-ep-job/taskmanager-xxx -- bash -c "kill -3 <PID>"
    Retrieve the file name of the dump:
    oc exec -it <CR_NAME>-bai-event-xxx-eve-yyy-ep-job/taskmanager-xxx -- bash -c "ls -ltr"
    Copy the file locally:
    oc cp <pod name>:<file name> <file name>
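The server dump procedure in steps 4 to 6 can be sketched as a single sequence that is run from outside the pod. The pod name, test file name, and package file name are placeholders, and the paths assume the default Liberty layout shown above.

```shell
POD=<pod name>
LOGDIR=/opt/ibm/wlp/usr/servers/defaultServer/logs/$POD

# Verify that the log directory exists and is writable
oc exec -it $POD -- touch $LOGDIR/<test file name>

# Generate a server dump that includes a heap dump
oc exec -it $POD -- /opt/ibm/wlp/bin/server dump defaultServer \
  --archive=$LOGDIR/package_file_name.dump.zip --include=heap

# Copy the dump archive to the local workstation
oc cp $POD:$LOGDIR/package_file_name.dump.zip package_file_name.dump.zip
```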