Enabling and using logging
IBM Cloud Pak® for Integration supports both OpenShift cluster logging and user-defined logging solutions. For information on system requirements, see Cluster logging requirements.
You can access the logs for running pods, which are available in the OpenShift web console, without installing a logging solution. However, these logs are not saved when a pod stops running or is deleted. To make logging data persistent, you need to install a logging solution for your OpenShift Container Platform cluster.
- Installing OpenShift cluster logging
- Configuring a custom logging solution
- Auditing user activity
Installing OpenShift cluster logging
For information about system requirements, see Cluster logging requirements.
To use Red Hat OpenShift cluster logging as your logging solution, begin by following the procedure in Installing logging in the Red Hat OpenShift documentation.
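For orientation, the following snippet shows one way to install the Red Hat OpenShift Logging Operator before you create a ClusterLogging instance. This is a minimal sketch only; the channel name and catalog source are assumptions, so confirm them against the Installing logging procedure for your OpenShift version.

```yaml
# Namespace, OperatorGroup, and Subscription for the Red Hat OpenShift Logging Operator.
# Channel and catalog source are assumptions; verify them for your OpenShift version.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
    - openshift-logging
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Apply the file with oc apply -f, then create the ClusterLogging custom resource as shown in the configuration examples that follow.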
The next section, "Configuration examples for the ClusterLogging custom resource," highlights some common settings and use cases.
Configuration examples for the ClusterLogging custom resource
The following YAML examples show settings that you can use when deploying the ClusterLogging custom resource. For more information, see Configuring CPU and memory limits and Configuring the log collector in the Red Hat OpenShift documentation.
- Minimal install

  If your use case is a proof-of-concept where data loss or log loss is not a concern, and the OpenShift cluster has limited resources, you can run a single-node Elasticsearch cluster. Update the redundancyPolicy to ZeroRedundancy and the nodeCount to 1, as in the following snippet. If the cluster has no persistent storage and you still want to test the logging setup, you can set the storage to empty.

  ```yaml
  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    logStore:
      type: "elasticsearch"
      elasticsearch:
        nodeCount: 1
        storage: {}
        redundancyPolicy: ZeroRedundancy
  ```
- Specifying a storage class

  In the spec section, configure the storage value for your OpenShift cluster. The following example is configured for the ibmc-block-gold RWO storage class:

  ```yaml
  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    logStore:
      type: "elasticsearch"
      elasticsearch:
        nodeCount: 3
        redundancyPolicy: SingleRedundancy
        storage:
          size: 200G
          storageClassName: ibmc-block-gold
        resources:
          limits:
            memory: 16Gi
          requests:
            cpu: 200m
            memory: 16Gi
  ```
- Deploying components individually

  If you do not want to deploy all the components of the OpenShift cluster logging resource, you can install the ones that you want individually. This example deploys only the fluentd collector:

  ```yaml
  apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    name: instance
    namespace: openshift-logging
  spec:
    collection:
      logs:
        type: fluentd
        fluentd: {}
    managementState: Managed
  ```
Accessing OpenShift cluster logging in the Platform UI
1. Log in to the IBM Cloud Pak Platform UI.
2. In the list of instances, find the desired instance.
3. Click the overflow menu (three-dot icon), then click Logs.
4. By default, no index patterns are created, so Kibana does not show any logs from the instance. To get the logs, create an index pattern by entering app-* in the text field.
Enabling external access to OpenShift cluster logging
If you want to send logs from OpenShift cluster logging to a third-party application, these are the minimum tasks required for setup:
1. Install and configure OpenShift cluster logging.
2. Enable external access to OpenShift cluster logging.
3. Configure log forwarding.
After you have installed OpenShift cluster logging, add a route to expose OpenShift cluster logging data to applications that are outside the OpenShift cluster.
1. Extract the CA certificate:

   ```sh
   oc extract secret/elasticsearch --to=. --keys=admin-ca -n openshift-logging
   ```
2. Create a route file called es_route.yaml that contains this YAML:

   ```yaml
   apiVersion: route.openshift.io/v1
   kind: Route
   metadata:
     name: elasticsearch
     namespace: openshift-logging
   spec:
     host:
     to:
       kind: Service
       name: elasticsearch
     tls:
       termination: reencrypt
       destinationCACertificate: |
   ```
3. Add the CA certificate content to the YAML file and create the route by running the following commands:

   ```sh
   # Indent each certificate line so that it nests under destinationCACertificate in es_route.yaml
   cat ./admin-ca | sed -e "s/^/      /" >> es_route.yaml
   oc create -f es_route.yaml
   ```
4. Confirm that the route is working as expected:

   ```sh
   token=$(oc whoami -t)
   routeES=`oc get route elasticsearch -n openshift-logging -o jsonpath={.spec.host}`
   curl -sS -k -H "Authorization: Bearer ${token}" "https://${routeES}/_cat/health"
   ```
Next, complete the process by configuring log forwarding.
Configuring log forwarding
After you enable external access to OpenShift cluster logging, create an OpenShift cluster log forwarder so that you can forward logs to a third-party application. For instructions, see About forwarding logs to third-party systems in the Red Hat OpenShift documentation.
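As an illustration, a ClusterLogForwarder resource similar to the following forwards application logs to an external Elasticsearch endpoint. This is a minimal sketch; the output name and URL are placeholder assumptions, so adapt them from the Red Hat OpenShift log forwarding documentation for your target system.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    # Placeholder third-party Elasticsearch endpoint; replace with your own.
    - name: external-elasticsearch
      type: elasticsearch
      url: https://elasticsearch.example.com:9200
  pipelines:
    # Send application (workload) logs to the external output.
    - name: forward-app-logs
      inputRefs:
        - application
      outputRefs:
        - external-elasticsearch
```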
Configuring a custom logging solution
If you are using a custom logging solution instead of OpenShift cluster logging, add and configure the loggingUrl parameter of the PlatformNavigator custom resource. This configuration links the Platform UI to the logging stack. For a configuration example, see "Custom resource values" in Using the Platform UI.
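For orientation only, a PlatformNavigator custom resource with a loggingUrl value might look like the following sketch. The apiVersion, namespace, and exact placement of loggingUrl are assumptions here; treat the "Custom resource values" reference in Using the Platform UI as the authoritative example.

```yaml
apiVersion: integration.ibm.com/v1beta1   # assumed version; check your installed operator
kind: PlatformNavigator
metadata:
  name: integration-navigator
  namespace: integration                  # placeholder namespace
spec:
  license:
    accept: true
  # Placeholder URL that points the Platform UI at your custom logging stack
  loggingUrl: 'https://my-logging-ui.example.com'
```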
When the configuration is successful, you can access persistent logging for each instance provisioned in the Platform UI. In the Instances section of the Platform UI, click the overflow menu in the row for a given instance, then click Logs.
Auditing user activity
Audit logging is useful for monitoring user operations within your IBM Cloud Pak for Integration deployment, such as create, read, update, delete, login, and logout activity.
The logs for user actions in the Platform UI are in the services container of the Platform UI pod. These logs are assigned a unique prefix ([USER-LOG]) so that they can be captured and audited.
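For example, you could view these entries with the OpenShift CLI; the pod and namespace names below are placeholders for your own Platform UI deployment.

```sh
# Placeholder pod and namespace; adjust for your Platform UI deployment
oc logs <platform-ui-pod> -c services -n <platform-ui-namespace> | grep "\[USER-LOG\]"
```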
For more information about available audit logging for instances, see the following documentation: