Using cluster logging

IBM Cloud Pak for Integration supports both Red Hat OpenShift cluster logging and user-defined logging solutions.

For running pods, you can view the logs that are available in the OpenShift console. To persist logging data, you must install a logging solution for your cluster.

Installing Red Hat OpenShift cluster logging

To install OpenShift cluster logging, begin by following the procedure in Installing cluster logging in the Red Hat OpenShift documentation.
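Before creating the ClusterLogging custom resource, it is worth confirming that the operators finished installing. The following is a sketch, assuming the operators were installed into the openshift-logging and openshift-operators-redhat namespaces as described in the Red Hat procedure:

```shell
# List the installed operator versions (ClusterServiceVersions).
# Both the Cluster Logging and OpenShift Elasticsearch operators
# should report a PHASE of "Succeeded" before you continue.
oc get csv -n openshift-logging
oc get csv -n openshift-operators-redhat
```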

Configuring the custom resource for cluster logging

This section offers guidance on common settings for cluster logging in Cloud Pak for Integration. For detailed guidance, see Red Hat OpenShift cluster logging.

Minimal install: If this is for a proof-of-concept where data loss or log loss is not a concern, and the cluster has limited resources, you can run a single-node Elasticsearch cluster. To do this, set redundancyPolicy to ZeroRedundancy and nodeCount to 1, as in the snippet below.

If the cluster does not have persistent storage and you still want to test the logging setup, you can set the storage to empty, as in this snippet:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
    name: instance
    namespace: openshift-logging
spec:
    logStore:
        type: "elasticsearch"
        elasticsearch:
            nodeCount: 1
            storage: {}
            redundancyPolicy: ZeroRedundancy
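To try the minimal configuration, save the snippet to a file and apply it; the file name below is arbitrary:

```shell
# Create (or update) the minimal ClusterLogging instance.
oc apply -f minimal-clusterlogging.yaml

# Watch the namespace; a single Elasticsearch pod should
# eventually reach the Running state.
oc get pods -n openshift-logging -w
```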

Example custom resource: This is an example ClusterLogging custom resource snippet for deploying cluster logging using the ibmc-block-gold RWO storage class:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
    name: instance
    namespace: openshift-logging
spec:
    collection:
        logs:
            fluentd: {}
            type: fluentd
    curation:
        curator:
            schedule: "30 3 * * *"
        type: curator
    logStore:
        elasticsearch:
            nodeCount: 3
            redundancyPolicy: SingleRedundancy
            storage:
                size: 200G
                storageClassName: ibmc-block-gold
        retentionPolicy:
            application:
                maxAge: 7d
            infra:
                maxAge: 7d
            audit:
                maxAge: 7d
        type: elasticsearch
    managementState: Managed
    visualization:
        kibana:
            replicas: 1
        type: kibana
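With this configuration, each of the three Elasticsearch nodes requests its own 200G volume from the ibmc-block-gold storage class, so it is worth checking that the claims bind. A sketch:

```shell
# Each Elasticsearch node gets its own PersistentVolumeClaim;
# all three should show a STATUS of "Bound".
oc get pvc -n openshift-logging
```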

Deploying components individually: If you do not want to deploy all the components of the OpenShift cluster logging resource, you can install the ones you want individually. For example, this snippet allows you to deploy only the fluentd collector:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
    name: instance
    namespace: openshift-logging
spec:
    collection:
        logs:
            fluentd: {}
            type: fluentd
    managementState: Managed
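Because the fluentd collector runs as a daemonset with one pod per node, you can verify a collector-only deployment along these lines (a sketch, assuming the operator names the daemonset fluentd and labels its pods component=fluentd):

```shell
# The fluentd daemonset should schedule one collector pod per node.
oc get daemonset fluentd -n openshift-logging
oc get pods -n openshift-logging -l component=fluentd
```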

Verifying cluster logging

Verify that the Elasticsearch service is reachable at its cluster IP address with these commands (replace the placeholders with the applicable pod name and cluster IP address). Make sure you are logged in to the OCP cluster with a token:

token=$(oc whoami -t)
oc exec <THE_POD_NAME> -n openshift-logging -- curl -sS -k -H "Authorization: Bearer ${token}" https://<THE_IP_CLUSTER_ADDRESS>:9200/_cat/health

You should get output similar to this example:

1611854452 17:20:52 elasticsearch green 3 3 414 207 0 0 0 0 - 100.0%
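Rather than looking up the pod name and cluster IP address by hand, the lookup can be scripted. This is a sketch that assumes the Elasticsearch pods carry the component=elasticsearch label and the service is named elasticsearch, as created by the cluster logging operator:

```shell
token=$(oc whoami -t)

# Pick the first Elasticsearch pod and the service cluster IP.
pod=$(oc get pods -n openshift-logging -l component=elasticsearch \
      -o jsonpath='{.items[0].metadata.name}')
ip=$(oc get service elasticsearch -n openshift-logging \
     -o jsonpath='{.spec.clusterIP}')

# Run the same health check against the discovered endpoint.
oc exec "$pod" -n openshift-logging -- \
   curl -sS -k -H "Authorization: Bearer ${token}" "https://${ip}:9200/_cat/health"
```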

Accessing logging from the Platform Navigator

  1. Open the Platform Navigator.

  2. Navigate to the instance view that lists the instances for which you need to access logging.

  3. Click Logs.

  4. By default, no index patterns are created, so Kibana does not show any logs from the instance. To see the logs, create an index pattern of app-*.

Exposing cluster logging

  1. Extract the CA certificate using oc extract secret/elasticsearch --to=. --keys=admin-ca -n openshift-logging (this writes the certificate to a file named admin-ca in the current directory)

  2. Create a route file called es-route.yaml with this snippet:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
    name: elasticsearch
    namespace: openshift-logging
spec:
    host:
    to:
        kind: Service
        name: elasticsearch
    tls:
        termination: reencrypt
        destinationCACertificate: |

  3. Append the CA certificate content to the YAML file and create the route. The sed command indents each certificate line so that it nests under the destinationCACertificate block scalar:

cat ./admin-ca | sed -e "s/^/          /" >> es-route.yaml
oc create -f es-route.yaml

  4. Check that the route is working as expected:

token=$(oc whoami -t)
routeES=$(oc get route elasticsearch -n openshift-logging -o jsonpath='{.spec.host}')
curl -sS -k -H "Authorization: Bearer ${token}" "https://${routeES}/_cat/health"
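Once the route responds, the same pattern works for other read-only Elasticsearch APIs. For example, listing indices confirms that application (app-*) indices are being written:

```shell
token=$(oc whoami -t)
routeES=$(oc get route elasticsearch -n openshift-logging -o jsonpath='{.spec.host}')

# List indices; app-* entries indicate that application logs are arriving.
curl -sS -k -H "Authorization: Bearer ${token}" "https://${routeES}/_cat/indices?v"
```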

Installing the cluster log forwarder

For detailed guidance on how to set up a log forwarder, see the Red Hat OpenShift cluster logging documentation on forwarding logs to external systems.
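As a sketch, a ClusterLogForwarder custom resource for sending application logs to an external Elasticsearch instance might look like the snippet below. The output name and URL are placeholders; consult the Red Hat documentation for the full set of output types and fields:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
    name: instance
    namespace: openshift-logging
spec:
    outputs:
        # Placeholder external endpoint; replace with your own.
        - name: remote-elasticsearch
          type: elasticsearch
          url: https://elasticsearch.example.com:9200
    pipelines:
        # Forward only application logs to the external output.
        - name: forward-app-logs
          inputRefs:
              - application
          outputRefs:
              - remote-elasticsearch
```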

Using a custom logging solution

When using a custom logging solution, configure the loggingUrl parameter of the Platform Navigator. This allows the deployment interface to link to the logging stack in the UI. For more information, see Custom resource values.
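For example, assuming a Platform Navigator instance defined by a PlatformNavigator custom resource, the parameter might be set as follows. The apiVersion, instance name, namespace, and URL here are placeholders; see Custom resource values for the exact schema:

```yaml
apiVersion: integration.ibm.com/v1beta1
kind: PlatformNavigator
metadata:
    name: navigator
    namespace: integration
spec:
    # URL of your custom logging stack's UI; the deployment
    # interface links to this address.
    loggingUrl: https://kibana.example.com
```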

Once the configuration is successful, you can access persistent logging by clicking Logs for each operand provisioned in the platform. Operands (installed instances of operators) can be found in their respective instance overflow menus, which are accessed from the common header menu after you click Integration capabilities or Integration runtimes in the navigation menu.