For Linux platforms

Analyzing application logs on Red Hat OpenShift Container Platform with Loki, Vector, and the RHOCP Cluster Observability Operator

You can use Logging 6.0 with the Loki, Red Hat® OpenShift® Logging, and Cluster Observability Operators to manage log storage, collection, and visualization on Red Hat OpenShift Container Platform 4.20.

Pod processes running in Kubernetes frequently produce application logs. To manage the application log data effectively and to avoid the loss of log data that occurs when a pod stops, deploy log aggregation tools on the Kubernetes cluster. Log aggregation tools help you persist, search, and visualize the log data that is gathered from the pods across the cluster. For more information on the changes that are introduced in Logging 6.0, see Upgrading to Logging 6.0.

Setting up LokiStack and Cluster Log Forwarding

To set up LokiStack and the ClusterLogForwarder in the openshift-logging namespace, see Logging 6.0.
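
The following example is a minimal sketch of a LokiStack custom resource, shown here for orientation only; the size, schema effectiveDate, and storageClassName values are placeholders that you must adapt to your environment, while the logging-loki name and the logging-loki-s3 secret match the names that are used elsewhere in this topic:
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small
  storage:
    schemas:
    - version: v13
      effectiveDate: "2024-10-01"
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: <storage-class-name>
  tenants:
    mode: openshift-logging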

After the LokiStack and ClusterLogForwarder deployments are complete, the following pods run in the openshift-logging namespace.
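You can list them by running this command:
  oc get pods --namespace openshift-logging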
logging-loki-compactor-0                       1/1     Running   0          3h45m
logging-loki-distributor-74cb8f8854-5fr9t      1/1     Running   0          3h45m
logging-loki-distributor-74cb8f8854-p88cx      1/1     Running   0          3h45m
logging-loki-gateway-78888d6c56-428w9          2/2     Running   0          3h45m
logging-loki-gateway-78888d6c56-6wl5b          2/2     Running   0          3h45m
logging-loki-index-gateway-0                   1/1     Running   0          3h45m
logging-loki-index-gateway-1                   1/1     Running   0          3h45m
logging-loki-ingester-0                        1/1     Running   0          3h45m
logging-loki-ingester-1                        1/1     Running   0          3h44m
logging-loki-querier-57c8bd8c75-vcc4t          1/1     Running   0          3h45m
logging-loki-querier-57c8bd8c75-wwlrd          1/1     Running   0          3h45m
logging-loki-query-frontend-6bbb599859-rmsmk   1/1     Running   0          3h45m
logging-loki-query-frontend-6bbb599859-zhcfj   1/1     Running   0          3h45m
cluster-logging-operator-7b6bc9c48-2mjx2       1/1     Running   0          4h22m
collector-59dxs                                1/1     Running   0          3h16m
collector-7p6h7                                1/1     Running   0          3h16m
collector-jbwkn                                1/1     Running   0          3h16m
collector-ntzm4                                1/1     Running   0          3h16m
collector-vzlm4                                1/1     Running   0          3h16m
collector-z75z5                                1/1     Running   0          3h16m

For more information on configuring LokiStack storage, see Storing logs with LokiStack. For more information on configuring log forwarding, see Configuring log forwarding.
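
The following example is a minimal sketch of a ClusterLogForwarder resource that forwards application logs to the LokiStack instance; the resource name, output name, service account, and TLS settings are illustrative assumptions, and the service account must be granted the log collection and LokiStack write permissions that are described in Logging 6.0. The default-logstore pipeline name matches the pipeline that the parse-json filter references later in this topic:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      authentication:
        token:
          from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: default-logstore
    inputRefs:
    - application
    outputRefs:
    - default-lokistack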

Accessing object storage

You must have existing object storage to configure LokiStack. The Loki Operator supports AWS S3, Azure, GCS, MinIO, OpenShift Data Foundation, and Swift as options for LokiStack object storage.

Create a logging-loki-s3 secret inside the openshift-logging namespace that contains the fields that are needed for LokiStack to access your object storage. The following sample command creates a secret that allows access to an OpenShift Data Foundation managed S3 bucket that runs inside the cluster.
  oc create secret generic logging-loki-s3 \
    --from-literal=access_key_id=key \
    --from-literal=access_key_secret=secret \
    --from-literal=bucketnames=bucket-name \
    --from-literal=endpoint=endpoint \
    --namespace openshift-logging

Parsing JSON container logs

By default, if container logs are output in JSON format, they are nested as a string inside the message field of the Vector JSON document. To parse the nested JSON into structured fields, add the following YAML to the ClusterLogForwarder custom resource:
kind: ClusterLogForwarder
spec:
  ...
  filters:
  - name: parse-json
    type: parse
  ...
  pipelines:
  - name: default-logstore
    ...
    filterRefs:
    - parse-json

The parse filter moves the nested JSON container log into a separate structured field inside the Vector JSON document, where the individual fields from the JSON container log can be accessed as structured.<field_name>.
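
For example, a Liberty JSON log line that is similar to the following simplified sketch (real Liberty JSON records contain additional fields, such as timestamps and module names) is initially stored as a string in the message field:
{"loglevel":"INFO","message":"CWWKF0011I: The defaultServer server is ready to run a smarter planet."}

After the parse-json filter is applied, the same values are available as structured.loglevel and structured.message.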

Visualizing your logs by using the Cluster Observability Operator's Logging UI plug-in

Add the label logFormat: liberty to your WebSphereLibertyApplication custom resource. You can then use this label to filter the application pod logs in Observe > Logs.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: <your-liberty-app>
  labels:
    logFormat: liberty
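
If the application is already deployed, you can also add the label to the existing resource from the command line; the following is a sketch that assumes your application runs in the <your-namespace> namespace:
  oc label WebSphereLibertyApplication <your-liberty-app> logFormat=liberty --namespace <your-namespace>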

In the OpenShift Container Platform web console, go to Observe > Logs to view the application logs.

The following image shows the Logs page where you can view the application logs.

[Image: The Logs page in the OpenShift Container Platform web console, shown by the Logging UI plug-in]
Use the following LogQL query to filter for and format your application pod logs.
{ log_type="application" } | json | kubernetes_labels_logFormat="liberty" | line_format "[{{.structured_loglevel}}] {{.structured_message}}"

The following image shows the application logs filtered by the LogQL query.

[Image: Application logs queried by using LogQL in the Logs page]
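
You can narrow the results further by combining more selectors and filters. The following sketch assumes the kubernetes_namespace_name stream label and Liberty log level values such as WARNING and SEVERE, which might differ in your configuration:
{ log_type="application", kubernetes_namespace_name="<your-namespace>" } | json | kubernetes_labels_logFormat="liberty" | structured_loglevel=~"WARNING|SEVERE" | line_format "[{{.structured_loglevel}}] {{.structured_message}}"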

To see the other individual structured.* fields from your application pod log, expand a log entry.

The following image shows the expanded application log entries.

[Image: Expanded application log entries]

You can now ingest, forward, and view your application logs by using LokiStack, Vector, and the Cluster Observability Operator's Logging UI plug-in.