Analyzing application logs on Red Hat OpenShift Container Platform with Elasticsearch, Fluentd, and Kibana
You can deploy the open source Elasticsearch, Fluentd, and Kibana stack on a Kubernetes cluster to aggregate application logs on Red Hat® OpenShift® Container Platform and analyze these logs on the Kibana dashboard.
Pod processes running in Kubernetes frequently produce application logs. To effectively manage the application log data and ensure that no loss of log data occurs when a pod stops, deploy log aggregation tools on the Kubernetes cluster. Log aggregation tools help you persist, search, and visualize the log data that is gathered from the pods across the cluster.
The following information describes how to deploy the Elasticsearch, Fluentd, and Kibana (EFK) stack by using the Elasticsearch Operator and the Red Hat OpenShift Logging Operator. Use this preconfigured EFK stack to aggregate all container logs. After a successful installation, the EFK pods exist inside the openshift-logging namespace of the cluster. You can view the application log data on the Kibana dashboard.
Installing Red Hat OpenShift Container Platform logging
- Install the Red Hat OpenShift logging component.
Ensure that you set up storage for Elasticsearch through persistent volumes. When you deploy the .yaml file for the Red Hat OpenShift logging instance, the Elasticsearch pods that are created automatically search for persistent volumes to bind to. If no persistent volumes are available, the Elasticsearch pods are stuck in a pending state. In-memory storage is also possible when you remove the storage definition from the .yaml file of the Red Hat OpenShift logging instance, but in-memory storage is not suitable for production. A minimal example of such a .yaml file is sketched after these steps.
- Verify that the installation completes without any errors and that the Red Hat OpenShift logging, Elasticsearch, Fluentd, and Kibana pods are running in the openshift-logging namespace. The number of pods that are running for each of the EFK components varies depending on the configuration that is specified in the ClusterLogging custom resource (CR). The following example shows the pods that are running in the openshift-logging namespace.
oc get pods -n openshift-logging
NAME                                            READY   STATUS      RESTARTS   AGE
cluster-logging-operator-874597bcb-qlmlf        1/1     Running     0          150m
curator-1578684600-2lgqp                        0/1     Completed   0          4m46s
elasticsearch-cdm-4qrvthgd-1-5444897599-7rqx8   2/2     Running     0          9m6s
elasticsearch-cdm-4qrvthgd-2-865c6b6d85-69b4r   2/2     Running     0          8m3s
collector-rmdbn                                 1/1     Running     0          9m5s
collector-vtk48                                 1/1     Running     0          9m5s
kibana-756fcdb7f-rw8k8                          2/2     Running     0          9m6s
The Red Hat OpenShift logging instance also exposes a route for external access to the Kibana console, as shown in the following example.
oc get routes -n openshift-logging
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
kibana kibana-openshift-logging.apps.host.kabanero.com kibana <all> reencrypt/Redirect None
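The exact contents of the Red Hat OpenShift logging instance .yaml file depend on your cluster. The following sketch shows one way to define a ClusterLogging CR with persistent Elasticsearch storage so that the Elasticsearch pods can bind to persistent volumes. The storage class name, node count, memory request, and storage size are illustrative assumptions; adjust them to your environment.
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: <your-storage-class>  # assumption: use a storage class that exists in your cluster
        size: 200G
      resources:
        requests:
          memory: 8Gi
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
Removing the spec.logStore.elasticsearch.storage section results in the in-memory storage that was mentioned previously, which is not suitable for production.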
Parsing JSON container logs
In Red Hat OpenShift, the Red Hat OpenShift logging Fluentd collectors capture the application container logs and put each log in a message field of a Fluentd JSON document as a string. If you output the logs in JSON format, they are nested in the message field of the Fluentd JSON document. To use the JSON log data inside a Kibana dashboard, the individual fields inside the nested JSON logs must be parsed.
You can parse these nested JSON application container logs by deploying a Cluster Log Forwarder instance. The deployed Cluster Log Forwarder instance copies the nested JSON logs into a separate structured field inside the Fluentd JSON document. The individual fields from the JSON container log can be accessed in the structured.<field_name> format.
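For example, a Liberty container can write JSON logs to stdout when the standard Liberty logging environment variables are set on the container; the values shown here are illustrative defaults rather than part of this procedure.
WLP_LOGGING_CONSOLE_FORMAT=json
WLP_LOGGING_CONSOLE_SOURCE=message,trace,ffdc,accessLog
A hypothetical record such as {"loglevel":"INFO","module":"com.example.app","message":"Order created"} then arrives as a string inside the message field of the Fluentd JSON document, and after parsing its fields can be read as structured.loglevel, structured.module, and structured.message.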
Different products or applications can use the same JSON field names to represent different data types. To avoid conflicting JSON fields, the Cluster Log Forwarder instance requires JSON container logs from different products or applications to be separated into unique indexes. In the following instructions, the ClusterLogForwarder CR creates these unique indexes by using a label that is attached to the service of your application.
- Add the logFormat: liberty label to your WebSphereLibertyApplication CR. The Cluster Log Forwarder instance uses this label later to create a unique index for the application logs of the container.
apiVersion: liberty.websphere.ibm.com/v1
kind: WebSphereLibertyApplication
metadata:
  name: <your-liberty-app>
  labels:
    logFormat: liberty
....
- Restart your application deployment to include the updated label in the service and pod of your application.
- Create the following cluster-logging-forwarder.yaml file to configure a Cluster Log Forwarder instance that parses your JSON container logs.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  namespace: openshift-logging
  name: instance
spec:
  inputs:
    - name: liberty-logs
      application:
        namespaces:
          - liberty-app # modify this value to be your own app namespace
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat
      structuredTypeName: nologformat
  pipelines:
    - name: parse-liberty-json
      inputRefs:
        - liberty-logs
      outputRefs:
        - default
      parse: json
The .yaml file creates a parse-liberty-json pipeline for the ClusterLogForwarder kind. This pipeline takes as input, through the liberty-logs input reference, all the container logs from the liberty-app namespace. The pipeline outputs the container logs to the Red Hat OpenShift default Elasticsearch log store for the cluster. The parse: json definition enables the JSON log parsing.
The configured outputDefaults.elasticsearch.structuredTypeKey parameter builds a unique index for the container logs by adding the app- prefix to the logFormat label in the container. Previously, the logFormat: liberty label was added to the service of your WebSphereLibertyApplication CR. Therefore, the log files that are forwarded to the Elasticsearch default log store follow the app-liberty-* index pattern. If no logFormat label exists in your application container, the outputDefaults.elasticsearch.structuredTypeName parameter provides a fallback index name. Run the following command to create the Cluster Log Forwarder instance.
oc create -f cluster-logging-forwarder.yaml
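Optionally, you can confirm that the Cluster Log Forwarder instance exists and that the collector pods restart to apply the new configuration, for example:
oc get clusterlogforwarder instance -n openshift-logging
oc get pods -n openshift-logging | grep collector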
For more information about parsing JSON logs, see the Red Hat OpenShift guide on enabling JSON logging.
Viewing application logs in Kibana
- View the Kibana dashboard by using the Kibana route URL. Run the following command to get the Kibana route URL.
oc get routes -n openshift-logging
- Log in to the Kibana dashboard with your Kubernetes user ID and password. The browser redirects you to the Create index pattern page on the Kibana dashboard.
- For the index pattern field, enter the app-liberty-* value to select all the Elasticsearch indexes that are used for your application logs. The following image shows the Create index pattern page where you enter the index value.
- Click Discover to view the application logs that are generated by the deployed application and that include the app-liberty-* value in the file name. The following image displays the application logs for the app-liberty-* value.
- Expand an individual log file entry to see the structured.* formatted individual fields that are parsed and copied out of the nested JSON log entry. The following image displays these structured.* formatted individual fields.
You can import Kibana dashboards to visualize the Liberty log data, for example:
- Problem dashboard, which visualizes message, trace, and FFDC information for Liberty servers. Use this dashboard to look for errors, warnings, and other problems.
- Traffic dashboard, which visualizes access log information for Liberty servers. The Traffic dashboard depends on the default access log fields, so include the default set of access log fields in your access log format, as shown in the configuration sketch after these steps.
%h %H %A %B %m %p %q %{R}W %s %U
- Click .
- Select the dashboard file and click Yes, overwrite all.
- Click and view the log files.
The following image displays the logs on an imported Kibana dashboard.
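The access log format from the previous list is set in the Liberty server configuration. The following server.xml sketch shows one way to enable access logging with that field set on the HTTP endpoint; the endpoint id and ports are common Liberty defaults and might differ in your configuration.
<httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443">
    <accessLogging logFormat="%h %H %A %B %m %p %q %{R}W %s %U"/>
</httpEndpoint>
For the access log records to reach the collectors as JSON, also include accessLog in the console log sources, as described earlier for the WLP_LOGGING_CONSOLE_SOURCE environment variable.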
Configuring and uninstalling Red Hat OpenShift logging
To change the installed EFK stack, edit the ClusterLogging CR of the deployed Red Hat OpenShift logging instance.
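For example, you can open the CR for editing from the CLI; the resource name instance matches the earlier examples.
oc edit clusterlogging instance -n openshift-logging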
To uninstall the EFK stack, remove the Red Hat OpenShift logging instance from the Red Hat OpenShift Logging Operator Details page.
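If you work from the CLI instead of the Operator Details page, deleting the same CR also removes the EFK stack that it manages; the instance name is again assumed to match the earlier examples.
oc delete clusterlogging instance -n openshift-logging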