Overview

The management logging service deploys an ELK stack to collect and store all Docker-captured logs. Numerous options are available to customize the stack before you install your product, including end-to-end TLS encryption. You can deploy and customize more ELK stacks from the catalog, or deploy other third-party solutions, which gives you maximum flexibility to manage your logs.

The management logging service offers a wide range of options to configure the stack to suit your needs.

ELK

ELK is an abbreviation for three products, Elasticsearch, Logstash, and Kibana, all developed by Elastic. Together they comprise a stack of tools that stream, store, search, and monitor data, including logs. A fourth Elastic component, Filebeat, is deployed to stream the logs to Elasticsearch.

Docker integration

Every node in the cluster must configure Docker to use the JSON file driver. Docker streams the stdout and stderr pipes from each container into a file on the Docker host. For example, if a container has Docker ID abcd, the default location on some platforms for the container's output is /var/lib/docker/containers/abcd/abcd-json.log. Your product logging chart deploys a Filebeat daemon set to every node to stream the JSON log files into the ELK stack.
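
For illustration, the following Python sketch reads such a JSON-file log and prints each record's timestamp, stream, and message. Docker writes one JSON object per line with log, stream, and time fields; the file path shown is the hypothetical abcd container from the example.

import json

# Hypothetical path for the example container with Docker ID abcd.
LOG_PATH = "/var/lib/docker/containers/abcd/abcd-json.log"

def read_docker_log(path):
    """Yield (time, stream, message) tuples from a Docker json-file log."""
    with open(path, "r", encoding="utf-8") as handle:
        for line in handle:
            record = json.loads(line)  # one JSON object per line
            yield record["time"], record["stream"], record["log"]

for time, stream, message in read_docker_log(LOG_PATH):
    print(f"{time} [{stream}] {message.rstrip()}")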

Kubernetes adds its own layer of abstraction on top of each container log. Under the default path, /var/log/containers, it creates a symlink that points back to each Docker log file. The symlink file name contains extra Kubernetes metadata that can be parsed to extract four fields, as shown in the sketch after this list:

                   |   1    |   2   |    3    |                               4                                |
/var/log/containers/pod-abcd_default_container-5bc7148c976a27cd9ccf17693ca8bf760f7c454b863767a7e47589f7d546dc72.log
  1. The name of the pod to which the container belongs (stored as kubernetes.pod)
  2. The namespace into which the pod was deployed (stored as kubernetes.namespace)
  3. The name of the container (stored as kubernetes.container_name)
  4. The container's Docker ID (stored as kubernetes.container_id)
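
As an illustration only, the following Python sketch extracts those four fields from a /var/log/containers file name with a regular expression that mirrors the <pod>_<namespace>_<container name>-<container ID>.log layout shown above.

import re

# <pod>_<namespace>_<container name>-<64-character Docker ID>.log
PATTERN = re.compile(
    r"^(?P<pod>.+)_(?P<namespace>[^_]+)_(?P<container_name>.+)-"
    r"(?P<container_id>[0-9a-f]{64})\.log$"
)

def parse_symlink_name(name):
    """Extract the Kubernetes metadata encoded in a container log symlink name."""
    match = PATTERN.match(name)
    if match is None:
        raise ValueError("unexpected log file name: " + name)
    return {
        "kubernetes.pod": match.group("pod"),
        "kubernetes.namespace": match.group("namespace"),
        "kubernetes.container_name": match.group("container_name"),
        "kubernetes.container_id": match.group("container_id"),
    }

print(parse_symlink_name(
    "pod-abcd_default_container-"
    "5bc7148c976a27cd9ccf17693ca8bf760f7c454b863767a7e47589f7d546dc72.log"
))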

Processing logs

Filebeat

Filebeat is a lightweight shipper of log data. A Filebeat daemon set is deployed as part of your product logging and runs on every node. Filebeat monitors log files, collects log events, and forwards them to Logstash.

Logstash

Logstash performs two roles. First, it buffers the data between Filebeat and Elasticsearch. This buffering protects against data loss and reduces the volume of traffic to Elasticsearch. Its second role is to further parse the log record to extract metadata and make the data in the record more searchable. By default, the ibm-icplogging Logstash pod takes the following steps:

  1. Parse the log record's datestamp (stored by Docker at the time the line was emitted by the container).
  2. Extract the container's name, namespace, pod ID, and container ID into individual fields.

Note: JSON parsing is not supported.

The record is then stored briefly before Logstash sends it to Elasticsearch.
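
The following Python sketch approximates those two default steps for a single record. It is an illustration of the transformation only, not the chart's actual Logstash configuration, and it reuses the hypothetical parse_symlink_name helper from the earlier sketch.

from datetime import datetime

def process_record(docker_record, log_file_name):
    """Approximate the two default Logstash steps for one Docker log record."""
    # Step 1: parse the datestamp that Docker stored when the line was emitted.
    # Fractional seconds are dropped here to keep the sketch simple.
    timestamp = datetime.strptime(
        docker_record["time"].split(".")[0].rstrip("Z"), "%Y-%m-%dT%H:%M:%S"
    )
    document = {
        "@timestamp": timestamp.isoformat() + "Z",
        "message": docker_record["log"].rstrip(),
        "stream": docker_record["stream"],
    }
    # Step 2: extract pod name, namespace, container name, and container ID.
    document.update(parse_symlink_name(log_file_name))
    return document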

Elasticsearch

When a log record is sent to Elasticsearch, it becomes a document. Each document is stored within a named group that is called an index. When Logstash sends a record to Elasticsearch, it assigns the record to an index with the pattern logstash-<YYYY>-<MM>-<dd>. Assigning each record to an index named after the day on which it was submitted makes it easier to apply log retention policies.
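
As a rough sketch of why daily indices simplify retention, the following Python example derives the index name for a record's timestamp and selects the indices that fall outside a retention window. The seven-day window is an example value, not a chart default.

from datetime import date, datetime, timedelta

def index_for(timestamp):
    """Return the daily index name, following the logstash-<YYYY>-<MM>-<dd> pattern."""
    return timestamp.strftime("logstash-%Y-%m-%d")

def expired_indices(index_names, retention_days=7, today=None):
    """Return the indices whose day is older than the retention window."""
    cutoff = (today or date.today()) - timedelta(days=retention_days)
    return [
        name for name in index_names
        if datetime.strptime(name, "logstash-%Y-%m-%d").date() < cutoff
    ]

print(index_for(datetime(2019, 7, 7, 12, 11, 35)))  # logstash-2019-07-07
print(expired_indices(["logstash-2019-06-01", "logstash-2019-07-07"],
                      today=date(2019, 7, 8)))       # ['logstash-2019-06-01']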

Elasticsearch itself runs independently across three different pod types. Many other configurations are possible; this is the configuration that is chosen in the ibm-icplogging Helm chart.

Kibana

Kibana provides a browser-friendly query and visualization interface to Elasticsearch. It can optionally be excluded from deployment, although this is not recommended because Kibana is the default tool for searching logs.

Post-deployment notes

Viewing and querying logs

Kibana is the primary tool for interfacing with logs. It offers a Discover view, through which you can query for logs that meet specific criteria. You can collate logs through this view by using one or more of the fields that are automatically added by the ibm-icplogging ELK stack.
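
For example, assuming an application in the default namespace whose container is named storefront, a search such as kubernetes.namespace:default AND kubernetes.container_name:storefront in the Discover search bar narrows the view to that container's logs; both values here are illustrative.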

You might need to query logs based on other criteria that are not discoverable by the ELK stack, such as middleware product, application name, or log level. To get the most accuracy from application logs, consider JSON-formatted output. JSON declares the names of the values in the log file rather than relying on Elasticsearch to parse them accurately. The Filebeat daemon set that is deployed by the ibm-icplogging Helm chart is preconfigured to parse JSON-formatted log entries and set the values so they are searchable as high-level elements in Elasticsearch.
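
A minimal sketch of JSON-formatted output follows; it writes one JSON object per line to stdout, declaring fields such as app_name and level explicitly. The field names are illustrative, not a required schema.

import json
import sys
from datetime import datetime, timezone

def log_json(level, message, **fields):
    """Write one JSON log record per line to stdout for Filebeat to pick up."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "message": message,
    }
    record.update(fields)  # for example, app_name or a middleware product name
    sys.stdout.write(json.dumps(record) + "\n")
    sys.stdout.flush()

log_json("INFO", "order accepted", app_name="storefront", order_id=1234)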

Note: Exceptions in Logstash observed on IBM® Z platforms

Exceptions that resemble the following messages are visible in Logstash. The exceptions do not affect the log ingestion from Filebeat to the Kibana UI.

[2019-07-07T12:11:35,834][INFO ][org.logstash.beats.BeatsHandler] [local: 10.1.79.146:5044, remote: 10.1.79.173:48148] Handling exception: error:1e00007d:Cipher functions:OPENSSL_internal:INVALID_NONCE
[2019-07-07T12:11:35,834][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
javax.net.ssl.SSLException: error:1e00007d:Cipher functions:OPENSSL_internal:INVALID_NONCE
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:895) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:882) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.wrap(ReferenceCountedOpenSslEngine.java:824) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]

Elasticsearch APIs

Elasticsearch has a high degree of flexibility and a thoroughly documented API. A secure installation of the ELK stack restricts API access to internal components that use mutual authentication over TLS, as described in preceding sections. Therefore, external access to Elasticsearch data is only available to users who are authenticated through Kibana. You can also use the Dev Tools panel in the Kibana user interface to access the Elasticsearch API. If more ELK stacks are deployed in standard mode, Kibana access is not protected by your product authentication or authorization controls.

Note: These APIs work only to query or operate on data that is presently tracked in the Elasticsearch data store. They do not affect backups.
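
For an internal component that holds a client certificate, a query might look like the following Python sketch. The endpoint, certificate paths, and query are assumptions for illustration; adjust them to your deployment.

import requests

# Hypothetical in-cluster endpoint and certificate paths.
ES_URL = "https://elasticsearch:9200"
CLIENT_CERT = ("/certs/client.crt", "/certs/client.key")
CA_CERT = "/certs/ca.crt"

# Illustrative query: the latest records from one namespace across daily indices.
query = {
    "query": {"match": {"kubernetes.namespace": "default"}},
    "sort": [{"@timestamp": "desc"}],
    "size": 10,
}
response = requests.get(
    ES_URL + "/logstash-*/_search",
    json=query,
    cert=CLIENT_CERT,  # mutual TLS: present a client certificate
    verify=CA_CERT,    # verify the server against the cluster CA
    timeout=30,
)
response.raise_for_status()
for hit in response.json()["hits"]["hits"]:
    print(hit["_source"])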