Elasticsearch log data is not cleaned up

The Elasticsearch index data for the cluster logs (Logstash) and metrics (Heapster) is not removed from the management nodes.

Symptoms

The cluster log and metrics data that is stored in the /var/lib/icp/logging/elk-data directory on each management node that runs the ELK stack takes up excessive disk space.
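
To confirm the symptom, measure how much space the directory uses. The following Python sketch walks the directory tree and reports its total size; the path comes from this article, and you can run the script directly on a management node.

    # Minimal sketch: report how much space the Elasticsearch data
    # directory uses on a management node. The path below comes from
    # this article; adjust it if your installation differs.
    import os

    DATA_DIR = "/var/lib/icp/logging/elk-data"

    def directory_size_bytes(path):
        """Walk the directory tree and sum the size of every regular file."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                full = os.path.join(root, name)
                # Skip files that disappear while we walk (Elasticsearch
                # merges and deletes segment files continuously).
                try:
                    total += os.path.getsize(full)
                except OSError:
                    pass
        return total

    if __name__ == "__main__":
        size_gib = directory_size_bytes(DATA_DIR) / (1024 ** 3)
        print("{} uses {:.1f} GiB".format(DATA_DIR, size_gib))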

Causes

By default, logs and metrics are retained for 1 day. A cron job runs every day at 23:30 to remove data that is older than this retention period. If your management nodes are suspended at that time, the cleanup does not run and the data accumulates.
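
A cleanup of this kind typically deletes time-based Elasticsearch indices that are older than the retention period. The following Python sketch illustrates that idea; it assumes that Elasticsearch answers on http://localhost:9200 without authentication and that the log indices use the common logstash-YYYY.MM.DD naming pattern, neither of which is confirmed by this article.

    # Minimal sketch of what a daily cleanup could do: delete time-based
    # indices older than the retention period. Assumptions (not taken
    # from this article): Elasticsearch answers on http://localhost:9200
    # without authentication, and log indices follow the common
    # logstash-YYYY.MM.dd naming pattern.
    import json
    import urllib.request
    from datetime import datetime, timedelta

    ES_URL = "http://localhost:9200"   # assumed endpoint
    RETENTION_DAYS = 1                 # matches the default described above

    def old_logstash_indices():
        """Yield logstash-* indices whose date suffix is past retention."""
        with urllib.request.urlopen(ES_URL + "/_cat/indices?format=json") as resp:
            indices = json.load(resp)
        cutoff = datetime.utcnow() - timedelta(days=RETENTION_DAYS)
        for entry in indices:
            name = entry["index"]
            if not name.startswith("logstash-"):
                continue
            try:
                day = datetime.strptime(name[len("logstash-"):], "%Y.%m.%d")
            except ValueError:
                continue  # not a date-suffixed index
            if day < cutoff:
                yield name

    def delete_index(name):
        """Issue DELETE /<index> against the Elasticsearch REST API."""
        req = urllib.request.Request(ES_URL + "/" + name, method="DELETE")
        with urllib.request.urlopen(req) as resp:
            print("deleted", name, resp.status)

    if __name__ == "__main__":
        for index in old_logstash_indices():
            delete_index(index)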

If your management nodes use Red Hat Enterprise Linux, an incorrectly configured Docker storage driver can prevent the cron job from running automatically, which also causes the logs to accumulate.

Additionally, if your containers generate a large amount of log or metric data, the storage capacity of your management nodes might be too small, or the default log and metric retention periods might be too long.

Resolving the problem

If your management nodes use Red Hat Enterprise Linux, confirm that your Docker storage driver is configured correctly. Storage drivers must be configured before you install IBM® Cloud Private.
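
One quick way to see which storage driver the Docker daemon is using is to query it with docker info. The following Python sketch wraps that call; the list of acceptable drivers is only illustrative, so verify the supported values for your release in the IBM Cloud Private documentation.

    # Minimal sketch for checking which storage driver the Docker daemon
    # on a management node is using. "docker info --format" is standard
    # Docker CLI; which drivers are acceptable for your installation is
    # an assumption, so treat the list below as illustrative only.
    import subprocess

    # Drivers commonly recommended on Red Hat Enterprise Linux; confirm
    # the supported values against the IBM Cloud Private documentation.
    EXPECTED_DRIVERS = {"overlay2", "devicemapper"}

    def current_storage_driver():
        """Ask the local Docker daemon which storage driver it runs with."""
        out = subprocess.run(
            ["docker", "info", "--format", "{{.Driver}}"],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()

    if __name__ == "__main__":
        driver = current_storage_driver()
        if driver in EXPECTED_DRIVERS:
            print("storage driver looks correct:", driver)
        else:
            print("unexpected storage driver:", driver)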

If your containers generate a large amount of log or metric data, either increase the storage capacity of your management nodes or modify the default log and metric curator criteria by following the instructions in the Data retention section of the IBM Cloud Private logging page.
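
If you are deciding between the two options, a rough sizing estimate can help. The following sketch uses placeholder numbers that are not taken from this article; measure the actual daily ingest on your own cluster before you act on the result.

    # Minimal sketch, not from this article: estimate how much disk space
    # a given retention period needs, to help decide between adding
    # capacity and shortening retention. The ingest figure is a
    # placeholder that you would measure on your own cluster.
    DAILY_INGEST_GIB = 40    # hypothetical GiB of new log and metric data per day
    RETENTION_DAYS = 1       # default retention described above
    HEADROOM = 1.25          # leave roughly 25% free for index merges and growth

    required_gib = DAILY_INGEST_GIB * RETENTION_DAYS * HEADROOM
    print("elk-data needs roughly {:.0f} GiB of free space".format(required_gib))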