Elasticsearch configuration and Explorer reports
Use the information in these troubleshooting tips and FAQs to help with configuring and monitoring Elasticsearch, which is used for Explorer reports in IBM® Spectrum Symphony Advanced Edition on Linux®. Any troubleshooting tips and FAQs specific to Explorer reports are also in this category.
Troubleshooting Elastic Stack
When the elk-elasticsearch service starts in the cluster, it dynamically determines the minimum eligible nodes required (calculated as MN/2+1). If you plan to remove management hosts from your cluster (which in turn updates the minimum eligible nodes), stop the elk-elasticsearch service before you remove the management host. To allow for the loss of one management host, your cluster must have a minimum of three eligible primary nodes.
For more information, see the Zen Discovery Elasticsearch reference.
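The quorum rule can be illustrated with a short Python sketch. It is illustrative only; the function below is not part of IBM Spectrum Symphony, and MN is taken to mean the number of eligible primary (management) nodes.

# Illustrative only: quorum of eligible primary nodes, calculated as MN/2+1
# with integer division.
def minimum_eligible_nodes(management_hosts: int) -> int:
    return management_hosts // 2 + 1

# With three management hosts the quorum is 2, so the cluster tolerates the
# loss of one management host; with two hosts the quorum is also 2, so no
# host can be lost.
for mn in (2, 3, 5):
    print(mn, "eligible primary nodes -> quorum of", minimum_eligible_nodes(mn))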
The Elasticsearch service is in the Error state or remains in the TENTATIVE state.
- For information about important configurations, what to monitor, and how to diagnose and prevent problems, see the following Elastic documentation:
- Monitoring
- Cluster Health. Tip: A red cluster indicates that at least one primary shard and all of its replicas are missing. As a result, the data in that shard is not available, searches return partial results, and indexing into that shard returns errors.
- Monitoring Individual Nodes to troubleshoot each node. Identify the troublesome indices and determine why the shards are not available. Check the disks or review the logs for errors and warnings. If the issue stems from node failure or hard disk failure, take steps to bring the node online.
- cat API to view cluster statistics. The sketch following this list shows one way to script these checks.
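These health checks can also be scripted. The following minimal Python sketch reports the cluster status and lists unassigned shards through the cat API; it assumes, for illustration only, that an Elasticsearch client endpoint is reachable at http://localhost:9200 without authentication, so adjust the URL and security handling to match your deployment.

# Minimal sketch: report cluster health and list unassigned shards.
# Assumption: client endpoint at http://localhost:9200, no authentication.
import json
import urllib.request

ES_URL = "http://localhost:9200"

def get_json(path):
    # GET an Elasticsearch REST endpoint and parse the JSON response.
    with urllib.request.urlopen(ES_URL + path) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Overall cluster health: green, yellow, or red.
health = get_json("/_cluster/health")
print("cluster status:", health["status"],
      "unassigned shards:", health["unassigned_shards"])

# cat shards API: list each UNASSIGNED shard and the reason it is unassigned.
shards = get_json("/_cat/shards?format=json&h=index,shard,prirep,state,unassigned.reason")
for shard in shards:
    if shard["state"] == "UNASSIGNED":
        print("unassigned:", shard["index"], shard["shard"],
              shard["prirep"], shard.get("unassigned.reason"))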
- Based on the type of error you encounter, refer to the appropriate Elastic Stack log file.
Table 1. Elastic Stack log files
- Elastic Stack manager service log (standard out or error log): $EGO_TOP/integration/elk/log/manager-[out|err].log.*
- Elasticsearch service log (standard out or error log for the primary, client, or data service): $EGO_TOP/integration/elk/log/es-[out|err].log.[master|client|data].*
- Elasticsearch runtime log (runtime log for the primary, client, or data service): $EGO_TOP/integration/elk/log/elasticsearch/*.log.[master|client|data]_*
- Logstash (indexer) service log (standard out or error log): $EGO_TOP/integration/elk/log/indexer-[out|err].log.*
- Logstash (indexer) runtime log (runtime log): $EGO_TOP/integration/elk/log/logstash/logstash-plain.log.*
- Filebeat (shipper) service log (standard out or error log): $EGO_TOP/integration/elk/log/shipper-[out|err].log.*
- Filebeat (shipper) runtime log (runtime log): $EGO_TOP/integration/elk/log/filebeat/filebeat.log.*
- Resolve any of the following problems that might occur:
- Out of memory exception or Java heap size reached
- The default Elasticsearch installation uses a 10 GB heap for the Elasticsearch services and a 4 GB heap for the Logstash service, which is sized for the 24 GB of RAM in the IBM Spectrum Symphony system requirements. If your hosts have more than 24 GB of memory and you need to increase the heap, for example, for system performance reasons, you can increase the Elasticsearch and Logstash heap sizes in IBM Spectrum Symphony. For more information, see Tuning the heap sizes for Elasticsearch to accommodate heavy load.
- Disk full or watermark is reached
- The Elasticsearch service can remain in the TENTATIVE state when it reaches the limitations that are defined in the Elasticsearch watermark parameters.
- Red cluster or UNASSIGNED shards
- The Elasticsearch service can remain in the TENTATIVE state when at least one primary shard and all its replicas are missing.
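To narrow down which of these conditions applies, the following Python sketch queries per-node JVM heap usage, per-node disk usage, and the allocation explanation for an unassigned shard. As with the earlier sketch, the endpoint http://localhost:9200 and the absence of authentication are assumptions; adapt them to your cluster.

# Minimal diagnostic sketch for the problems listed above: heap pressure,
# disk watermark limits, and UNASSIGNED shards.
# Assumption: client endpoint at http://localhost:9200, no authentication.
import json
import urllib.error
import urllib.request

ES_URL = "http://localhost:9200"

def get_json(path):
    with urllib.request.urlopen(ES_URL + path) as resp:
        return json.loads(resp.read().decode("utf-8"))

# JVM heap usage per node; sustained high values suggest the heap needs tuning.
for stats in get_json("/_nodes/stats/jvm")["nodes"].values():
    print("heap:", stats["name"], stats["jvm"]["mem"]["heap_used_percent"], "% used")

# Disk usage per node; nodes near the watermark limits stop accepting new shards.
for row in get_json("/_cat/allocation?format=json"):
    print("disk:", row.get("node"), row.get("disk.percent"), "% used")

# Explain why one unassigned shard is unassigned; the API returns an error
# when the cluster has no unassigned shards.
try:
    explain = get_json("/_cluster/allocation/explain")
    print("unassigned shard:", explain["index"], "shard", explain["shard"],
          "reason:", explain.get("unassigned_info", {}).get("reason"))
except urllib.error.HTTPError:
    print("no unassigned shards to explain")

If the heap or disk checks point to resource pressure, follow the heap tuning and disk watermark guidance above.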
Troubleshooting Explorer reports
Follow this high-level troubleshooting section to isolate and resolve problems with Explorer reports: