Guardium Insights logging
For Guardium Insights, logging is deployed to a Red Hat® OpenShift® Container Platform Version 4.14.x environment. The cluster logging components are based on Elasticsearch, Fluentd, and Kibana (EFK).
OpenShift logging components
Fluentd, the collector, is deployed to each node in the OpenShift Container Platform cluster. It collects all node and container logs and writes them to Elasticsearch. Kibana is the centralized web user interface where users and administrators can create rich visualizations and dashboards with the aggregated data.
The following list describes the five types of cluster logging components:
- The logStore is where the logs are stored. The current implementation is based on Elasticsearch.
- The collection component collects logs from the node, formats them, and stores them in the logStore. The current implementation is based on Fluentd.
- Visualization is the user interface component that is used to view logs, graphs, charts, and other data. The current implementation is based on Kibana.
- Curation is the component that filters logs by age. The current implementation is based on Elasticsearch Curator.
- Event routing is the component that forwards OpenShift Container Platform events to cluster logging. The current implementation is based on OpenShift Event Router.
For more information, see OpenShift logging.
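If logging is already installed in your cluster, you can inspect how these components are configured by viewing the ClusterLogging custom resource. This example assumes that the instance is named instance in the openshift-logging namespace, which matches the deployment that is described later in this topic:
oc get clusterlogging instance -n openshift-logging -o yaml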
Pod distribution
For Guardium Insights, logging is deployed to an OpenShift Container Platform Version 4.14.x environment with these resources:
- 3 master nodes, each with 32 GB RAM and an 8-core processor
- 3 worker nodes, each with 32 GB RAM and an 8-core processor
The CPU and memory resources per component are outlined in the following table.
| Pod | Node | CPU Limit | Memory Limit | CPU Request | Memory Request | Replicas |
|---|---|---|---|---|---|---|
| cluster-logging-operator | Worker | N/A | N/A | 100m | 1Gi | 1 |
| curator | Master | 200m | 200Mi | 200m | 200Mi | 1 |
| elasticsearch-cdm | Master | 4000m | 8Gi | 4000m | 8Gi | 2 |
| fluentd | Master and worker | 200m | 1Gi | 200m | 1Gi | All nodes |
| kibana | Master | N/A | 1Gi | 500m | 1Gi | 1 |
To list these pods, run the following command:
oc get pods -n openshift-logging
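If you also want to see which node each pod is scheduled on, you can add the -o wide flag, which adds NODE and IP columns to the output:
oc get pods -n openshift-logging -o wide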
Installing logging components in your cluster
To install the logging components in your cluster, follow the steps that are detailed in the OpenShift Container Platform logging documentation (https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/logging/cluster-logging-deploying#cluster-logging-deploy-clo_cluster-logging-deploying).
Complete the following configuration in the Guardium Insights logging deployment.
- When you follow the OpenShift Container Platform documentation (Section 3.2, step 4 (https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/logging/cluster-logging-deploying#cluster-logging-deploy-clo_cluster-logging-deploying)), create a cluster-logging instance by copying the following YAML file to deploy the ClusterLogging instance. By using this modified custom resource definition (CRD), the logging components are deployed on master nodes and they use the specified tuned resources.
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 2
      resources:
        limits:
          memory: 8Gi
        requests:
          cpu: 4000m
          memory: 8Gi
      storage:
        size: "30G"
      redundancyPolicy: "SingleRedundancy"
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
  visualization:
    type: "kibana"
    kibana:
      resources:
        limits:
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
      replicas: 1
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
  curation:
    type: "curator"
    curator:
      resources:
        limits:
          memory: 200Mi
        requests:
          cpu: 200m
          memory: 200Mi
      schedule: "15 * * * *"
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
  collection:
    logs:
      type: "fluentd"
      fluentd:
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 200m
            memory: 1Gi
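As an alternative to the web console step that follows, you can save this YAML to a file and create the instance from the CLI. The file name is an example only:
oc create -f cluster-logging.yaml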
- To create the Cluster Logging Custom Resource and Elasticsearch Custom Resource, click Create.
- To verify the installation, run the following command:
oc get pods -n openshift-logging
The expected result shows all logging pods in the Running state. When the result is validated, edit the Curator configuration to delete logs after 7 days instead of the default 30 days.
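As an alternative to inspecting the pod list manually, you can wait for all pods in the namespace to report ready. The 300-second timeout is an example value:
oc wait --for=condition=Ready pod --all -n openshift-logging --timeout=300s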
- To switch to the openshift-logging namespace, run the following command:
oc project openshift-logging
- To edit the curator configuration, run the following command:
oc edit configmap/curator
Locate this part of the file:
# uncomment and use this to override the defaults from env vars
#.defaults:
#  delete:
#    days: 30
To delete logs after 7 days, uncomment the section and edit the days attribute, as shown in the following example:
# uncomment and use this to override the defaults from env vars
.defaults:
  delete:
    days: 7
- Save configmap/curator.
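The schedule value 15 * * * * in the ClusterLogging instance is standard cron syntax for 15 minutes past every hour. In a default deployment, the Curator runs as a CronJob, so you can confirm its schedule with the following command:
oc get cronjob -n openshift-logging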
Accessing Kibana in your cluster
You can obtain the URL for Kibana from the command-line interface (CLI).
- To find the route to Kibana, run the following command:
oc get routes -n openshift-logging
The command returns a URL, as follows:
http://kibana-openshift-logging.hostname.com:port
- To reach Kibana, copy the URL to a browser and log in with your OpenShift user interface credentials.
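If you want only the host name of the route, you can query it directly. This example assumes that the route is named kibana, as in a default deployment:
oc get route kibana -n openshift-logging -o jsonpath='{.spec.host}'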
Logging can also be started from the OpenShift Console, in the Monitoring section.