Installing cluster logging
Follow the steps in the Red Hat® OpenShift® Container Platform documentation to install cluster logging. Ensure that you set up valid storage for Elasticsearch by using persistent volumes.
To install cluster logging, follow the steps in Installing OpenShift Logging (Red Hat OpenShift Container Platform 4.16). If you are using a different version of Red Hat OpenShift, select the appropriate version on the Red Hat OpenShift documentation page. Make sure that you install the correct version of Red Hat OpenShift Logging. For more information about the correct version to use, see Software requirements.
Ensure that you set up valid storage for Elasticsearch by using persistent volumes. When the example cluster logging instance YAML is deployed, the Elasticsearch pods that are created automatically search for persistent volumes to bind to. If no persistent volumes are available for binding, the Elasticsearch pods are stuck in a pending state.
Using in-memory storage is also possible by removing the storage definition from the cluster logging instance YAML, but this arrangement is not suitable for production.
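As a sketch of the non-production, in-memory arrangement, the storage block under elasticsearch can be omitted or left empty; this fragment assumes the example CR later in this topic:

```yaml
# Sketch only: ephemeral storage for Elasticsearch (not for production).
# With no persistent storage definition, log data is lost when the pods restart.
    elasticsearch:
      nodeCount: 3
      storage: {}   # empty storage block = ephemeral (in-memory/emptyDir) storage
      redundancyPolicy: "SingleRedundancy"
```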
Installing on a self-managed cluster on AWS
- Before you create an operator group for the Red Hat OpenShift Logging Operator, determine whether this operator group already exists. If this group exists, delete it.
  - To check whether this group exists, run this command:
    oc get operatorgroup -n openshift-logging
    The groups are shown in the NAME column in the output.
  - If the openshift-logging group exists, delete it by running this command:
    oc delete operatorgroup openshift-logging -n openshift-logging
- When you create the subscription object YAML file to subscribe to the Red Hat OpenShift Logging Operator, set the value of the spec.channel attribute to stable.
- When you create the cluster logging instance YAML file, set the value of the spec.logStore.elasticsearch.storage.storageClassName attribute to gp3.
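The subscription settings described above can be sketched as follows; the channel value comes from this procedure, while the catalog source and namespace values are typical defaults that you should check against your cluster:

```yaml
# Sketch of a Subscription for the Red Hat OpenShift Logging Operator.
# spec.channel must be "stable"; source and sourceNamespace are assumed defaults.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "stable"
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```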
Example cluster logging instance YAML file for IBM Cloud Pak for Network Automation
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3
      resources:
        requests:
          memory: "2Gi"
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      storage:
        size: 200G
        storageClassName: "rook-cephfs"
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
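Assuming the CR above is saved as clusterlogging-instance.yaml (a hypothetical file name), you can deploy it and verify the storage binding with commands like these:

```shell
# Deploy the ClusterLogging custom resource (file name is an assumption).
oc apply -f clusterlogging-instance.yaml

# Check that the Elasticsearch persistent volume claims are bound.
# Elasticsearch pods stuck in Pending usually mean that no persistent
# volume matched the requested storage class or size.
oc get pvc -n openshift-logging
oc get pods -n openshift-logging
```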
Pods running after installation
After the installation is complete, the Elasticsearch, Fluentd, and Kibana pods run in the openshift-logging namespace. The exact number of pods that run for each of the EFK components can vary depending on the configuration that is specified in the ClusterLogging custom resource (CR).
NAME READY STATUS RESTARTS AGE
cluster-logging-operator-5848775d98-5gk62 1/1 Running 0 39m
collector-9tx47 2/2 Running 0 37m
collector-cqclk 2/2 Running 0 37m
collector-l9vj8 2/2 Running 0 37m
collector-mkpgz 2/2 Running 0 37m
collector-qws7m 2/2 Running 0 37m
collector-rh5zm 2/2 Running 0 37m
elasticsearch-cdm-02i0k8eg-1-b979f5cf8-f8nlc 2/2 Running 0 37m
elasticsearch-cdm-02i0k8eg-2-56ffbf9959-k2ptq 2/2 Running 0 37m
elasticsearch-cdm-02i0k8eg-3-5cf98b5d64-hgx5t 2/2 Running 0 37m
elasticsearch-im-app-28072155-86sz2 0/1 Completed 0 13m
elasticsearch-im-audit-28072155-bzpf5 0/1 Completed 0 13m
elasticsearch-im-infra-28072155-cg65f 0/1 Completed 0 13m
kibana-75f97c56df-r2989 2/2 Running 0 37m
Cluster logging also exposes a route for external access to the Kibana console. To view the route, run this command:
oc get routes -n openshift-logging
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
kibana <kibana_route> kibana <all> reencrypt/Redirect None