When Netcool® Operations Insight® is deployed in an IBM Cloud® Private environment, you can forward log data from Logstash to Netcool Operations Insight.
Before you begin
By default, the IBM Cloud Private installer deploys an Elasticsearch, Logstash, and Kibana (ELK) stack to collect system logs for the IBM Cloud Private managed services, including Kubernetes and Docker. For more information, see https://www.ibm.com/docs/en/SSBS6K_2.1.0.2/manage_metrics/logging_elk.html.
Note: Ensure that you meet the prerequisites for IBM Cloud Private, such as installing and configuring kubectl, the Kubernetes command-line tool.
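For example, you can confirm that kubectl is configured against your cluster and that the default logging components are running. The kube-system namespace and the grep filter below assume the default IBM Cloud Private logging deployment:
# Verify that kubectl is configured and can reach the cluster
kubectl cluster-info
# Confirm that the Logstash pods of the default logging stack are running;
# adjust the namespace if your logging deployment was customized
kubectl get pods --namespace kube-system | grep logstash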
About this task
The log data that Logstash collects and stores for your IBM Cloud Private environment can be configured to be forwarded to event management as events, which are then correlated into incidents.
Procedure
- Click .
- Click New integration.
- Go to the Logstash tile and click Configure.
- Enter a name for the integration and click Copy to add the generated webhook URL to the clipboard. Ensure that you save the generated webhook URL so that it is available later in the configuration process. For example, you can save it to a file, as shown in the sketch that follows.
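A minimal way to keep the URL on hand, reusing the <Cloud_Event_Management_webhook_URL> placeholder from the configuration example later in this topic:
# Save the generated webhook URL to a file for later use.
# Replace the placeholder with the URL that you copied from the integration.
echo '<Cloud_Event_Management_webhook_URL>' > cem-webhook-url.txt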
- Click Save.
- Modify the default Logstash configuration in IBM Cloud Private to add event management as a receiver. To do this, edit the Logstash pipeline ConfigMap to add the webhook URL in the output section as follows:
- Load the ConfigMap into a file using the following command:
kubectl get configmaps logstash-pipeline --namespace=kube-system -o yaml > logstash-pipeline.yaml
Note: The default Logstash deployment ConfigMap name in IBM Cloud Private is logstash-pipeline in the kube-system namespace. If your IBM Cloud Private logging uses a different Logstash deployment, modify the ConfigMap name and namespace as required for that deployment.
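If you are unsure which ConfigMap your deployment uses, one way to locate it (assuming only that its name contains "logstash"):
# List ConfigMaps in all namespaces and filter for Logstash-related names
kubectl get configmaps --all-namespaces | grep -i logstash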
- Edit the logstash-pipeline.yaml file and add an http output section to specify event management as a destination using the generated webhook URL. Paste the webhook URL into the url field:
output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => "elasticsearch:9200"
  }
  http {
    url => "<Cloud_Event_Management_webhook_URL>"
    format => "json"
    http_method => "post"
    pool_max_per_route => "5"
  }
}
Note: The pool_max_per_route value is set to 5 by default. It limits the number of concurrent connections to event management to avoid data overload from Logstash. You can modify this setting as required.
- Save the file, and replace the ConfigMap using the following command:
kubectl --namespace kube-system replace -f logstash-pipeline.yaml
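As an optional command-line alternative to the console check in the next step, you can dump the live ConfigMap and confirm that the http output section is present. The names below assume the default deployment:
# Confirm that the http output section with the webhook URL was applied
kubectl get configmap logstash-pipeline --namespace kube-system -o yaml | grep -A 4 'http {'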
- Check that the update is complete at https://<icp_master_ip_address>:8443/console/configuration/configmaps/kube-system/logstash-pipeline
Note: It can take up to a minute for the configuration changes to take effect.
- To start receiving log data from Logstash, ensure that Enable event management from this source is set to On.
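To verify the integration end to end, generate some log activity in the cluster and confirm that corresponding events appear in event management. As a quick reachability check of the webhook itself, the following sketch sends an arbitrary JSON document; event management might reject the payload, but any HTTP response confirms that the endpoint is reachable from your workstation:
# Reachability check only: POST a trivial JSON document to the webhook URL.
# Replace the placeholder with your generated webhook URL.
# -k skips TLS verification; remove it if your certificates are trusted.
curl -k -X POST -H 'Content-Type: application/json' \
  -d '{"message":"connectivity test"}' \
  '<Cloud_Event_Management_webhook_URL>'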