Configuring the Receiver cluster for single line logs
To implement a scalable data collection architecture, install and configure a cluster of Logstash servers to receive data from the LFA and write it to Apache Kafka.
Before you begin
Install Logstash on the remote servers and create the required utility script. For more information, see Installing Logstash and the utility script.
About this task
You must create at least one Logstash server to act as a receiver. In a production environment, use more than one instance of Logstash in a cluster.
Complete this task for each instance of Logstash in your cluster.
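The filter and output sections that are shown in the Example section belong to a single pipeline configuration file on each receiver. The following skeleton is an illustrative sketch only; the input plugin and port number are placeholders, and you must use the input that matches the transport that your LFA is configured to send to.

input {
  # Placeholder input. Replace with the transport that your LFA uses.
  tcp {
    port => 5530
    type => "lfa"
  }
}

filter {
  # Grok and mutate filters for LFA events. See the Example section.
}

output {
  # Apache Kafka output. See the Example section.
}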
Procedure
Example
The following example processes events where the type is lfa and matches these events to the patterns. The datasource and resourceID fields are also added based on the metadata in the event.

filter {
  if [type] == "lfa" {
    grok {
      patterns_dir => "home/la/logstash/patterns"
      match => [ "message", "%{LFAMESSAGE}" ]
      add_tag => [ "grok_lfa" ]
    }
  }
  if "grok_lfa" in [tags] {
    mutate {
      replace => [ "message", "%{LFA_ORIG_MSG}" ]
      add_field => [ "datasource", "%{LFA_SITE}_%{LFA_MODULE}_%{LFA_TYPE}" ]
      add_field => [ "resourceID", "%{LFA_HOSTNAME}_%{LFA_LOGNAME}_1" ]
    }
  }
}
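The %{LFAMESSAGE} pattern must be defined in a file in the directory that is named by patterns_dir. The exact pattern depends on the format of the events that your LFA forwards, so the following definition is an illustrative sketch only; it assumes a hypothetical layout in which the site, module, type, hostname, log name, and original message arrive as comma-separated fields.

# Hypothetical contents of a pattern file in home/la/logstash/patterns.
# Adjust the delimiters and field order to match the events that your LFA emits.
LFAMESSAGE %{DATA:LFA_SITE},%{DATA:LFA_MODULE},%{DATA:LFA_TYPE},%{DATA:LFA_HOSTNAME},%{DATA:LFA_LOGNAME},%{GREEDYDATA:LFA_ORIG_MSG}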
The output section writes data to the Apache Kafka cluster and maps the data source to the topic_id parameter. This configuration ensures that one topic is created for each logical data source. Because Apache Kafka writes records that share a message key to the same partition, it also ensures that data from each physical data source is written to the same partition within the topic. For example:
output {
  if ("grok_lfa" in [tags]) and !("_grokparsefailure" in [tags]) {
    kafka {
      bootstrap_servers => "kafkabroker1.example.com:17911,kafkabroker2.example.com:17911"
      topic_id => "%{datasource}"
      message_key => "%{resourceID}"
    }
  }
}