Sample output plugins for offloading data
Review sample output plugins to see how to code your own plugins for offloading data from API Connect analytics.
For complete details of output plugin syntax, see the Logstash output plugins documentation.
HTTP
To configure data offload to an HTTP server, provide the server URL and any headers that the endpoint requires. The second example also specifies a CA certificate so that the connection is secured with TLS.
http {
  url => "example.com"
  http_method => "post"
  format => "json"
  content_type => "application/json"
  headers => {
    "x-acme-index" => "8"
    "x-acme-key" => "0d5d259f-b8c5-4398-9e58-77b05be67037"
  }
  id => "offload_http"
}

http {
  url => "https://example.com:443"
  http_method => "post"
  format => "json"
  content_type => "application/json"
  headers => {
    "x-acme-index" => "8"
    "x-acme-key" => "0d5d259f-b8c5-4398-9e58-77b05be67037"
  }
  id => "offload_http"
  ssl_certificate_authorities => "/etc/velox/external_certs/offload/cacert"
}
For more information about offloading to HTTP, refer to the HTTP output plugin documentation.
Elasticsearch
To configure data offload to an Elasticsearch cluster, provide the following details:
- One or more Elasticsearch server URLs.
- A dynamically configured name to define how indices are created.
- The number of primary shards for storing the indexed documents.
- The number of replica shards for redundancy.
- Server authentication credentials (if required).
The naming syntax that you specify for your indices determines how many indices are created and how they are characterized. For example, you can define indices that are created based on a date pattern, or indices that are created to store data by provider organization or by API name.
elasticsearch {
  hosts => "https://example.com:443"
  index => "apiconnect"
  ssl => true
  cacert => "/etc/velox/external_certs/offload/cacert"
}
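As an illustration of the dynamic index naming described above, the index setting can include a date pattern or a field reference. The following sketch is not part of the supplied samples; the daily date pattern and the org_id field reference are assumptions chosen for illustration:
opensearch-style dynamic naming is not required here; this remains an elasticsearch output:
elasticsearch {
  hosts => "https://example.com:443"
  # Assumption for illustration: create one index per day, for example "apiconnect-2024.06.01".
  index => "apiconnect-%{+YYYY.MM.dd}"
  # To store data by provider organization instead, a field reference can be used:
  # index => "apiconnect-%{[org_id]}"
  ssl => true
  cacert => "/etc/velox/external_certs/offload/cacert"
}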
For more information about offloading to Elasticsearch, refer to the Elasticsearch output plugin documentation.
Apache Kafka
To configure data offload to a Kafka cluster, provide host and port connection details for one or more servers, and the name of the Kafka topic. To be able to identify API Connect data, include the string identifier id.
kafka {
  topic_id => "api_events"
  bootstrap_servers => "example.com:9093"
  codec => "json"
  id => "apic_kafka_offload"
  ssl_truststore_location => "/usr/share/logstash/jdk/lib/security/cacerts"
  ssl_truststore_password => "changeit"
  ssl_truststore_type => "JKS"
  security_protocol => SSL
}
kafka {
  topic_id => "api_events"
  bootstrap_servers => "example.com:9093"
  id => "kafka_offload"
  kerberos_config => {
    keytab => "<keytab_file>"
    principal => "<kerberos_principal>"
    service_name => "kafka"
  }
}
Each field label represents the following:
- kerberos_config: Contains the Kerberos authentication settings
- keytab: Path to the Kerberos keytab file
- principal: The Kerberos principal name used for authentication
- service_name: Service name for Kerberos
For more information on offloading to Apache Kafka, refer to the Kafka output plugin documentation.
Syslog
To configure data offload to Syslog, you must provide host and port connection details for the server.
syslog {
  host => "example.com"
  port => 601
  protocol => "ssl-tcp"
  id => "offload_syslog"
  appname => "apiconnect"
  msgid => "%{org_id}"
  facility => "user-level"
  severity => "informational"
  codec => "json"
  ssl_cacert => "/etc/velox/external_certs/offload/cacert"
}
For more information about offloading to Syslog, refer to the Syslog output plugin documentation.
OpenSearch
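Offloading to OpenSearch follows the same pattern as offloading to Elasticsearch. The following is a minimal sketch only, assuming that the logstash-output-opensearch plugin is available; the host URL, index name, and certificate path are placeholder assumptions rather than values from a supplied sample:
opensearch {
  hosts => "https://example.com:443"
  index => "apiconnect"
  ssl => true
  cacert => "/etc/velox/external_certs/offload/cacert"
}
For more information about offloading to OpenSearch, refer to the OpenSearch output plugin documentation.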
Noname
To configure data offload to Noname, see the Noname documentation.
http {
  url => "https://<ENGINE_URL>/engine?structure=ibm-apiconnect"
  http_method => "post"
  codec => "json"
  content_type => "application/json"
  headers => ["x-nn-source-index", "<INDEX>", "x-nn-source-key", "<KEY>"]
  id => "noname-offload"
  ssl_certificate_authorities => "/etc/velox/external_certs/offload/cacert"
}
Combining offload configuration with filters
The following examples show how to use filters to control what data is offloaded.
Offload data from catalog1 only:
offload:
  enabled: true
  output: |
    if [catalog_name] == "catalog1" {
      http {
        url => "http://offloadhost.example.com/offload"
        http_method => "post"
        codec => "json"
        content_type => "application/json"
        id => "offload_http"
      }
    }
Offload data from catalog1 and catalog2 to different destinations:
offload:
  enabled: true
  output: |
    if [catalog_name] == "catalog1" {
      http {
        url => "http://offloadhost.example.com/offload1"
        http_method => "post"
        codec => "json"
        content_type => "application/json"
        id => "offload1_http"
      }
    }
    else if [catalog_name] == "catalog2" {
      http {
        url => "http://offloadhost2.example.com/offload2"
        http_method => "post"
        codec => "json"
        content_type => "application/json"
        id => "offload2_http"
      }
    }
Offload data from each catalog to a URL that is based on the catalog name:
offload:
  enabled: true
  output: |
    http {
      url => "http://offloadhost.example.com/%{[catalog_name]}"
      http_method => "post"
      codec => "json"
      content_type => "application/json"
      id => "offload_http"
    }
Offload data from all catalogs except the apic and sandbox catalogs:
offload:
  enabled: true
  output: |
    if [catalog_name] !~ /sandbox|apic/ {
      http {
        url => "http://offloadhost.example.com/%{[catalog_name]}"
        http_method => "post"
        codec => "json"
        content_type => "application/json"
        id => "offload_http"
      }
    }