Sample output plugins for offloading data

Review sample output plugins to see how to code your own plugins for offloading data from API Connect Analytics.

Configure output plugins that direct analytics data to your third-party systems. An output plugin specifies where analytics data is offloaded, plus any information required to access that system, such as certificate information. The output plugin for displaying analytics data in the Analytics user interface is configured by default.

Referencing a certificate in an output plugin

If your third-party system requires a private or self-signed certificate, you can include a reference to the certificate in your output plugin. See the second HTTP output plugin for an example showing how to reference the certificate.

Important: Analytics must be deployed before you attempt to use the certificate. If you want to configure an output plugin that references a certificate, you must install the Analytics subsystem first, then create a secret and apply it to the Analytics deployment, configure the output plugin, and re-install Analytics. For instructions on the correct procedure for applying a certificate to the Analytics deployment, see Providing a custom certificate for Analytics offload.
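The certificate is made available to the Analytics deployment through a Kubernetes secret. As an illustrative sketch only (the secret name here is an assumption; follow the linked procedure for the exact supported steps), such a secret might be defined as follows:

apiVersion: v1
kind: Secret
metadata:
  name: offload-certificates   # hypothetical name; use the name expected by your deployment
type: Opaque
stringData:
  cacert.pem: |
    -----BEGIN CERTIFICATE-----
    ...certificate content...
    -----END CERTIFICATE-----

The file names in the secret correspond to the certificate paths that the output plugin configuration references.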

HTTP

To configure data offload to an HTTP server, you must provide the server URL. You can optionally add standardized or custom HTTP headers to define additional operating parameters.

The following example configures an HTTP output plugin.

http {
   url => "https://example.com"
   http_method => "post"
   codec => "json"
   content_type => "application/json"
   headers => {
      "x-acme-index" => "8"
      "x-acme-key" => "0d5d259f-b8c5-4398-9e58-77b05be67037"
   }
   id => "offload_http"
}

The following example configures an HTTP output plugin that references a custom certificate for accessing the third-party endpoint.

http {
   url => "https://example.com:443"
   http_method => "post"
   codec => "json"
   content_type => "application/json"
   headers => {
      "x-acme-index" => "8"
      "x-acme-key" => "0d5d259f-b8c5-4398-9e58-77b05be67037"
   }
   id => "offload_http"
   cacert => "/etc/velox/external_certs/offload/cacert.pem"
   client_cert => "/etc/velox/external_certs/offload/client.pem"
   client_key => "/etc/velox/external_certs/offload/client.key"
}

For more information on offloading to HTTP, refer to the Logstash documentation on configuring the HTTP output plugin.

Elasticsearch

To configure data offload to an Elasticsearch cluster, you must provide one or more server URLs, define how indices are created by specifying a dynamically configured index name, specify the number of primary shards for storing the indexed documents, and specify the number of replica shards for redundancy. You can optionally specify server authentication credentials.

The naming syntax that you specify for your indices determines how many indices are created and how they are characterized. For example, you can define indices that are created based on a date pattern (for example, daily or weekly), or indices that are created to store data by provider organization or by API name.
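For instance, a daily index can be defined by including a Logstash date format string in the index name (the "apiconnect-" prefix here is illustrative, not a required value):

      index => "apiconnect-%{+YYYY.MM.dd}"

With this setting, events are written to a new index each day, such as apiconnect-2021.06.01.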

The following example configures an Elasticsearch output plugin.

elasticsearch {
      hosts => "https://example.com:443"
      index => "apiconnect"
      ssl => true
      cacert => "/etc/pki/tls/cert.pem"
}

For more information on offloading to Elasticsearch, refer to the Elasticsearch documentation on the Elasticsearch output plugin.

Apache Kafka

To configure data offload to a Kafka cluster, you must provide host and port connection details for one or more servers, and the name of the Kafka topic to which you want to publish the offloaded data. For logging and monitoring purposes within Kafka, you can optionally specify a string identifier by which API Connect can be uniquely identified.

The following example configures a Kafka output plugin.

kafka {
      topic_id => "test"
      bootstrap_servers => "example.com:9093"
      codec => "json"
      id => "kafka_offload"
      ssl_truststore_location => "/usr/lib/jvm/jre/lib/security/cacerts"
      ssl_truststore_password => "changeit"
      ssl_truststore_type => "JKS"
      security_protocol => "SSL"
}

For more information on offloading to Apache Kafka, refer to the Logstash documentation on the Kafka output plugin.

Syslog

To configure data offload to Syslog, you must provide host and port connection details for the server.

Restriction: Syslog does not support the use of chained client certificates for TLS profiles.

The following example configures a Syslog output plugin.

syslog {
  host => "example.com"
  port => 601
  protocol => "ssl-tcp"
  id => "offload_syslog"
  appname => "apiconnect"
  msgid => "%{org_id}"
  facility => "log audit"
  severity => "informational"
  codec => "json"
  ssl_cacert => "/etc/pki/tls/cert.pem"
}

Attention: Due to a third-party issue with Logstash, most of the facility values are not set correctly on the offload server. As a temporary workaround, set the facility to "user-level" to ensure that the corresponding value is correctly sent to the offload server. When the issue is resolved in Logstash, you must update the offload configuration to change the value to a more appropriate setting.

For more information on offloading to Syslog, refer to the Logstash documentation on the Syslog output plugin.

OpenSearch

To configure data offload to an OpenSearch cluster, refer to the OpenSearch documentation on shipping data to OpenSearch with Logstash: https://opensearch.org/docs/latest/clients/logstash/ship-to-opensearch/
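If you use the logstash-output-opensearch plugin, a minimal OpenSearch output configuration might look like the following sketch. The host, index name, credentials, and certificate path are placeholders, not required values.

opensearch {
      hosts => ["https://example.com:9200"]
      index => "apiconnect"
      user => "logstash_user"
      password => "changeme"
      ssl_certificate_verification => true
      cacert => "/etc/pki/tls/cert.pem"
}

For the full list of supported options, refer to the OpenSearch documentation linked above.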