Sample output plugins for offloading data

Review sample output plugins to see how to code your own plugins for offloading data from API Connect analytics.

For complete details of output plugin syntax, see the Logstash output plugins documentation.

HTTP

Example HTTP output plugin:
http {
   url => "http://example.com"
   http_method => "post"
   format => "json"
   content_type => "application/json"
   headers => {
      "x-acme-index" => "8"
      "x-acme-key" => "0d5d259f-b8c5-4398-9e58-77b05be67037"
   }
   id => "offload_http"
}
Example HTTP output plugin that references a custom certificate for accessing the third-party endpoint:
http {
   url => "https://example.com:443"
   http_method => "post"
   format => "json"
   content_type => "application/json"
   headers => {
      "x-acme-index" => "8"
      "x-acme-key" => "0d5d259f-b8c5-4398-9e58-77b05be67037"
   }
   id => "offload_http"
   ssl_certificate_authorities => "/etc/velox/external_certs/offload/cacert"
}
Note: Ensure that the endpoint that receives the offload data can handle raw JSON data.

For more information about offloading to HTTP, refer to the HTTP output plugin documentation.

Elasticsearch

To configure data offload to an Elasticsearch cluster, you must specify the following properties:
  • One or more Elasticsearch server URLs.
  • A dynamically configured name to define how indices are created.
  • The number of primary shards for storing the indexed documents.
  • The number of replica shards for redundancy.
  • Server authentication credentials (if required).

The naming syntax that you specify for your indices determines how many indices are created and how they are characterized. For example, you can define indices that are created based on a date pattern, or indices that are created to store data by provider organization or by API name.

Example Elasticsearch output plugin:
elasticsearch {
      hosts => "https://example.com:443"
      index => "apiconnect"
      ssl => true
      cacert => "/etc/velox/external_certs/offload/cacert"
}
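
To create indices based on a date pattern or per provider organization, as described earlier, you can include a Logstash field reference and a date pattern in the index name. The following is a minimal sketch; the org_id field is the same field that is referenced elsewhere in this topic, and the host and certificate path are illustrative:
elasticsearch {
      hosts => "https://example.com:443"
      # creates one index per provider organization per day, for example apiconnect-myorg-2024.06.01
      index => "apiconnect-%{[org_id]}-%{+YYYY.MM.dd}"
      ssl => true
      cacert => "/etc/velox/external_certs/offload/cacert"
}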

For more information about offloading to Elasticsearch, refer to the Elasticsearch output plugin documentation.

Apache Kafka

To configure data offload to a Kafka cluster, provide host and port connection details for one or more servers, and the name of the Kafka topic. So that you can identify API Connect data, include a string identifier in the id setting.

Example Kafka output plugin:
kafka {
      topic_id => "api_events"
      bootstrap_servers => "example.com:9093"
      codec => "json"
      id => "apic_kafka_offload"
      ssl_truststore_location => "/usr/share/logstash/jdk/lib/security/cacerts"
      ssl_truststore_password => "changeit"
      ssl_truststore_type => "JKS"
      security_protocol => "SSL"
}
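The truststore settings in this example reference the default JDK truststore. If your Kafka brokers present certificates from a private CA, one approach is to import the broker CA certificate into a dedicated truststore and point ssl_truststore_location at it. A minimal sketch, in which the broker-ca.pem file name, alias, and truststore name are illustrative:
keytool -importcert -noprompt -alias kafka-broker-ca \
  -file broker-ca.pem -keystore kafka-truststore.jks -storepass changeit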
You can configure Kafka with an external certificate and a keytab file. The following steps show how to set up Kafka offload with Kerberos authentication to secure data transmission to the Kafka topic.
  1. Encode the keytab file:
    cat keystore.keytab | base64
  2. Create a file, for example offload_keytab.yaml, that defines a secret to contain the keytab file.
    apiVersion: v1
    kind: Secret
    metadata:
      name: offload-keytab
    data:
      keystore.keytab: "<base64_encoded_keytab>"
  3. Replace <base64_encoded_keytab> with the base64-encoded keytab content from step 1.
  4. Save the file.
  5. Apply the secret.
    kubectl apply -f offload_keytab.yaml -n <namespace> 
  6. Edit the Analytics CR and add a reference to the secret in the offload section, as shown in the following sample configuration.
    offload:
      enabled: true
      output: |
        kafka {
          topic_id => "test"
          bootstrap_servers => "example.com:9093"
          id => "kafka_offload"
          kerberos_config => {
            keytab => "/etc/velox/external_certs/offload-keytab/keystore.keytab"
            principal => "<kerberos_principal>"
            service_name => "kafka"
          }
        }
      secretName: offload-keytab
    Each field label represents the following:
    • topic_id: The ID of the Kafka topic where data will be sent
    • bootstrap_servers: The host and port of the Kafka broker
    • id: A unique ID for the Kafka output configuration
    • kerberos_config: Contains the Kerberos authentication settings
    • keytab: Path to the Kerberos keytab file
    • principal: The Kerberos principal name used for authentication
    • service_name: Service name for Kerberos

For more information about offloading to Apache Kafka, refer to the Kafka output plugin documentation.

Syslog

To configure data offload to Syslog, you must provide host and port connection details for the server.

Restriction: Syslog does not support the use of chained client certificates for TLS profiles.
Example Syslog output plugin:
syslog {
  host => "example.com"
  port => 601
  protocol => "ssl-tcp"
  id => "offload_syslog"
  appname => "apiconnect"
  msgid => "%{org_id}"
  facility => "user-level"
  severity => "informational"
  codec => "json"
  ssl_cacert => "/etc/velox/external_certs/offload/cacert"
}
Attention: Due to a third-party issue with Logstash, most of the facility values are not set correctly on the offload server. As a temporary workaround, set the facility to "user-level" to ensure that the corresponding value is correctly sent to the offload server. When Logstash corrects this problem, you must update the offload configuration to change the value to a more appropriate setting.

For more information about offloading to Syslog, refer to the Syslog output plugin documentation.

OpenSearch

To configure data offload to OpenSearch, see the Ship to OpenSearch documentation.
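
For reference, the following is a minimal sketch of an OpenSearch output that assumes the logstash-output-opensearch plugin, whose core settings mirror the Elasticsearch output shown earlier; the host, index name, and certificate path are illustrative:
opensearch {
      hosts => "https://example.com:443"
      index => "apiconnect"
      ssl => true
      cacert => "/etc/velox/external_certs/offload/cacert"
}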

Noname

To configure data offload to Noname, see the Noname documentation.

Example Noname offload plugin:
http {
  url => "https://<ENGINE_URL>/engine?structure=ibm-apiconnect"
  http_method => "post"
  codec => "json"
  content_type => "application/json"
  headers => ["x-nn-source-index", "<INDEX>", "x-nn-source-key", "<KEY>"]
  id => "noname-offload"
  ssl_certificate_authorities => "/etc/velox/external_certs/offload/cacert"
}

Combining offload configuration with filters

The following examples show how to use filters to control what data is offloaded.

Offload events from catalog1 only:
offload:
  enabled: true
  output: |
    if [catalog_name] == "catalog1" {
      http {
        url => "http://offloadhost.example.com/offload"
        http_method => "post"
        codec => "json"
        content_type => "application/json"
        id => "offload_http"
      }
    }
Offload to different locations based on catalog name:
offload:
  enabled: true
  output: |
    if [catalog_name] == "catalog1" {
      http {
        url => "http://offloadhost.example.com/offload1"
        http_method => "post"
        codec => "json"
        content_type => "application/json"
        id => "offload1_http"
      }
    }
    else if [catalog_name] == "catalog2" {
      http {
        url => "http://offloadhost2.example.com/offload2"
        http_method => "post"
        codec => "json"
        content_type => "application/json"
        id => "offload2_http"
      }
    }
Use the catalog name as a variable in the offload location:
offload:
  enabled: true
  output: |
    http {
      url => "http://offloadhost.example.com/%{[catalog_name]}"
      http_method => "post"
      codec => "json"
      content_type => "application/json"
      id => "offload_http"
    }

Do not offload events from apic and sandbox catalogs:
offload:
  enabled: true
  output: |
    if [catalog_name] !~ /sandbox|apic/ {
      http {
        url => "http://offloadhost.example.com/%{[catalog_name]}"
        http_method => "post"
        codec => "json"
        content_type => "application/json"
        id => "offload_http"
      }
    }