Configuring output plugins for analytics offload

In API Connect version 2018.3.7 and later, you configure output plugins for third-party systems by editing the outputs.yml and offload_output.conf files. In earlier versions, you configure output plugins in the logstash.conf file to offload the analytics data for API Connect. An output plugin points to one of the following target systems: HTTP, Elasticsearch, Kafka, or Syslog servers. The output plugin that displays analytics data in the API Manager is included by default.

About this task

Important: After you upgrade from a version of API Connect that is earlier than 2018.3.7 to version 2018.3.7 or later, you must reconfigure any customizations that you made in your logstash.conf file. Beginning with version 2018.3.7, the sections of the logstash.conf file are separated into smaller files to make them easier to use. Use the following list to recreate the customizations; each entry gives the location of the customization within the logstash.conf file, the type of customization, and the equivalent customization with the new configuration files:
  • input, any customizations: This was unsupported in previous versions and remains unsupported.
  • filter, enable geoip: In the install/upgrade section of the extra_values.yaml file, set the apic-analytics-ingestion.geoIpEnabled value to true.
  • filter, additions to the pipeline (pre-apiconnect changes): Add these to the 49_apic_filter.conf file.
  • filter, additions to the pipeline (post-apiconnect changes): Add these to the 69_apic_filter.conf file.
  • output, disable the API Connect native analytics storage: Set apic_output_enabled to false in the outputs.yml file, or set apic-analytics-ingestion.outputApicEnabled to false in the install/upgrade section of the extra_values.yaml file.
  • output, add offload configuration: Complete one of the following procedures:
      • Set offload_output_enabled to true in the outputs.yml file.
      • In the install/upgrade section of the extra_values.yaml file, set the value of apic-analytics-ingestion.outputOffloadEnabled to true.
    After you complete either of these steps, you must add your offload configuration to the offload_output.conf file.
If you need to review the previous changes to your logstash.conf file, they are stored in the analytics-ingestion-pipeline configuration map in the Kubernetes instance.
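For reference, the two outputs.yml toggles described above can be sketched as follows. This is a minimal illustration of the key names used in this task; the surrounding structure of the outputs.yml file in your deployment may differ, so verify against the actual file in the editor.

```yaml
# Sketch of the outputs.yml toggles referenced in this task (illustrative;
# check the actual file layout in your deployment).
apic_output_enabled: false      # disable API Connect native analytics storage
offload_output_enabled: true    # enable the offload_output.conf pipeline
```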

One of the following roles is required to configure the output plugins for analytics offload:

  • Administrator
  • Topology Administrator
  • Owner
  • A custom role with the topology:manage permission

Output plugins control the offload of analytics data to the API Manager and to third-party systems. The data is offloaded to the output plugins that are configured in the offload_output.conf file.

If you are using SSL to secure the communications between API Connect and your offload endpoint, you must include a path to a file that contains a list of trusted certificates. Default trust bundles are available in both KeyStore (JKS) and PEM formats:

Note: The example uses default certificates. If you want to provide private or self-signed certificates instead, configure them as explained in Providing a custom certificate for analytics offload and then reference them in your logstash configuration.
Keystore: `/usr/lib/jvm/jre/lib/security/cacerts`
passphrase: `changeit`
type: `JKS`
PEM:
cacert: `/etc/pki/tls/cert.pem`
Important: When you configure the offload of analytics data, or enable or disable access to analytics data in API Connect, the configuration is applied across all provider and consumer organizations in the cloud infrastructure.

Procedure

Complete the following steps to configure analytics offload for your cloud:

  1. In the Cloud Manager, click Topology.
  2. Locate the Analytics service that is associated with the Gateway service, then click the title to open the Edit Analytics Service page.
  3. Click the Advanced Analytics Configuration link to load Kibana.
  4. In Kibana, click Ingestion in the API Connect section to open the configuration files selection page in the editor.
  5. Open the outputs.yml file by selecting its tab.
  6. Change the value of offload_output_enabled to true.
  7. Save the values.
  8. Open the offload_output.conf file by selecting its tab.
  9. Add your new offload configuration within the if statement. The following content is an example of this file:
    output {
      if "apicapievent" in [tags] {
        kafka {
          topic_id => "test"
          bootstrap_servers => "example.com:9093"
          codec => "json"
          id => "kafka_offload"
          ssl_truststore_location => "/usr/share/logstash/jdk/lib/security/cacerts"
          ssl_truststore_password => "changeit"
          ssl_truststore_type => "JKS"
          security_protocol => "SSL"
        }
      }
    }
    
    Note: To disable storing the data in API Manager, set apic_output_enabled in the outputs.yml file to false, as shown in: Disabling access to analytics event data in API Connect.
  10. The following examples show how to configure output plugins in offload_output.conf for each supported offload type. These examples use SSL with a public SSL certificate that is mounted at the endpoint. For a complete list of configuration options, refer to the Logstash documentation.
    HTTP
    The following example configures an HTTP output plugin. For more options, refer to the Logstash documentation: Http output plugin.
    http {
       url => "example.com"
       http_method => "post"
       codec => "json"
       content_type => "application/json"
       id => "offload_http"
       truststore => "/usr/lib/jvm/jre/lib/security/cacerts"
       truststore_password => "changeit"
       truststore_type => "JKS"
    }

    The following example configures an HTTP output plugin using a custom certificate. For more information on using a custom certificate for your offload endpoint, see Providing a custom certificate for analytics offload.

    http {
       url => "https://example.com:443"
       http_method => "post"
       codec => "json"
       content_type => "application/json"
       id => "offload_http"
       cacert => "${OFFLOAD_CERTS_DIR}/cacert.pem"
       client_cert => "${OFFLOAD_CERTS_DIR}/client.pem"
       client_key => "${OFFLOAD_CERTS_DIR}/client.key"
    }
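Before pointing the HTTP output plugin at a production endpoint, it can help to verify the offload against a local test receiver. The following is a hypothetical sketch of such a receiver in Python; it is not part of API Connect or Logstash, and the port and handler names are arbitrary choices for testing only.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Events captured by the test endpoint, in arrival order.
received = []

class OffloadHandler(BaseHTTPRequestHandler):
    """Accepts JSON POST bodies, as the http output plugin would send them."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        received.append(json.loads(body))  # store the decoded event
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request console logging

def serve(port):
    """Start the receiver on 127.0.0.1 in a background thread."""
    server = HTTPServer(("127.0.0.1", port), OffloadHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

To smoke-test, set the plugin's url to the receiver's address (for example, http://127.0.0.1:8099) and confirm that posted events appear in `received`.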
    
    Elasticsearch
    The following example configures an Elasticsearch output plugin. For more options, refer to the Logstash documentation: Elasticsearch output plugin.
    elasticsearch {
          hosts => "https://example.com:443"
          index => "apiconnect"
          ssl  => true
          cacert => "/etc/pki/tls/cert.pem"
    }
    Kafka
    The following example configures a Kafka output plugin. For more options, refer to the Logstash documentation: Kafka output plugin.
    kafka {
          topic_id => "test"
          bootstrap_servers => "example.com:9093"
          codec => "json"
          id => "kafka_offload"
          ssl_truststore_location => "/usr/share/logstash/jdk/lib/security/cacerts"
          ssl_truststore_password => "changeit"
          ssl_truststore_type => "JKS"
          security_protocol => "SSL"
    }
    Syslog
    The following example configures a Syslog output plugin. For more options, refer to the Logstash documentation: Syslog output plugin.
    Restriction: Syslog does not support the use of chained client certificates for TLS profiles.
    syslog {
      host => "example.com"
      port => 601
      protocol => "ssl-tcp"
      id => "offload_syslog"
      appname => "apiconnect"
      msgid => "%{org_id}"
      facility => "log audit"
      severity => "informational"
      codec => "json"
      ssl_cacert => "/etc/pki/tls/cert.pem"
    }
  11. Click Save.
    Important: It might take a few minutes for the new configuration to be applied to all your pods. Check the logs on the pods to ensure that there are no Logstash errors.
  12. If you want to include the client_geoip and gateway_geoip fields in your analytics data, configure your Kubernetes ingresses/cluster to include the X-Forwarded-For header in the data that is collected by the DataPower gateway and passed to API Connect analytics.

    This step is required only if you want to include those fields in your analytics data. For information on the configuration task, see the Kubernetes documentation for your ingress controller.
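As one illustration, if your cluster uses the community ingress-nginx controller, X-Forwarded-For handling is typically enabled through the controller's ConfigMap. The resource name and namespace below are illustrative and depend on how the controller was installed in your cluster.

```yaml
# Illustrative only: enabling X-Forwarded-For handling on the community
# ingress-nginx controller. The ConfigMap name and namespace depend on
# your controller installation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"   # trust and pass on X-Forwarded-For
```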

What to do next

Log in to the target system and verify that you can see the data stream for API Connect events.
Attention: The third-party endpoint must be available at all times to prevent the loss of analytics data.