Connector deployment example

Assume that you want to deploy a connector to Elasticsearch so that the service can process the exported data. For example, you might use Kibana with Elasticsearch to display data dashboards.

Before you deploy the connector, complete the following prerequisites:

  • Deploy Elasticsearch to a VM on the network where you are running Turbonomic. The Elasticsearch host must be visible from the Turbonomic Kubernetes node. You specify this host address in the connector deployment.

  • Set up an Elasticsearch index to load the Turbonomic data. You specify this index in the connector deployment. One way to create the index is sketched after this list.
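
If the index does not already exist, one way to create it from inside the cluster is a one-shot Kubernetes Job that calls the Elasticsearch create-index API. This is a minimal sketch rather than part of the connector: the Job name create-es-index and the curlimages/curl image are illustrative choices, and the bracketed values are placeholders for your own host, credentials, and index name.

apiVersion: batch/v1
kind: Job
metadata:
  name: create-es-index
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: create-index
        image: curlimages/curl:8.5.0
        # A PUT request with no body creates the index with default settings
        args:
          - "-u"
          - "<MyElasticsearchUser>:<MyElasticsearchPassword>"
          - "-X"
          - "PUT"
          - "<UrlToMyElasticsearchHost>/<MyElasticsearchIndex>"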

The following listing is a deployment that uses a Logstash image to collect the extractor data and pipe it to the Elasticsearch host. The deployment also sets up storage volumes, configures the input from the extractor, and configures output to the Elasticsearch instance.

As you go over the listing, pay attention to the following details:

  • The location of the Elasticsearch host and the login credentials:

    ...
      env:
        - name: ES_HOSTS
          value: "<UrlToMyElasticsearchHost>"
        - name: ES_USER
          value: "<MyElasticsearchUser>"
        - name: ES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: <MyES_KeyName>
              key: <MyES_Key>
    ...

    Logstash uses the following environment variables:

    • ES_HOSTS: to identify where to pipe the exported data.

    • ES_USER: to identify the user account on Elasticsearch.

    • ES_PASSWORD: for the account login. This connector example assumes that you store the Elasticsearch password as a Kubernetes Secret; a sample Secret manifest follows this list.

  • The name of the Kafka topic:

    ...
      input {
        kafka {
          topics => ["turbonomic.exporter"]
          ...
        }
      }
    ...

    The Logstash input configuration expects a single topic named turbonomic.exporter.

  • The Logstash output configuration, which sends data to the Elasticsearch server identified by the ES_HOSTS environment variable. Specify your own Elasticsearch index in place of <MyElasticsearchIndex>:

    ...
      output {
        elasticsearch {
          index => "<MyElasticsearchIndex>"
          hosts => [ "${ES_HOSTS}" ]
        }
      }
    ...
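
Because the deployment reads the Elasticsearch password through a secretKeyRef, the Secret must exist before the connector pod starts. The following is a minimal sketch of such a Secret; <MyES_KeyName>, <MyES_Key>, and <MyElasticsearchPassword> are placeholders for your own values, and the name and key must match the secretKeyRef entries in the deployment.

apiVersion: v1
kind: Secret
metadata:
  name: <MyES_KeyName>
type: Opaque
stringData:
  # Logstash reads this value through the ES_PASSWORD environment variable
  <MyES_Key>: <MyElasticsearchPassword>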

Sample listing: Elasticsearch connector

This listing is a sample deployment file that creates an Elasticsearch connector for the Data Exporter. You must change some settings, such as the username and password. You might also need to specify ports and other settings so that the connector complies with your specific environment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-kafka-connect
  labels:
    app.kubernetes.io/name: elasticsearch-kafka-connect
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: elasticsearch-kafka-connect
  template:
    metadata:
      labels:
        app.kubernetes.io/name: elasticsearch-kafka-connect
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.10.1
        ports:
          - containerPort: 25826
        env:
          - name: ES_HOSTS
            value: "<UrlToMyElasticsearchHost>"
          - name: ES_USER
            value: "<MyElasticsearchUser>"
          - name: ES_PASSWORD
            valueFrom:
              secretKeyRef:
                name: <MyES_KeyName>
                key: <MyES_Key>
        resources:
          limits:
            memory: 4Gi
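        # Mount the Logstash settings (logstash.yml) and pipeline (logstash.conf) from the ConfigMap that follows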
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/logstash/config
          - name: logstash-pipeline-volume
            mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.yml
              path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.conf
              path: logstash.conf
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      kafka {
        topics => ["turbonomic.exporter"]
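        # Kafka address; this sample assumes the Turbonomic kafka service is reachable at kafka:9092, for example from within the same namespace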
        bootstrap_servers => "kafka:9092"
        client_id => "logstash"
        group_id => "logstash"
        codec => "json"
        type => "json"
        session_timeout_ms => "60000"   # Consider the consumer dead and rebalance after 60 seconds
        request_timeout_ms => "70000"   # Resend the request after 70 seconds
      }
    }
    filter {
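      # No filters; the exported JSON records pass through to Elasticsearch unchanged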
    }
    output {
      elasticsearch {
        index => "<MyElasticsearchIndex>"
        hosts => [ "${ES_HOSTS}" ]
        user => "${ES_USER}"
        password => "${ES_PASSWORD}"
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: elasticsearch-kafka-connect
  name: elasticsearch-kafka-connect
spec:
  ports:
    - name: "25826"
      port: 25826
      targetPort: 25826
  selector:
    app.kubernetes.io/name: elasticsearch-kafka-connect