(Optional) Enabling the Data Exporter

To support Data Export, Turbonomic provides an extractor component that streams data in a standard format. You can load that data into search and analytics services such as Elasticsearch.

To enable the Data Exporter, you must:

  • Enable the extractor component.

    The extractor is a component that runs as part of your Turbonomic installation. The extractor is not enabled by default.

  • Deploy a connector that delivers the extractor's stream to your data service.

    The extractor publishes Turbonomic data as Kafka topics. The connector enables your data service to consume the data topic. This document includes a deployment file for a sample Elasticsearch connector.

Enabling the Extractor component

The first step to enabling the Data Exporter is to enable the extractor component.

Note:

If you have enabled embedded reporting, then the extractor component is already enabled (set to true).

It is possible to enable the Data Exporter without enabling embedded reports, just as it is possible to enable embedded reports without enabling the Data Exporter.

  1. Open an SSH terminal session to your Turbonomic instance.

    Log in with the System Administrator account that you set up when you installed Turbonomic:

    • Username: turbo

    • Password: [your_private_password]
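
    For example, assuming your instance is reachable at <YourTurbonomicInstanceAddress> (a placeholder for your own host name or IP address):

    ssh turbo@<YourTurbonomicInstanceAddress>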

  2. Open the cr.yaml file to enable the extractor component.

    vi /opt/turbonomic/kubernetes/operator/deploy/crds/charts_v1alpha1_xl_cr.yaml
  3. Edit the entry for the extractor component.

    Search for the extractor entry in the cr.yaml file.

    extractor:
        enabled: false

    Change the entry to true.
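
    After your change, the entry should look like this:

    extractor:
        enabled: true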

  4. Edit the entry for the extractor properties.

    Search for the properties: extractor entry in the cr.yaml file.

    properties:
        extractor:
          enableDataExtraction: false

    Change the entry to true.
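
    After your change, the entry should look like this:

    properties:
        extractor:
          enableDataExtraction: true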

  5. Save and apply your changes to the platform.

    kubectl apply -f /opt/turbonomic/kubernetes/operator/deploy/crds/charts_v1alpha1_xl_cr.yaml
  6. Verify that the extractor component is running.

    Give the platform enough time to restart the components. Then run the command:

    kubectl get pods -n turbonomic 

    You should see output similar to the following:

    NAME                                         READY   STATUS    RESTARTS 
    ...
    extractor-5f41dd61c4-4d6lq                   1/1     Running   0   
    ...

    Look for an entry for the extractor component. If the entry is present, then the extractor component is installed and running.
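
    If the extractor entry is missing or not in the Running state, you can inspect the component logs. As a sketch, assuming the extractor runs as a Deployment named extractor (the pod name above suggests this, but verify against your own installation):

    kubectl logs -n turbonomic deployment/extractor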

Deploying a connector

The extractor publishes Turbonomic data as Kafka topics. To load this data into a search and analysis service, you must deploy a connector to that service. For example, to load the data into Elasticsearch, you must deploy an Elasticsearch connector. You deploy the connector on the same Kubernetes node that runs the Turbonomic platform, as a Kubernetes Deployment that declares the pods you need for the connector.

To deploy the connector, create a deployment yaml file on the same host that is running the extractor component, and run the command:

kubectl create -f <MyConnectorDeployment.yaml>

Where <MyConnectorDeployment.yaml> is the name of the deployment file.
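
Note that the verification step below lists pods in the turbonomic namespace. If your kubectl context does not default to that namespace, pass it explicitly. For example, assuming a hypothetical file name of es-kafka-connect-deployment.yaml:

kubectl create -f es-kafka-connect-deployment.yaml -n turbonomic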

Assume that the name of the deployed pod is es-kafka-connect. To verify that the connector is running, run the command:

kubectl get pods -n turbonomic

The output is similar to the following example:

NAME                                                READY   STATUS    RESTARTS 
...
es-kafka-connect-5f41dd61c4-4d6lq                   1/1     Running   0   
...

After you deploy the connector, wait for a cycle of Turbonomic analysis (approximately ten minutes). Then you will see the entities and actions from your Turbonomic environment, loaded as JSON in your data service.
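
One way to confirm that data is arriving is to query the document count for your index through the Elasticsearch REST API. This is a sketch that assumes the placeholder host, user, and index names used elsewhere in this example:

curl -u <MyElasticsearchUser> "<UrlToMyElasticsearchHost>/<MyElasticsearchIndex>/_count?pretty"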

Connector deployment example

The following example deploys a connector to Elasticsearch, and uses Kibana with Elasticsearch to display data dashboards. This example assumes the following setup:

  • Elasticsearch is deployed to a VM on the network where you are running Turbonomic. The Elasticsearch host is visible from the Turbonomic Kubernetes node. You specify this host address in the connector deployment.

  • An Elasticsearch index is set up to load the Turbonomic data. You specify this index in the connector deployment.
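
If you have not already created the index, one minimal way to do so is through the Elasticsearch REST API. This sketch assumes the same placeholder host, user, and index names used in the deployment listing:

curl -u <MyElasticsearchUser> -X PUT "<UrlToMyElasticsearchHost>/<MyElasticsearchIndex>"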

The following listing is a deployment that uses a Logstash image to collect the extractor data and pipe it to the Elasticsearch host. The deployment also sets up storage volumes, configures the input from the extractor, and configures output to the Elasticsearch instance. As you review the listing, pay attention to the following:

  • The location of the Elasticsearch host and the login credentials.

    ...
            env:
              - name: ES_HOSTS
                value: "<UrlToMyElasticsearchHost>"
              - name: ES_USER
                value: "<MyElasticsearchUser>"
              - name: ES_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: <MyES_KeyName>
                    key: <MyES_Key>
    ...

    Logstash will use the following environment variables:

    • ES_HOSTS: to identify where to pipe the exported data.

    • ES_USER: to identify the user account on Elasticsearch.

    • ES_PASSWORD: for the account login. This connector example assumes that you have stored the Elasticsearch password as a Kubernetes Secret (see the sketch after this list for one way to create the Secret).

  • The name of the Kafka topic.

    ...
      logstash.conf: |
        input {
          kafka {
            topics => ["turbonomic.exporter"]
    ...

    The Logstash input configuration expects a single topic named turbonomic.exporter.

  • The Logstash output configuration is to the Elasticsearch server that is identified by the ES_HOSTS environment variable. You specify your own Elasticsearch index in place of <MyElasticsearchIndex>.

    ...
        output {
          elasticsearch {
            index => "<MyElasticsearchIndex>"
            hosts => [ "${ES_HOSTS}" ]
          }
        }
    ...
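
Since the connector reads the Elasticsearch password from a Kubernetes Secret, create that Secret before you deploy the connector. As a minimal sketch, using the placeholder Secret name and key from the listing and a hypothetical password value:

kubectl create secret generic <MyES_KeyName> \
  --from-literal=<MyES_Key>=<MyElasticsearchPassword> \
  -n turbonomic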

Sample Listing: Elasticsearch Connector

This listing is a sample deployment file that you can use to create an Elasticsearch connector for the Data Exporter. You must change some settings, such as the username and password, and you might need to specify ports and other settings to make the connector comply with your specific environment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch-kafka-connect
  labels:
    app.kubernetes.io/name: elasticsearch-kafka-connect
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: elasticsearch-kafka-connect
  template:
    metadata:
      labels:
        app.kubernetes.io/name: elasticsearch-kafka-connect
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.10.1
        ports:
          - containerPort: 25826
        env:
          - name: ES_HOSTS
            value: "<UrlToMyElasticsearchHost>"
          - name: ES_USER
            value: "<MyElasticsearchUser>"
          - name: ES_PASSWORD
            valueFrom:
              secretKeyRef:
                name: <MyES_KeyName>
                key: <MyES_Key>
        resources:
          limits:
            memory: 4Gi
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/logstash/config
          - name: logstash-pipeline-volume
            mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.yml
              path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.conf
              path: logstash.conf
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      kafka {
        topics => ["turbonomic.exporter"]
        bootstrap_servers => "kafka:9092"
        client_id => "logstash"
        group_id => "logstash"
        codec => "json"
        type => "json"
        session_timeout_ms => "60000"   # Consumer is considered dead and a rebalance triggers if no heartbeat within 60 seconds
        request_timeout_ms => "70000"   # Client resends the request if no response arrives within 70 seconds
      }
    }
    filter {
    }
    output {
      elasticsearch {
        index => "<MyElasticsearchIndex>"
        hosts => [ "${ES_HOSTS}" ]
        user => "${ES_USER}"
        password => "${ES_PASSWORD}"
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: elasticsearch-kafka-connect
  name: elasticsearch-kafka-connect
spec:
  ports:
    - name: "25826"
      port: 25826
      targetPort: 25826
  selector:
    app.kubernetes.io/name: elasticsearch-kafka-connect
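
After you create these resources, you can follow the Logstash logs to confirm that the pipeline started and is consuming from the Kafka topic. The deployment name below matches the sample listing:

kubectl logs -n turbonomic deployment/elasticsearch-kafka-connect -f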