IBM Cloud Kubernetes Service is a managed Kubernetes offering that delivers powerful management tools, an intuitive user experience, and built-in security and isolation, enabling rapid delivery of applications while leveraging IBM Cloud services such as cognitive capabilities from Watson. It provides native Kubernetes capabilities such as intelligent scheduling, self-healing, horizontal scaling, service discovery and load balancing, automated rollouts and rollbacks, and secret and configuration management. IBM is also adding capabilities to the Kubernetes Service, including simplified cluster management, container security and isolation choices, the ability to design your own cluster, the option to leverage other IBM Cloud services (such as Watson) for your cognitive applications, a completely native Kubernetes CLI and API, and integrated operational tools, with support for bringing your own tools to ensure operational consistency with other deployments.
About Splunk
I’m excited to partner with Jeff Wu from Splunk to bring this use case and tutorial to fruition. Splunk helps organizations ask questions, get answers, take actions, and achieve business outcomes from their data. Organizations use market-leading Splunk solutions with machine learning to monitor, investigate, and act on all forms of business, IT, security, and Internet of Things data.
Setting up a Kubernetes cluster in IBM Cloud
One of the value propositions of the IBM Cloud Kubernetes Service is simplifying the cluster creation process, whether you want to click through the UI or automate the deployment with your existing CI/CD tooling using our CLI/APIs. This tutorial walks you through creating your first cluster: “Tutorial: Creating Kubernetes clusters.”
Installing Splunk Connect for Kubernetes
In this tutorial, we will install Splunk Connect for Kubernetes into an existing Splunk instance. Splunk Connect for Kubernetes provides a way to import and search your Kubernetes logging, object, and metrics data in Splunk. These instructions are adapted from the README in the GitHub repository linked above, and you can find more information about the Splunk connector there.
Splunk Connect for Kubernetes deploys a DaemonSet, which runs a pod on each node; within each pod, a Fluentd container does the actual collection. Splunk Connect for Kubernetes collects three types of data:
Logs: Splunk Connect for Kubernetes collects two types of logs: logs from Kubernetes system components and application (container) logs.
Objects: Kubernetes object data, such as the pods, namespaces, services, and events configured in the values.yaml later in this tutorial.
Metrics: resource metrics collected from the cluster.
For Splunk Connect for Kubernetes, Splunk uses the node logging agent. See the Kubernetes Logging Architecture for an overview of the types of Kubernetes logs from which you may wish to collect data as well as information on how to set up those logs.
If you’re running Splunk Enterprise on a single node, you can enable HEC tokens through the web UI.
Go to Settings > Data Inputs > HTTP Event Collector. Click Global Settings at the top of the screen and make sure that tokens are enabled:
Next, go to New Token and create a token named splunk-connect-for-k8:
On the next screen, create three indexes for the Splunk Connect for Kubernetes app to write to: iks_logs, iks_meta, and iks_metrics. Make sure to select Metrics as the index type for the iks_metrics index.
Add these indexes to your HEC token.
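If you want to sanity-check the token before wiring up Kubernetes, you can send a test event to HEC with curl. The hostname, port, and token below are placeholders; substitute your own values:
curl -k https://your-splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": "hello from HEC", "index": "iks_logs"}'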
Setting up a Splunk clustered instance
For a Splunk clustered instance, you can also configure these settings directly on the Splunk instance within the .conf files.
Edit $SPLUNK_HOME/etc/master-apps/_cluster/local/inputs.conf on the cluster master and add the HEC token:
[http]
disabled = 0
[http://splunk-connect-for-k8s]
disabled = 0
token = 00000000-0000-0000-0000-000000000000
indexes = iks_meta,iks_logs,iks_metrics
Edit $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf on the Splunk cluster master and add the indexes:
# Splunk Connect for Kubernetes metadata index
[iks_meta]
homePath = $SPLUNK_DB/iks_meta/db
thawedPath = $SPLUNK_DB/iks_meta/thaweddb
# SmartStore-enabled indexes do not use coldPath, but you must still specify it here.
coldPath = $SPLUNK_DB/iks_meta/colddb
# Splunk Connect for Kubernetes logs index
[iks_logs]
homePath = $SPLUNK_DB/iks_logs/db
thawedPath = $SPLUNK_DB/iks_logs/thaweddb
# SmartStore-enabled indexes do not use coldPath, but you must still specify it here.
coldPath = $SPLUNK_DB/iks_logs/colddb
# Splunk Connect for Kubernetes metrics index
[iks_metrics]
homePath = $SPLUNK_DB/iks_metrics/db
thawedPath = $SPLUNK_DB/iks_metrics/thaweddb
# SmartStore-enabled indexes do not use coldPath, but you must still specify it here.
coldPath = $SPLUNK_DB/iks_metrics/colddb
datatype = metric
Then navigate to Settings > Indexer Clustering. Hit the edit button and select Cluster Bundle Actions:
Then, select Validate and Check Restart:
Now, select Push to send the configuration to the rest of the cluster.
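If you prefer the command line to the UI, the same validate-and-push flow can be run on the cluster master with the Splunk CLI. This is a sketch and assumes you run it as the Splunk user on the master:
$SPLUNK_HOME/bin/splunk validate cluster-bundle --check-restart
$SPLUNK_HOME/bin/splunk apply cluster-bundle
$SPLUNK_HOME/bin/splunk show cluster-bundle-status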
Installing Helm and Tiller
We’ll be using a Helm chart to install Splunk Connect for Kubernetes, so let’s get Helm installed if it isn’t already. To install Helm, take a look at this guide.
Let’s first create a namespace for Splunk if it doesn’t exist already; we’ll install our connector there:
kubectl create ns splunk
Now let’s get Tiller up and running in the cluster. First create a service account for Tiller. Create the following file in your current directory:
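A minimal sketch of that file is shown below; the file name tiller-rbac.yaml and the ClusterRoleBinding name are just assumptions, so adjust them to your own conventions:
# tiller-rbac.yaml -- hypothetical file name
# ServiceAccount for Tiller in the splunk namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: splunk
---
# Bind the Tiller service account to the built-in cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-splunk-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: splunk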
This will give Tiller cluster-admin rights and allow it to deploy apps to our cluster. Tiller permissions are something you’ll need to balance against security in real-world deployments, but this is fine for the scope of this demo. See the Helm docs for more!
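Assuming the file and service-account names from the sketch above, apply it and initialize Tiller in the splunk namespace:
kubectl apply -f tiller-rbac.yaml
helm init --service-account tiller --tiller-namespace splunk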
Helm Charts are configured with a values.yaml file.
To get started, you can create a new values.yaml file using the configuration below. Remember to replace the host, port, and token values with those of your own HEC endpoint, and change the indexNames if you’ve renamed them.
# Global settings
global:
  logLevel: info
  splunk:
    hec:
      protocol: https
      insecureSSL: true  # Change this depending on your certificates.
      host: hec  # Put the hostname of your HEC endpoint here.
      port: 8088
      token: 00000000-0000-0000-0000-000000000000

# Local config for logging chart
splunk-kubernetes-logging:
  journalLogPath: /run/log/journal
  splunk:
    hec:
      indexName: iks_logs

# Local config for objects chart
splunk-kubernetes-objects:
  rbac:
    create: true
  serviceAccount:
    create: true
    name: splunk-kubernetes-objects
  kubernetes:
    insecureSSL: true
  objects:
    core:
      v1:
        - name: pods
          interval: 30s
        - name: namespaces
          interval: 30s
        - name: nodes
          interval: 30s
        - name: services
          interval: 30s
        - name: config_maps
          interval: 30s
        - name: secrets
          interval: 30s
        - name: persistent_volumes
          interval: 30s
        - name: service_accounts
          interval: 30s
        - name: persistent_volume_claims
          interval: 30s
        - name: resource_quotas
          interval: 30s
        - name: component_statuses
          interval: 30s
        - name: events
          mode: watch
    apps:
      v1:
        - name: deployments
          interval: 30s
        - name: daemon_sets
          interval: 30s
        - name: replica_sets
          interval: 30s
        - name: stateful_sets
          interval: 30s
  splunk:
    hec:
      indexName: iks_meta

# Local config for metrics chart
splunk-kubernetes-metrics:
  rbac:
    create: true
  serviceAccount:
    create: true
    name: splunk-kubernetes-metrics
  splunk:
    hec:
      indexName: iks_metrics
With the values all set, let’s install the connector using Helm. To get the latest version of the connector, clone the GitHub repo:
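Assuming the standard upstream repository location, the clone looks like this:
git clone https://github.com/splunk/splunk-connect-for-kubernetes.git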
Run make to build the *.tgz files:
cd splunk-connect-for-kubernetes
make
Now we can install the connector using helm. If you’ve installed Tiller in a different namespace make sure to specify it here:
helm install --namespace splunk \
--tiller-namespace splunk \
--name splunk-connect-k8 \
-f values.yaml \
build/splunk-connect-for-kubernetes-1.0.1.tgz
Checking the installation
To validate that the installation is working, let’s check that the pods are up and running. If you’re not running Splunk itself in Kubernetes, you’ll only see the Splunk Connect for Kubernetes pods:
kubectl -n splunk get pods
You can also check the logs to see if there are any errors:
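For example, to tail one of the connector pods (the pod name below is a placeholder; use one of the names from the kubectl get pods output above):
kubectl -n splunk logs -f <splunk-connect-pod-name>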
If you need to delete or restart the installation for any reason, you can do so with this command:
helm del --purge splunk-connect-k8 --tiller-namespace splunk
Checking out your logs and metrics
The iks_logs and iks_meta indexes are event indexes. Search them and build dashboards from the data like you’re used to.
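For example, a quick sanity check that events are arriving (basic searches; adjust the time range as needed):
index=iks_logs | head 20
index=iks_meta | stats count by sourcetype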
To query the iks_metrics index, use the mstats command or download the Splunk Metrics Workspace (recommended) and install it on the search head. Once installed, access it via the Metrics tab in the Search & Reporting app:
On the left navigation pane, browse to Metrics > Kube and select some meaningful metrics by clicking on them. There are many metrics to choose from along with different aggregations and filters to apply. Use this view as an ad-hoc query tool or save the workspace as a dashboard. This is an extremely basic example for illustrative purposes and is not meant to be an in-depth tutorial on the Metrics Workspace app.
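If you’d rather query the metrics index directly with SPL instead of the Metrics Workspace, something like the following can serve as a starting point; the metric name shown is a hypothetical example, so use mcatalog first to see which metric names your cluster actually reports:
| mcatalog values(metric_name) WHERE index=iks_metrics
| mstats avg(_value) WHERE index=iks_metrics AND metric_name="kube.node.cpu.usage_rate" span=5m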
Awesome, now you’ve got Splunk Connect for Kubernetes installed in your cluster running on IBM Cloud Kubernetes Service!