Speed deployment on Kubernetes with Helm Chart – Quick YAML example from scratch

12 min read

By: Rick Osowski

Are you working with Kubernetes, whether through the recent supporting releases on IBM Cloud Private, the IBM Cloud Container Service on IBM Cloud Platform, or elsewhere? Are you in the middle of containerizing workloads across your portfolio? Have you adopted Kubernetes and are now looking to speed up deployment and reuse? Or are you simply looking to see how far this whole nautical theme is going to go?


Either way, understanding all the tools at your disposal is a critical step to success here. One of the most advantageous of those tools is Helm and its Helm Charts. Many projects took to packaging their releases in Docker containers immediately, but that’s only one step… the runtime. You still need to know how to connect the dots with all the supporting services, peer containers, and operational characteristics of the project itself. Helm Charts let you do just that, applying strict templates to Kubernetes configuration YAML files and providing the ability to build, package, distribute, and deploy complex containerized applications through simple helm CLI commands.

Think of singular containers as arithmetic; Helm and Helm Charts are more like applied calculus, allowing you to define robust and repeatable deployment templates for applications and services.

The only problem with something so powerful is learning the fundamentals so you can build your own building blocks. As usual, I returned to previous projects to explore and prove out what Helm Charts are really all about. One such project, built for the initial release of the IBM Cloud Container Service in 2015, is Let’s Chat for Bluemix. Let’s Chat is an open-source, self-hosted chat application for small teams (similar to Slack) that requires a MongoDB backend… so it’s perfect for some container orchestration exercises!

NOTE: This post is meant to be a learn-by-doing example and is in no way meant to be an exhaustive primer on Helm and Helm Charts. Once you have an understanding of how all the components work together, reading through The Chart Template Developer’s Guide will help you understand all the robustness and intricacies of Chart development.

Building component YAMLs by hand

I’m not going to cover the basics of Kubernetes here, so going forward I’ll assume knowledge of Pods, Deployments, and Services. Also, the YAMLs and Charts built here are not operationally perfect and are meant to be iteratively improved upon. This post simply covers the educational exercise of understanding the Chart structure, building some by hand, and deploying them into a Kubernetes cluster. They are publicly available on GitHub and as a Gist.

Looking at our Let’s Chat sample project, there are two major components – the Node.js application and the MongoDB backend. This naturally maps to two distinct deployments inside of Kubernetes – a Node.js deployment and a MongoDB deployment, each exposed internally to the cluster via a Service definition. I manually built the Kubernetes YAML below, naming it lets-chat-mongo.yaml, to deploy a simple MongoDB instance to my Kubernetes cluster.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: lets-chat-mongo
  labels:
    app: lets-chat
    tier: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: lets-chat
        tier: backend
    spec:
      containers:
      - name: lets-chat-mongo
        image: "mongo:3.5.11"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    tier: backend
spec:
  type: ClusterIP
  selector:
    app: lets-chat
    tier: backend
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
    name: lets-chat-mongo-port

Deploying this with the kubectl create -f lets-chat-mongo.yaml command will create the Deployment and expose it via a Service construct. You can verify the status of these deployments via kubectl get pods and kubectl get svc. Note the metadata.name of the Service, as this becomes the cluster-routable name of the service, as resolved by KubeDNS.
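
To make the KubeDNS point concrete, here is a small sketch (assuming the "default" namespace and the standard cluster domain) of the hostname that other pods can use to reach this Service:

```shell
# Sketch: the cluster-internal DNS name that KubeDNS derives from the
# Service's metadata.name (assuming the "default" namespace and the
# standard cluster domain).
SERVICE_NAME=mongo
NAMESPACE=default
MONGO_FQDN="${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local"
echo "$MONGO_FQDN"   # mongo.default.svc.cluster.local
```

Within the same namespace, the short name mongo resolves as well, which is exactly what the application YAML below relies on.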

Next, I created a similar YAML for the application code, named lets-chat-app.yaml, that connects to this MongoDB backend. Note the env section, which provides the default lookup value for the remote MongoDB backend in the form of an injected environment variable inside the running container. This is the default value expected by the Let’s Chat Node.js application runtime, but exposing it here allows the flexibility to point at any MongoDB service at deploy time. We will revisit this later when we create the Chart files.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: lets-chat-app
  labels:
    app: lets-chat
    tier: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: lets-chat
        tier: frontend
    spec:
      containers:
      - name: lets-chat-app
        image: "sdelements/lets-chat:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 8080
        env:
        - name: LCB_DATABASE_URI
          value: mongodb://mongo/letschat
---
apiVersion: v1
kind: Service
metadata:
  name: lets-chat-app
  labels:
    app: lets-chat
    tier: frontend
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: lets-chat-app-port
  selector:
    app: lets-chat
    tier: frontend

Again, this is deployed via the kubectl create -f lets-chat-app.yaml command. Once the pods are created and running, you should be able to access the application in a browser, either through the Kube Proxy or by going directly to the exposed NodePort endpoints. The application is available at http://node-hostname:8080 and should bring up a login page. Feel free to create an account or two and play around with it!
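
Because the Service above was named mongo, the connection string the application receives works out as follows (a shell sketch of the same string the YAML hard-codes):

```shell
# Sketch: how the LCB_DATABASE_URI value in lets-chat-app.yaml is
# composed from the Mongo Service name and the database name.
MONGO_SERVICE=mongo
DB_NAME=letschat
LCB_DATABASE_URI="mongodb://${MONGO_SERVICE}/${DB_NAME}"
echo "$LCB_DATABASE_URI"   # mongodb://mongo/letschat
```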

Depending on your environment (IBM Cloud Private, IBM Cloud Container Service, minikube, etc) you’ll need to determine how you can access your deployed applications via the NodePort-exposed service. Refer to your individual platform’s documentation to determine the correct host and port combination for opening Let’s Chat in your browser.

Introduction to our Charts

Now that the individual Kubernetes YAML files are defined, we need to create the Chart files and package them up into the Chart hierarchy. Similar to many other current web application frameworks and tools, the helm CLI provides the ability to scaffold an empty chart project, removing the need for you to build the entirety of the directory structure yourself. As such, running the two following commands will create the charts necessary for us to deploy our application as expected:

helm create mongo
helm create lets-chat
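
Each command scaffolds a directory like the following (the exact file list may vary slightly by Helm version; this reflects a Helm 2.x helm create):

```
mongo/
  Chart.yaml          # chart metadata
  values.yaml         # default template values
  charts/             # bundled dependency charts (empty for now)
  templates/
    _helpers.tpl      # shared template helpers (e.g. "fullname")
    deployment.yaml
    service.yaml
    ingress.yaml
    NOTES.txt         # usage notes printed after install
```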

In this blog post, I work with the files that are critical to get a chart up and running as quickly as possible. To get a more thorough understanding of all the elements of a Chart package, as well as templating and inheritance tips and tricks, you’ll want to read the Developing Charts documentation from the official Helm site.

Starting with the backend component first, I parameterized the deployment and service YAMLs that we previously deployed manually via kubectl with the necessary template-language components for an operable chart. The key files here are:

  • mongo/Chart.yaml

  • mongo/values.yaml

  • mongo/templates/deployment.yaml

  • mongo/templates/service.yaml

First, mongo/Chart.yaml defines the general structure of the Chart with all its associated metadata. Most of the lines in the Chart.yaml file are pretty self-explanatory and are reminiscent of most things you’d find in a Node module’s package.json:

apiVersion: v1
description: A Helm chart for Kubernetes that creates a PoC-level deployment of Mongo.
name: mongo
version: 0.1.0
sources:
  - https://hub.docker.com/r/library/mongo/
maintainers:
  - name: Rick Osowski
    email: rosowski@gmail.com

Next, mongo/values.yaml is a very powerful file in which you provide defaults for any templatized parameter you may want in your application deployment templates. The values.yaml file gives a lot of flexibility to both Chart authors and consumers, allowing for value scoping and inheritance inside packaged charts (much as you’re accustomed to in object-oriented programming models).

The key thing to remember about values.yaml is that if you want to externalize values in your templates, this file is the first place to put their defaults. Even if you don’t think a value should ever be changed, an end user may want to change it at some point… so save yourself some time and future GitHub issues by exposing the value here.

# Default values for mongo.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
  repository: mongo
  tag: 3.5.11
  pullPolicy: IfNotPresent
service:
  name: mongo
  tier: backend
  type: ClusterIP
  protocol: TCP
  externalPort: 27017
  internalPort: 27017
resources:
  limits:
    memory: 512Mi
  requests:
    memory: 128Mi

Next, adapting our previous YAMLs to the newer template style is actually quite easy. If you’re familiar with any sort of templating language, this should all come pretty easily; if not, just remember to use the dot syntax (.Values.A.B.C) to navigate your default values in values.yaml and you shouldn’t have a problem.

For our Mongo database service, the default generated templates/deployment.yaml is actually pretty much good to go. To understand more of the auto-generated code while walking through it, I usually modify something to make sure I understand the flow of bits. In the templates/deployment.yaml below, I have added a new tier label to the deployment’s metadata for more flexibility in deployment and lookup.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "fullname" . }}
        tier: {{ .Values.service.tier }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - name: http
          containerPort: {{ .Values.service.internalPort }}
        resources:
{{ toYaml .Values.resources | indent 10 }}
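
The {{ toYaml .Values.resources | indent 10 }} line is worth a closer look: it serializes the resources map from values.yaml back to YAML, then shifts every line right by ten spaces so the block nests correctly under the container spec. A rough shell emulation of what the template engine produces:

```shell
# Rough emulation of `toYaml .Values.resources | indent 10`:
# serialize the resources map from values.yaml, then prefix every
# line with ten spaces so it nests under the container spec.
RESOURCES='limits:
  memory: 512Mi
requests:
  memory: 128Mi'
RENDERED=$(printf '%s\n' "$RESOURCES" | sed 's/^/          /')
printf '%s\n' "$RENDERED"
```

This is also why the template line itself starts at column zero: the indentation is applied by the indent function, not by the template's layout.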

Similarly, the same goes for the auto-generated templates/service.yaml. The only change here is again the new tier label, which pulls its value from values.yaml to select the correct deployment to expose. An important configuration point here is the spec.type field, which is populated from the .Values.service.type field in values.yaml. This configures the service as a ClusterIP-typed service, so Mongo will still only be available to other services inside the cluster. If you wanted to expose Mongo outside the cluster, deployers of your Mongo chart could override this value and take advantage of their individual Kubernetes platform provider’s networking capabilities.

apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  type: {{ .Values.service.type }}
  ports:
  - port: {{ .Values.service.externalPort }}
    targetPort: {{ .Values.service.internalPort }}
    protocol: TCP
    name: {{ .Values.service.name }}
  selector:
    app: {{ template "fullname" . }}
    tier: {{ .Values.service.tier }}
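
For example, a deployer who wants Mongo reachable from outside the cluster could supply a small override file at install time (the file name my-values.yaml is purely illustrative):

```yaml
# my-values.yaml (hypothetical override file for the mongo chart):
# switch the Service from ClusterIP to NodePort at deploy time.
service:
  type: NodePort
```

It would then be passed with something like helm install mongo -f my-values.yaml, leaving the chart's own values.yaml untouched.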

Moving on to our frontend component, much of the charting is the same as the backend. Remember that we added an env field when running our Let’s Chat application container previously, to link the remote MongoDB container to the running application. We will provide the ability to specify that dynamically at deploy time by exposing the mongo.host value in the lets-chat/values.yaml file, as seen below.

# Default values for lets-chat.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
  repository: sdelements/lets-chat
  tag: latest
  pullPolicy: IfNotPresent
service:
  name: lets-chat
  tier: frontend
  type: NodePort
  protocol: TCP
  internalPort: 8080
  externalPort: 8080
mongo:
  host: mongo-asdf
resources:
  limits:
    memory: 512Mi
  requests:
    memory: 128Mi

In addition to the tier label added to the auto-generated deployment template, we have added an env field inside the containers spec, which lets us control where the Let’s Chat application will look up its supporting Mongo backend. This value is again configurable via the values.yaml file and should be overridden at deploy time by Chart deployers. There are many ways to handle this type of configuration, but the env pattern is the simplest one for this walkthrough. We discuss some other options for this type of configuration in the conclusion of this post.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "fullname" . }}
        tier: {{ .Values.service.tier }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - name: http
          containerPort: {{ .Values.service.internalPort }}
        env:
        - name: LCB_DATABASE_URI
          value: mongodb://{{ .Values.mongo.host }}/letschat
        resources:
{{ toYaml .Values.resources | indent 10 }}

The lets-chat/templates/service.yaml is exactly the same as its Mongo counterpart; however, the type set in values.yaml makes the Let’s Chat service a NodePort service, which does expose the service to outside traffic (similar to how we manually deployed it above).

Deploying our Charts using Helm

Now that we’ve got all of our Chart elements created, it is time to package and deploy them! As part of the scaffolding and package-management capabilities of the Helm CLI, it provides package and lint commands to, respectively, archive (for distribution and installation purposes) and validate the Charts you’ve built so far.

To get our charts up and running, perform the following steps. We should then see our Let’s Chat application up and running again, this time deployed via Helm and our Charts instead of directly from Docker images and Kubernetes YAMLs.

Deploying the Mongo Chart

  • helm package mongo

  • helm install mongo --name blog-backend

We use the --name parameter here to name this instance of the Chart deployment. The name is used as a prefix in all Services and Deployments created by the Chart. This matters because we want to look up the mongo service name via KubeDNS, so having a deterministic naming scheme is best.
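
Since the scaffolded charts name every resource with the fullname template helper, the release name chosen with --name determines the Service name we will need shortly. A sketch of how the default helper composes it (assuming the standard release-name-plus-chart-name scheme that helm create generates):

```shell
# Sketch: the default "fullname" helper from the scaffolded
# _helpers.tpl joins the release name and chart name (its truncation
# to 63 characters is omitted here for simplicity).
RELEASE_NAME=blog-backend
CHART_NAME=mongo
FULLNAME="${RELEASE_NAME}-${CHART_NAME}"
echo "$FULLNAME"   # blog-backend-mongo
```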

$ helm install mongo --name blog-backend
NAME:   blog-backend
LAST DEPLOYED: Wed Oct  4 14:21:01 2017
NAMESPACE: default
STATUS: DEPLOYED

Note: Get the application URL by running these commands:

export POD_NAME=$(kubectl get pods --namespace default -l "app=blog-backend-mongo" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:27017

Deploying the Let’s Chat Chart

  • helm package lets-chat

  • helm install lets-chat --name blog-frontend --set mongo.host=blog-backend-mongo

Again, we use the --name parameter here to name this instance of the Chart deployment. We also provide an overriding value to tell our Let’s Chat container where to find the backing MongoDB instance. The blog-backend-mongo value is the name of the Kubernetes service deployed via our Mongo Chart and is accessible through KubeDNS. This value overrides the default mongo.host value in the lets-chat/values.yaml file inside the Let’s Chat Chart.

$ helm install lets-chat --name blog-frontend --set mongo.host=blog-backend-mongo
NAME:   blog-frontend
LAST DEPLOYED: Wed Oct  4 14:21:07 2017
NAMESPACE: default
STATUS: DEPLOYED

Note: Get the application URL by running these commands:

export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services blog-frontend-lets-chat)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT


Accessing the application in a browser

Upon deployment of the Let’s Chat chart, you can follow the instructions at the end of the CLI output to get the NODE_IP and NODE_PORT combination for accessing your running application in a browser.

$ export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services blog-frontend-lets-chat)
$ export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
$ echo http://$NODE_IP:$NODE_PORT
http://169.46.44.169:30495

TADA! You’ve now built and deployed your very first Helm Chart in no time at all.

There’s always more work to do!

As mentioned above, this is a quick-and-dirty example meant to be a learn-by-doing experience. There are numerous aspects, in both the initial Kubernetes YAMLs and the developed charts, that are not necessarily “production-ready” and should be addressed. These include, but most certainly are not limited to:

  • Deploying Mongo with PersistentVolumeClaims to enable persistent storage

  • Deploying Mongo inside of a StatefulSet to allow for highly-available and performant container-based databases

  • Deploying Let’s Chat with attached ConfigMaps instead of using env values

  • Creating an “umbrella” chart that contains both the Let’s Chat and Mongo charts as child-required charts, allowing for complete deployment via a single chart install

  • Pushing your Charts to a Chart Repository and deploying them via IBM Cloud Private’s App Center
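
To sketch the "umbrella" chart idea: in Helm 2.x, a parent chart declares its children in a requirements.yaml file. A hypothetical example (chart names match this post; the repository paths are illustrative):

```yaml
# requirements.yaml for a hypothetical "lets-chat-stack" umbrella chart.
# Both child charts install together with a single `helm install`.
dependencies:
  - name: mongo
    version: 0.1.0
    repository: "file://../mongo"
  - name: lets-chat
    version: 0.1.0
    repository: "file://../lets-chat"
```

Running helm dependency update would then pull both children into the umbrella chart's charts/ directory before packaging.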

Learn more about IBM Cloud Private, or if you prefer a hands-on experience, take the Guided Tour.

Coincidentally, Paul Czarkowski, a Technical Lead for IBM Cloud Developer Labs, was putting together a blog post on the same topic at the same time I was writing this one. If you want a deeper learn-by-doing example and don’t want to go straight to the Helm docs yet, be sure to check out his blog post over on Medium.

And as always, once you are comfortable with building, packaging, and deploying your Charts, it is always a good idea to circle back and read through The Chart Best Practices Guide to understand everything that you may or may not already be doing (or shouldn’t be doing!) before distributing your awesome charts to the masses!

Whether you’re using Kubernetes on IBM Cloud Private, the IBM Cloud Container Service, or locally via a Vagrantfile, leveraging Helm Charts to build and deploy complex applications is a powerful technique and a valuable timesaver for development of all sorts!
