Kubernetes Operators are quickly picking up traction in the developer community as a great way of managing complex applications on Kubernetes.
Built with the Operator Framework, an Operator provides many benefits to users by wrapping the logic for deploying and operating an application in native Kubernetes constructs.
In this lightboarding video, I’ll walk you through two different scenarios of deploying an application with and without a Kubernetes Operator.
Make sure you subscribe to the YouTube channel to be notified about more videos like this when they are published, and leave us a comment if you’ve got any questions on what I covered.
Learn more
- Want to get some free, hands-on experience with Kubernetes? Take advantage of interactive, no-cost Kubernetes tutorials by checking out IBM CloudLabs.
- IBM Cloud Kubernetes Service
- What is Kubernetes?
- VIDEO – Kubernetes Explained
- Red Hat OpenShift
- VIDEO – Kubernetes and OpenShift: What’s the Difference?
- VIDEO – What is Helm?
- Full playlist of lightboarding videos
Video Transcript
What are Kubernetes Operators?
Hi everyone, my name is Sai Vennam, and I’m with the IBM Cloud team. Today, we want to talk about Operators.
Now, I’m not actually talking about operations teams but instead the Operator Framework, which can be used on Kubernetes or OpenShift. CoreOS actually introduced the concept of Operators back in 2016. CoreOS is now a part of Red Hat and IBM.
Operator Framework is quickly picking up traction as it’s a great way of managing complex Kubernetes applications.
Kubernetes control loop
Now, before I jump into this, we want to introduce what the Kubernetes control loop is because it’s a core part of the Operator Framework.
In this video, we’re going to be talking about things like deployments and pods, so if you’re not familiar with those, be sure to check out the “Kubernetes Explained” video that I’ve done on those topics.
Observe
But let’s get started with exactly what the control loop is in Kubernetes. Essentially, the control loop is the core part of Kubernetes, and the way it starts is by observing the state of what’s actually in your cluster. So that’s the first step: observe.
Diff
Next, Kubernetes is going to compare the state that’s actually in the cluster against the state that you want it to be in. So, it’s going to do a diff.
Act
And finally, it wants to resolve that diff by acting on it. So the last phase of the control loop is act.
Now the control loop is core to how Kubernetes works and there’s a controller that basically acts on that for every default resource. Kubernetes comes with a number of default resources.
Example of deploying an application without Operators
Let’s see an example of deploying an application without Operators using these default resources. So, as an end user, the first thing you’re going to want to do is write up some YAML, right, the spec for that actual application.
And for our particular example, let’s say we’re doing a deployment, and in this deployment we’ll have to define some configuration—things like what the image is, maybe the replicas, and maybe some other configuration.
So that’s one Kubernetes resource, and essentially, what you would do is take that and deploy it into your Kubernetes cluster, at which point a deployment is made.
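As a rough sketch, that first deployment YAML might look something like this—the names and image are placeholders, not anything specific:

```yaml
# Hypothetical frontend Deployment -- names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-frontend
spec:
  replicas: 3                 # desired state: three pods
  selector:
    matchLabels:
      app: myapp-frontend
  template:
    metadata:
      labels:
        app: myapp-frontend
    spec:
      containers:
        - name: frontend
          image: myorg/myapp-frontend:1.0   # placeholder image
          ports:
            - containerPort: 8080
```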
Here’s where the control loop kicks in; so Kubernetes observes the state of your cluster—so we’ve got a Kubernetes cluster here—and checks what’s the difference between what you want versus what’s there.
First thing it notices? There are no pods. So it’s gonna act on that difference and it’s gonna create some pods.
Now let’s say for a fairly complex application, we don’t just have one YAML but we have a second YAML—maybe it’s for the backend—and so that deploys in the second deployment, and that in turn deploys a pod using the controllers in the control loop.
Now, this is a simple example, but say you want to go through here and scale up the application, make some changes, set up some secrets and environment variables—every single time, you have to either create new Kubernetes resources or go in and edit the existing ones. That can start to get fairly difficult.
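For example, just wiring a password into the frontend means creating a Secret and then going back into the Deployment to reference it—a sketch with placeholder names:

```yaml
# Hypothetical Secret -- the value is a placeholder
apiVersion: v1
kind: Secret
metadata:
  name: myapp-credentials
type: Opaque
stringData:
  DB_PASSWORD: change-me
# ...and back in the Deployment, under spec.template.spec.containers[0],
# you would also have to add, by hand:
#   env:
#     - name: DB_PASSWORD
#       valueFrom:
#         secretKeyRef:
#           name: myapp-credentials
#           key: DB_PASSWORD
```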
Example of deploying an application with Operators
Now, let’s see how that’s done in a world where we’re using Operators. The first thing you actually need to do is install the Operator itself. So, someone on your team has to create the Operator, or you can use one of the many Operators that the community has built and published on OperatorHub.
So, the first thing you need in your Kubernetes cluster is the OLM—the Operator Lifecycle Manager—which basically manages the Operators that you have installed.
Next, you deploy your actual Operator into the cluster.
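If the Operator comes from OperatorHub, installing it through OLM typically means applying a Subscription resource. Here’s a rough sketch—the operator name, namespace, and catalog source are placeholder assumptions:

```yaml
# Hypothetical Subscription -- assumes OLM is installed and a package
# called "my-operator" exists in the "operatorhubio-catalog" source
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  channel: stable               # update channel published by the operator
  name: my-operator             # package name in the catalog
  source: operatorhubio-catalog # catalog source to install from
  sourceNamespace: olm
```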
The Operator is made up of two major components. The first component of an Operator is going to be the CRD. The other one is going to be the controller.
Now, the CRD is basically a custom resource definition. We’ve talked about default resources—things like deployments and pods—but a custom resource is something that you, as a user, define in Kubernetes (or that an Operator defines for you) so that you can write YAML against that custom config.
The controller is basically a custom control loop that runs as a pod in your cluster and runs that loop against your custom resource definition.
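To make that a bit more concrete, here’s a minimal sketch of what a CRD for a hypothetical MyApp resource could look like—the group, names, and schema fields are assumptions for illustration:

```yaml
# Hypothetical CRD defining a new "MyApp" resource type
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapps.example.com      # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: MyApp
    plural: myapps
    singular: myapp
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer   # e.g., how many replicas the Operator should run
                image:
                  type: string
```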
So, let’s say an Operator has been created for our custom application here. Instead of having to write multiple deployments and set up config maps and secrets for whatever our cluster needs, as an end user we’ll actually just deploy one YAML.
Maybe we call this one MyApp—it could be something a little more meaningful; we could call it stateful app, frontend app, whatever we want it to be. And then we can define some config here, or we can use the defaults that are set—we kind of have a choice of options here.
Then, we take this YAML and we deploy it directly into the cluster.
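As a rough sketch, that single YAML would just be an instance of the custom resource our hypothetical Operator defines—the field names here are placeholders:

```yaml
# Hypothetical MyApp custom resource -- the only YAML the end user manages
apiVersion: example.com/v1alpha1
kind: MyApp
metadata:
  name: my-frontend-app
spec:
  size: 3                          # the Operator turns this into deployments/pods
  image: myorg/myapp-frontend:1.0  # optional override; defaults could apply otherwise
```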
At this point, the Operator takes over, and this is actually responsible for running that control loop and figuring out exactly what needs to be running.
So, it’s going to realize that we need a couple of deployments and their pods.
Now, this approach to managing applications is inherently easier and scales better than the manual approach, because as an end user you really only have to worry about the config that’s been exposed to you, and the Operator itself manages the control loop and the state of the application—how it needs to look.
Custom Operators
Now, there are great Operators out there already—things like Operators for managing etcd, various databases, or even IBM Cloud services. All of those Operators currently exist on OperatorHub.
But, say you want to develop your own, maybe a custom Operator for something that is native to your application architecture—kind of like what we sketched out here. Well, there are a number of ways you can do that, and there’s something called the Operator SDK that allows you to start building out Operators yourself.
Now, I’d say the easiest way to get started with an Operator is to use the Helm approach. (If you’re not already familiar with Helm, there’s a video where David Okun goes over exactly how it works—be sure to check that one out.) The Helm approach allows you to take a Helm chart you already have, apply it toward an Operator, and expose config. So, it lets you get to a fairly mature level of Operator—something like what we’ve sketched here—for a chart that’s already there.
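For reference, a Helm-based Operator built with the Operator SDK maps a custom resource kind onto a chart through its watches.yaml file; a minimal sketch, reusing our hypothetical MyApp names, might look like this:

```yaml
# Hypothetical watches.yaml for a Helm-based Operator:
# when a MyApp resource is created or changed, reconcile it against the bundled chart
- group: example.com
  version: v1alpha1
  kind: MyApp
  chart: helm-charts/myapp
```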
Maturity of Operators: Five Levels
Now, the maturity of Operators—what I’ve sketched out down here—falls into five different levels.
Now, Helm actually hits the first two levels of maturity. Let’s talk about what those levels are.
Level 1: Basic install
The first one is basic install. Essentially, this first level just allows you to do automated provisioning of the resources your application requires.
Level 2: Upgrades
Now the second level goes a little bit further—it’s gonna allow you to do upgrades. So this supports minor and patch-version upgrades to whatever is defined in your Operator.
Now, Helm gets you that far—but what about the next three levels of maturity? For those, you’re going to want to use either Go, or you can also use Ansible.
Now, these will allow you to actually get to all five levels of maturity with Operators.
Let’s quickly talk about what those are.
Level 3: Full lifecycle support
At the third level, we’ve got full lifecycle support. So this covers storage lifecycle and app lifecycle—it’s also going to allow you to do things like backups and failure recovery. That’s something that would be configured and developed into the Operator by whoever developed it.
Level 4: Insights
Fourth, what we’ve got here is insights. This is going to allow you to get deep metrics and analysis, logging—that kind of thing from your actual Operator.
Level 5: Auto-Pilot
And finally, what we have is something called auto-pilot, and just like it sounds, this is going to have a lot more functionality built into the Operator itself. Basically, it’s going to allow you to do automatic scaling—horizontally and vertically. It’s going to do automatic config tuning. And if your Operator-based app gets into a bad state, it’s going to identify that automatically.
So, these are the five levels of maturity that Operators can have. By looking on OperatorHub, you can see the Operators that the community has developed and what level of maturity they hit. And then, again, by using the Operator SDK, you can build your own Operators using either Helm, Go, or Ansible.