What is Kubernetes?

Hey everyone, I’m excited to be back with another lightboarding video that is going to cover all things Kubernetes.

If you’ve watched our “Kubernetes vs. Docker: It’s Not an Either/Or Question” video, you know that Kubernetes is an orchestration tool that allows you to run and manage your container-based workloads. In this video, I take a high-level look at a reference architecture of managed Kubernetes services and dive a little bit deeper into how you would do a deployment of your microservices.

Whether you’re interested in Kubernetes clusters, pods, deployments, services, masters, worker nodes, API servers, kubelet, YAML manifests, kubectl, cluster IPs, or load balancers (whew, that’s a lot!), I’ve got you covered in the video below.

Video Transcript

Kubernetes Explained

Hi everyone, my name is Sai Vennam, and I’m a developer advocate with IBM. Today, I’m back with another video where I’m going to be talking about all things Kubernetes.

Kubernetes architecture: Master and API server

Kubernetes is an orchestration tool that allows you to run and manage your container-based workloads. Today, I want to take a high-level look at a reference architecture of managed Kubernetes services and dive a little bit deeper into how you would do a deployment of your microservices.

Let’s get started here. So, we’ve got two sides of the puzzle sketched out. On the left side, we’ve got the cloud side, and the very important component there is going to be the Kubernetes master.

The Kubernetes master has a lot of important components in it, but the most important piece that we want to talk about today is going to be the API server. The Kubernetes API server running on the master is integral to running all of your workloads: it exposes a set of capabilities that let us define exactly how we want those workloads to run.

Kubernetes architecture: Worker node and kubelet

On the right side here, on the customer-managed side, we’ve got our worker nodes, which are also all Kubernetes-based. There’s one major component that I want to point out running on every single Kubernetes worker node, and that’s going to be the kubelet. The kubelet, essentially, is responsible for starting the workloads scheduled to its node and making sure apps are healthy and running within our worker nodes. You can imagine that the master and the kubelet are going to be working together quite often.
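
As a quick sanity check, and purely as a sketch, a couple of kubectl commands will show you the worker nodes the API server knows about, along with the health that each node’s kubelet is reporting back:

```yaml
# Sketch: inspecting the cluster from kubectl.
#   kubectl get nodes              # worker nodes registered with the master
#   kubectl describe node <name>   # per-node detail, including kubelet-reported conditions
```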

Scaling out and deploying to a cluster

Let’s take a step back—why would someone want to start using Kubernetes? Well, maybe they have some microservices that make up a cloud-native application. And, as we all know, microservices talk to each other over the network.

To really simplify this example, let’s say we’ve got a frontend and a backend, and those are the two components that we want to scale out and deploy to the cluster today.

YAML

So, Kubernetes uses YAML to define the resources that are sent to the API server, which end up creating the actual application. So, let’s get started with that by sketching out a simple YAML for deploying a pod—a really small logical unit allowing you to run a simple container in a worker node.

So we’ll start with that. Let’s say we’ve got a pod, and what we need with that is an image that’s associated with it. Let’s say that, you know, it’s a container we’ve already pushed up to Docker Hub, and we’ll use my registry for this one.

And, very simply, let’s say the name of the application is just “f” for frontend—version 1. And one more thing that we want to add here: labels. Labels are very important, and we’ll talk about why in a second, but they allow us to define exactly what type of artifact we’ve got here. So for the labels, we’ll just say the app is f for frontend.
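
To make that concrete, here’s a minimal sketch of what that pod manifest might look like as real YAML. The image path is a placeholder standing in for whatever you’ve pushed up to Docker Hub:

```yaml
# Minimal pod manifest matching the sketch above; the image is a
# hypothetical placeholder, not a real published container.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend        # the "app is frontend" label we'll select on later
spec:
  containers:
    - name: frontend
      image: docker.io/my-registry/f:v1   # placeholder image, version 1
```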

Using kubectl to deploy the simple manifest

Alright, so we’ve got that created, and what we want to do is push it through our process to get it into a worker node. What we’ve got here is kubectl. Kube-cuttle—I’ve heard different ways of pronouncing that. But, using that, we’re gonna be able to deploy the simple manifest that we’ve got and have it land in one of our worker nodes.

So we’ll push the manifest through kubectl, it hits the API server running on the Kubernetes master, and that, in turn, is going to go and talk to one of the kubelets—because we just want to deploy one of these pods and start it up.

So, taking a look, let’s say that it starts it up in our first worker node here with the label that we’ve given it—app is frontend. And one thing to note here—it actually does get an IP address as well. Let’s say we get an internal IP address that ends in a .1. So, at this point, I could SSH into any of the worker nodes and use that IP address to hit that application.
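
Assuming the manifest above is saved as pod.yaml (a hypothetical filename), the deploy-and-verify flow looks roughly like this:

```yaml
# Sketch: pushing the manifest through kubectl and checking where it landed.
#   kubectl apply -f pod.yaml    # hits the API server on the master
#   kubectl get pods -o wide     # shows the pod's internal IP and its worker node
```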

Kubernetes deployments and the desired state

So, that’s great for deploying a simple application; let’s take it a step further. Kubernetes has an abstraction called deployments, allowing us to define something called the desired state. So, we can define the number of replicas we want for that pod, and if something were to happen to a pod and it dies, the deployment would create a new one for us.

So, we’ve got the pod labeled as app is frontend, and we want to say that we want to create, maybe, three replicas of it. So, going back to our manifest here—one thing we need to do is tell Kubernetes that we don’t want a pod, we want a template for a pod, right? So, we’ll scratch that out and say that this is a template for a pod.

On top of that, we’ve got a few other things that we want, right? So, the number of replicas—let’s say we want three. And we’ve got a selector, right? We want to tell this deployment to manage any application deployed with that kind of name here, so we’ll say it matches that selector.

Again, this is not entirely valid YAML—I just want to give you an idea of the kind of artifacts that Kubernetes is looking for.

The last thing that we’ve got here is, what kind of artifact is this? And this is gonna be a deployment. 
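
Putting those pieces together, the deployment manifest might look something like this. Again, it’s just a sketch with placeholder names:

```yaml
# Sketch of the deployment; the pod from before is now a template inside it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                 # the desired state: always three pods
  selector:
    matchLabels:
      app: frontend           # manage any pod carrying this label
  template:                   # a template for a pod, not a standalone pod
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: docker.io/my-registry/f:v1   # placeholder image
```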

Kubernetes manages the desired state

Alright, so we’ve scratched out that pod and we’ve got a new manifest here. We’re gonna push it through kubectl, and it hits the API server. Now it’s not an ephemeral kind of object—Kubernetes needs to manage the desired state—so what it’s going to do is manage that deployment for as long as we have it and don’t delete it.

So we’ll say that it creates a deployment, and since we’ve got three replicas, it’s always going to ensure that we’ve got three running. 

As soon as we’ve got the deployment created, Kubernetes realizes—hey, something’s wrong: we’ve only got one, we need two more. So what it’s going to do is schedule out deploying that application wherever it has resources.

We’ve got a lot of resources still—most of these worker nodes are empty, so it decides to put one in each of the different nodes. 
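
You can watch that reconciliation happen yourself. As a rough sketch, assuming the frontend deployment from above:

```yaml
# Sketch: watching the deployment enforce its desired state.
#   kubectl get deployment frontend    # settles at 3/3 ready once scheduled
#   kubectl delete pod <one-of-them>   # kill a replica...
#   kubectl get pods                   # ...and a replacement appears automatically
```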

So, we’ve got the deployment created, and let’s just say we do the same thing for our backend here. So, we’ll create another application deployment—app is backend. For this one, let’s just scale it out two times. So we’ll go here—app is backend. And everyone’s happy.
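
The backend manifest would be the same shape, just with its own name, label, and replica count. Purely illustrative:

```yaml
# Sketch of the backend deployment; only the name, label, and replica count change.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: docker.io/my-registry/b:v1   # placeholder image
```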

Communication between services

Now we need to start thinking about communication between these services, right? We talked about how every pod has an IP address, but we also mentioned some of these pods might die—maybe you’ll have to update them at some point. When a pod goes away and comes back, it actually has a different IP address.

So, if we want to access one of those pods from the backend or even from external users, we need an IP address that we can rely on. This is a problem that’s been around for a while, and service registry and service discovery capabilities were created to solve exactly that. That comes built into Kubernetes.

So, what we’re gonna do now is create a service to give us a more stable IP address, so we can access our pods as a singular app rather than as separate, individual pods.

So to do that, we’re gonna take a step back here, and we’re going to create a service definition around those three pods.

To do that, we’re going to need some more manifest YAML.

So we’ll go back here and create a new section in our file. This time we’ve got a kind: service. And we’re going to need a selector on that. Again, that’s gonna match the label that we’ve got here. And the last thing that we need here is a type—so, how do we want to actually expose this? We’ll get to that in a second—by default, that type is going to be cluster IP, meaning our service can only be accessed from inside the cluster.
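
Here’s roughly what that service section could look like. The port numbers are assumptions, since the lightboard sketch doesn’t specify any:

```yaml
# Sketch of the service; ClusterIP is the default type, so spelling it
# out here is just making the default explicit.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: ClusterIP
  selector:
    app: frontend        # matches the pods created by the frontend deployment
  ports:
    - port: 80           # assumed port; match whatever the container listens on
      targetPort: 80
```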

So, deploying that through kubectl, it hits our master, goes over here, and creates that abstraction we talked about. We can say that we created another one for the backend as well.

Cluster IP

So, what we get now is a cluster IP. Let’s just say Cl. IP for short—and that’s going to be an internal IP. Say it ends in a .5. And then another cluster IP for our other service here, and we’ll say that ends in a .6.

So, now we have an IP that we can use to reliably do communication between the services. In addition, the KubeDNS service, which is usually running by default, makes it even easier for these services to access each other—they can just use their names. So they could hit each other using the name “frontend,” “backend,” or just “f” or “b” for short.
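
So from inside any pod in the cluster, reaching a service is as simple as this sketch (assuming the service names from earlier and the default namespace):

```yaml
# Sketch: service discovery by name from inside a pod.
#   curl http://frontend                              # short name, same namespace
#   curl http://frontend.default.svc.cluster.local    # fully qualified form
```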

So, we’ve got that, and we talked about how the services can now talk to each other, you know, by using these cluster IPs. So, communication within the cluster is kind of solved.

Exposing the frontend to users

How about when we want to start exposing our frontend to our end users? To do that, what we’ll need to do is define the type of this service, and what we want is a load balancer.

There are actually other ways to expose a service, like node ports, but here’s essentially what a load balancer is going to do: where the cluster IP is internal to the actual Kubernetes worker nodes, it creates an external IP. And this might be, you know, let’s say a 169 address. And now what we can do is actually expose that directly to end users, so that they can access the frontend by directly using that service.
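
The only change to the service manifest is the type. The external IP itself gets assigned by the cloud provider’s load balancer, so the 169 address is just an example:

```yaml
# Sketch: exposing the frontend externally through a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer     # was ClusterIP; the provider now assigns an external IP
  selector:
    app: frontend
  ports:
    - port: 80           # assumed port, as before
      targetPort: 80
```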

We’ve talked about three major components here today: pods, which are deployed and managed by deployments, and services, which facilitate access to the pods those deployments create.

Those are the three major components working together with the Kubernetes master and all the worker nodes to allow you to really redefine your DevOps workflow for deploying your applications into a managed Kubernetes service.
