Container orchestration and how it will make your life easier
Hey everyone, I’m excited to be back with another lightboarding video—this time we’re going to be taking a look at container orchestration. In the past, we’ve talked about containerization technology and dove into Kubernetes as an orchestration platform, but we’re going to take a step back to look at why container orchestration is necessary and the benefits it brings to both developers and operations teams.
I’m going to focus on four main elements:

- Deploying an application
- Scaling an application
- Networking and exposing services
- Insight into running services
I hope you enjoy the video, and make sure to check out the links below for our lightboarding videos on Kubernetes, containerization, Istio, etc. to get a full picture of all the concepts involved.
Learn more about containers and Kubernetes
Full IBM Cloud YouTube lightboarding video playlist here
What is container orchestration and why is it necessary?
Hi everyone, my name is Sai Vennam, and I’m with the IBM Cloud team. Today, we want to talk about container orchestration.
I know that in the past, we’ve talked about containerization technology—as well as dived into Kubernetes as an orchestration platform—but, let’s take a step back and talk about why container orchestration was necessary in the first place.
An example using three containerized microservices
We’ll start with an example. Let’s say that we’ve got three different microservices that have already been containerized—we’ve got the frontend, we’ll have the backend, as well as a database access service.
These three services will be working together and are also exposed to end users so they can access that application.
The developer has a very focused view of this layout, right? So, they’re thinking about the end user—the end user accessing that frontend application; the frontend relies on the backend, which may (in turn) store things using the database service. The developer is focused entirely on this layer.
The orchestration layer
Underneath it, we’ve got an orchestration layer. So, we can call that a master, and I’m thinking about Kubernetes right now, where you would have something like a master node that manages the various applications running on your compute resources.
But, again, a developer has a very singular focus when looking at this layout—they’re really only looking at this stack right here. They’re thinking about the specific containers and what’s happening within them.
Looking at the individual containers from a developer’s POV
Within those containers, there are a few key things—there’s going to be the application itself, there are also going to be things like the operating system and dependencies, plus a number of other things that you define—but all of those things are contained within the containers themselves.
The operations team’s POV
An operations team has a much larger view of the world. They’re looking at the entire stack. So, an operations team—there’s a number of things that they need to focus on, but we’ll use this side to kind of explain how they work with deploying an application that is made up of multiple services.
Deploying an application
So, first, we’ll talk about deploying.
Taking a look here, it’s very similar to over here, but the key difference is that these are no longer containers but the actual compute resources. These can be things like VMs or, in the Kubernetes world, what we call worker nodes. So each one of these would be an actual compute worker node—it could be something like 4 vCPUs (virtual CPUs) with 8 GB of RAM for each one of these different boxes that we have laid out here.
The first thing you would use an orchestration platform to do is something simple—just deploying out an application. Let’s say that we start with a single node—and, again, here we’ve got the master. On that single node, we’ll deploy three different microservices—one instance each.
So, we’ll start with the front end, we’ll have the backend, as well as the database access service. Already, let’s assume that, you know, we’ve consumed a good bit of the compute resources that are available on that worker node.
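To make "a good bit of the compute resources" concrete, here's a quick back-of-the-envelope sketch in Python. The per-service resource requests are hypothetical numbers I'm assuming for illustration—the video doesn't specify them—but the node size (4 vCPUs, 8 GB RAM) comes from the example above:

```python
# One worker node, sized as in the example above.
NODE = {"cpu": 4.0, "mem_gb": 8.0}

# Hypothetical resource requests for one instance of each microservice
# (illustrative numbers, not from the video).
requests = {
    "frontend":  {"cpu": 1.0, "mem_gb": 2.0},
    "backend":   {"cpu": 1.5, "mem_gb": 3.0},
    "db-access": {"cpu": 1.0, "mem_gb": 2.0},
}

used_cpu = sum(r["cpu"] for r in requests.values())
used_mem = sum(r["mem_gb"] for r in requests.values())

print(f"CPU used: {used_cpu}/{NODE['cpu']} vCPU")      # 3.5/4.0 vCPU
print(f"Memory used: {used_mem}/{NODE['mem_gb']} GB")  # 7.0/8.0 GB
```

Even with a single instance of each service, the node is nearly full—which is exactly why the next step is adding more worker nodes.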
Scaling an application
So, we realize we need more capacity—let’s add additional worker nodes to our master and start scheduling out and scaling our application. That’s the next piece of the puzzle. The next thing an orchestration platform cares about is scaling an app out.
So, let’s say that we want to scale out the frontend twice. The backend, we’ll scale it out three times. And the database access service, let’s say we scale this one out three times as well.
An orchestration platform will schedule out our different microservices and containers to make sure that we utilize the compute resources in the best possible way. One of the key things that an orchestration platform does is scheduling.
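Here's a toy sketch of what scheduling means: place each replica on a node that has room for it. I'm using a naive first-fit strategy purely for illustration—real schedulers like Kubernetes' weigh many more factors (affinity, taints, spreading, and so on)—and the replica counts match the scaling example above:

```python
def schedule(replicas, nodes):
    """Toy first-fit scheduler.

    replicas: list of (service_name, cpu_request) tuples
    nodes:    dict of node_name -> free vCPUs
    Returns a dict of node_name -> list of placed services.
    """
    placement = {}
    free = dict(nodes)  # don't mutate the caller's dict
    for name, cpu in replicas:
        for node, avail in free.items():
            if avail >= cpu:           # first node with enough free CPU wins
                free[node] = avail - cpu
                placement.setdefault(node, []).append(name)
                break
        else:
            raise RuntimeError(f"no node has {cpu} vCPU free for {name}")
    return placement

# 2 frontend + 3 backend + 3 db-access replicas, 1 vCPU each (illustrative)
replicas = ([("frontend", 1.0)] * 2
            + [("backend", 1.0)] * 3
            + [("db-access", 1.0)] * 3)
nodes = {"worker-1": 4.0, "worker-2": 4.0}

print(schedule(replicas, nodes))
```

The eight replicas get packed four-and-four across the two 4-vCPU workers—the platform does this placement for you so you never hand-assign containers to machines.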
Next, we need to talk about networking and how we enable other people to access those services. That’s the third thing that we can do with an orchestration platform. So, that includes creating things like services that represent each of our individual containers.
The problem without having something like an orchestration platform take care of this for you is that you would have to create your own load balancers. In addition, you would have to manage your own services and service discovery as well. By that, I basically mean—if these services need to talk to one another, they shouldn’t have to look up the IP addresses of each different container, resolve those, and check whether they’re running. That’s something the orchestration platform does for you—it handles that system around the containers.
So, with this, we have the ability to expose singular points of access for each of those services. And again, very similarly, an end user might access that frontend application—so the orchestration platform would expose that service to the world while keeping these services internal—and the frontend can access the backend, and the backend can access that database. Let’s say that that’s the third thing that an orchestration platform will do for you.
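The core idea of a "service" can be sketched in a few lines: a stable name sitting in front of a changing set of container addresses, with simple load balancing across them. This is a minimal illustration of the concept, not how any real platform implements it, and the endpoint addresses are made up:

```python
import itertools

class Service:
    """A stable name in front of changing container endpoints."""
    def __init__(self, name, endpoints):
        self.name = name
        self._cycle = itertools.cycle(endpoints)  # round-robin over endpoints

    def resolve(self):
        # Callers only know the service name; the platform picks an endpoint.
        return next(self._cycle)

# Three backend containers behind one service name (addresses are made up).
backend = Service("backend", ["10.0.1.5:8080", "10.0.2.7:8080", "10.0.3.2:8080"])

# The frontend just asks for "backend"—it never hard-codes container IPs.
# Four calls round-robin through the three endpoints and wrap around.
print([backend.resolve() for _ in range(4)])
```

If a container dies and a replacement comes up at a new IP, only the service's endpoint list changes—callers keep using the same name, which is exactly the discovery problem the platform solves.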
The last thing I want to highlight here is insight. Insight is very important when working with an application in production.
So, you know, developers are focused on the applications themselves, but let’s say that one of these pods accidentally goes down, right? What the orchestration platform will do is it’ll rapidly bring up another one and bring it within the purview of that service. It’ll do that for you automatically.
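That "bring up another one automatically" behavior comes from a reconciliation loop: the platform continuously compares the desired state against what's actually running and corrects the difference. Here's a minimal sketch of that idea—purely illustrative, not actual Kubernetes code—using the replica counts from the scaling example:

```python
def reconcile(desired, running):
    """Compare desired vs. actual replica counts.

    desired: dict of service -> desired replica count
    running: dict of service -> currently running count
    Returns how many new replicas to start per service.
    """
    return {svc: max(0, want - running.get(svc, 0))
            for svc, want in desired.items()}

desired = {"frontend": 2, "backend": 3, "db-access": 3}
running = {"frontend": 2, "backend": 2, "db-access": 3}  # one backend pod died

print(reconcile(desired, running))
# {'frontend': 0, 'backend': 1, 'db-access': 0} -> start one backend replica
```

The real platform runs this comparison continuously, so a crashed pod is replaced within moments and brought back within the purview of its service—no human intervention required.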
In addition, an orchestration platform has a number of pluggable points where you can use key open source technologies—things like Prometheus and Istio—that plug directly into the platform and expose capabilities that let you do things like logging and analytics. There’s even a cool one (something that I want to sketch out here)—the ability to see the entire service mesh.
Seeing the entire service mesh
Many times, you might want to lay out all of the different microservices that you have and see how they communicate with one another. In this example, it’s fairly straightforward, but let’s go through the exercise anyway.
So, we’ve got our end user; and the end user would likely be accessing the frontend application. And, we’ve got the two other services as well—the database as well as the backend.
In this particular example, I’ll admit, we have a very simple service mesh—we’ve only got three services. But taking a look at how they communicate with one another can still be very valuable. So, the user accesses the frontend, the frontend accesses the backend, and we expect the backend to access the database.
But, let’s say the operations team finds that, oh actually, sometimes the frontend is directly accessing the database service. They can see how often, as well. With things like a service mesh, you get insight into things like the operations per second.
Let’s say there are five operations per second hitting the frontend, maybe eight that go to the backend, and maybe three per second that go to the database service, but then 0.5 requests per second going from the frontend straight to the database service. By taking a look at the requests and tracing them through the different services, the operations team has identified where the issue is.
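The kind of check the operations team just did can be expressed very simply: model the mesh as a set of edges with observed request rates, declare which calls you expect, and flag anything else. This is a sketch of the analysis, not of how a mesh tool actually works; the rates are the ones from the example above:

```python
# Observed traffic between services: (caller, callee) -> requests per second
# (the rates from the example above).
observed = {
    ("user", "frontend"):      5.0,
    ("frontend", "backend"):   8.0,
    ("backend", "db-access"):  3.0,
    ("frontend", "db-access"): 0.5,  # the call path nobody expected
}

# The calls the architecture is supposed to make.
expected = {
    ("user", "frontend"),
    ("frontend", "backend"),
    ("backend", "db-access"),
}

# Flag any observed edge that isn't in the expected call graph.
unexpected = {edge: rps for edge, rps in observed.items() if edge not in expected}
print(unexpected)  # {('frontend', 'db-access'): 0.5}
```

This is essentially what a mesh dashboard surfaces visually: the frontend-to-database edge shows up in the graph with its rate attached, and the team immediately knows where to dig.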
This is a simple example of how you can use something like Istio and Kiali (a key service mesh visualization capability) to get insight into your running services.
Orchestration platforms have a number of capabilities that they need to support, and this is why we’re seeing the growth of operations roles like SREs (site reliability engineers): there are a lot of things they need to concern themselves with when running an application in production. Developers, on the other hand, see a very singular view of the world, where they’re focusing on the things within the containers themselves.