VIDEO – Containerization Explained

By: Sai Vennam

Containerization: How does it work and what are the advantages?

Hey everyone, I’m excited to be back with yet another lightboarding video. This time we’re going to be taking a close look at containers and containerization.

I’ll be using the example of a Node.js application that I want to push into production, and we’ll be using two different form factors—virtual machines (VMs) and containers—to explain the advantages of containerization.

The main points I wanted to hit in this video are the portability of containers, the advantages they offer around scalability, and the increasingly agile DevOps processes that containers facilitate.

I hope you enjoy the video!

Video Transcript

What is containerization?

Hi everyone, my name is Sai Vennam, and I’m a developer advocate with IBM. Today, I want to talk about containerization.

The history of containerization technology

Whenever I mention containers, most people tend to default to something like Docker or even Kubernetes these days, but container technology has actually been around for quite some time.

It was actually back in 2008 that the Linux kernel introduced cgroups, or control groups, which basically paved the way for all the different container technologies we see today. That includes Docker, but also things like Cloud Foundry, as well as rkt (Rocket) and other container runtimes out there.

An example to illustrate the advantages of containerization

Let’s get started with an example. We’ll say that I’m a developer, I’ve created a Node.js application, and I want to push it into production. We’ll take two different form factors to explain the advantages of containerization.
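
As a concrete stand-in, here’s a minimal sketch of the kind of Node.js application I mean (the port and response are just placeholders, not from the video):

    // app.js -- a minimal placeholder for the Node.js app in this example
    const http = require('http');

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from the Node.js app\n');
    });

    // Port 3000 is an arbitrary choice for this sketch
    server.listen(3000, () => console.log('Listening on port 3000'));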

First, we’ll talk about VMs, and then we’ll talk about containers.

Virtual machines (VMs) and the Node.js application

So, first things first, let’s introduce some of the things that we’ve got here. We’ve got the hardware itself, which is the big box. We’ve got the host operating system, as well as a hypervisor. The hypervisor is actually what allows us to spin up VMs.

Let’s take a look at the shared pool of resources—with just the host OS and hypervisor running, we can assume that some of these resources have already been consumed.

Spinning up Linux VMs

Next, let’s go ahead and take this Node.js application and push it in. To do that, I need a Linux VM, so let’s go ahead and sketch out that Linux VM. In this VM, there are a few things to note. We’ve got another operating system in addition to the host OS—the guest OS—as well as some binaries and libraries.

So that’s one of the things about Linux VMs—even though we’re working with a really lightweight application, to create that Linux VM, we have to put that guest OS in there along with a set of binaries and libraries, and that really bloats it out. In fact, the smallest Node.js VM image I’ve seen out there is 400+ megabytes, whereas the Node.js runtime and application themselves would be under 15 megabytes.

So, we’ve got that, and let’s go ahead and push that Node.js application into it. Just by doing that alone, we’re going to consume a set of resources.

Scaling out the VM

Next, let’s think about scaling this out. We’ll create two additional copies of it, and you’ll notice that even though it’s the exact same application, we have to deploy that separate guest OS and set of libraries every time. So, we’ll do that three times.

By doing that, essentially, we can assume that for this particular hardware, we’ve consumed all of the resources.

Containers and Agile DevOps

There’s another thing that I haven’t mentioned here—this Node.js application, I developed it on my MacBook. So, when I pushed it into production to get it going in the VM, I noticed that there were some issues and incompatibilities. This is the foundation for a big “he said, she said” issue, where things might work great on your local machine, but when you try to push them into production, things start to break. This really gets in the way of doing Agile DevOps and continuous integration and delivery.

A three-step process for creating containers

That’s solved when you use something like containers. There’s a three-step process for creating and pushing containers, and it almost always starts with some sort of manifest—something that describes the container itself. In the Docker world, this would be something like a Dockerfile, and in Cloud Foundry, it would be a manifest YAML.
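
For the Node.js app in this example, a minimal Dockerfile might look something like this (the base image tag and file names are assumptions for illustration):

    # Dockerfile -- a minimal sketch for containerizing the Node.js app
    FROM node:18-alpine
    WORKDIR /app
    # Copy the dependency manifest first so the install step can be cached
    COPY package*.json ./
    RUN npm install --production
    # Copy in the rest of the application source
    COPY . .
    EXPOSE 3000
    CMD ["node", "app.js"]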

Next, what you’ll do is create the actual image itself. Again, if you’re working with something like Docker, that would be a Docker image; if you’re working with rkt, it would be an ACI, or App Container Image. Regardless of the containerization technology, this process stays the same.
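
With Docker, for instance, turning that Dockerfile into an image is a single build command (the image name my-node-app:1.0 is just an example):

    # Build an image from the Dockerfile in the current directory
    docker build -t my-node-app:1.0 .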

The last thing you end up with is the actual container itself, which contains all of the runtimes, libraries, and binaries needed to run the application.
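
Sticking with Docker as the example, starting a container from that image might look like this (the port mapping assumes the sketch above):

    # Run the image as a container, mapping host port 3000 to the app's port
    docker run -d -p 3000:3000 my-node-app:1.0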

Components of a container

That application runs on a very similar setup to the VMs, but what we’ve got on this side is, again, a host operating system. The difference here is, instead of a hypervisor, we’re going to have something like a runtime engine. So, if you’re using Docker, this would be the Docker Engine, and, you know, different containerization technologies would have a different engine. Regardless, it’s something that runs those containers.

Again, we’ve got this shared pool of resources, so we can assume that the host OS and runtime engine alone consume some set of those resources.

Containerizing the technology

Next, let’s think about actually containerizing this application. We talked about the three-step process—we create a Dockerfile, we build out the image, we push it to a registry, and we have our container—and we can start pushing the application out as containers.
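
In Docker terms, that registry step might look like this (the registry host and namespace are placeholders):

    # Tag the image for a registry and push it
    docker tag my-node-app:1.0 registry.example.com/team/my-node-app:1.0
    docker push registry.example.com/team/my-node-app:1.0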

The great thing is, these are going to be much more lightweight. When deploying multiple containers—since you don’t have to worry about a guest OS this time—you really just have the libraries as well as the application itself.

So, we scale that out three times, and because we don’t have to duplicate all of those operating system dependencies and create bloated VMs, we actually use fewer resources. So, let’s use a different color here. Scaling that out three times, we still have a good amount of resources left.
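
As a sketch, scaling out three copies can be as simple as starting the same image three times (only the host ports differ; the names are examples):

    # Start three containers from the same image
    docker run -d -p 3001:3000 my-node-app:1.0
    docker run -d -p 3002:3000 my-node-app:1.0
    docker run -d -p 3003:3000 my-node-app:1.0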

Taking advantage of a third-party service

Next, let’s say that my coworker decides, hey, for this Node.js application, let’s take advantage of a third party—say, a cognitive API to do something like image recognition. So, let’s say that we’ve got our third-party service, and we want to access it using maybe a Python application.

So, he’s created that service that accesses the third-party APIs, and from our Node.js application, we want to access that Python app, which in turn accesses the service.
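
As a sketch, the call from the Node.js app to that Python service could be as simple as this (the hostname, port, and path are hypothetical; fetch is global in Node 18+):

    // call-service.mjs -- sketch of the Node.js app calling the Python service
    // The URL below is hypothetical, just to illustrate the service-to-service call
    const response = await fetch('http://python-api:5000/recognize', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ imageUrl: 'https://example.com/photo.jpg' }),
    });
    console.log(await response.json());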

If we wanted to do this with VMs, I’d be really tempted to just put both the Node.js application and the Python application into a single VM because, essentially, that would allow me to continue to use the VMs that I have.

But that’s not truly cloud-native, right? Because if I wanted to scale out the Node.js app but not the Python app, I wouldn’t be able to if they’re running in the same VM. So, to do it in a truly cloud-native way, I would essentially have to free up some of these resources—basically, get rid of one of these VMs and deploy the Python application in its place. And, you know, that’s not ideal.

The container-based approach

But with the container-based approach, since we’re modular, we can simply deploy one copy of the Python application. So we’ll go ahead and do that in a different color here. And that consumes a little bit more of the resources.
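
In Docker terms, that modularity is just one more run command (the Python image name is hypothetical):

    # Deploy a single copy of the Python service alongside the Node.js containers
    docker run -d -p 5000:5000 python-api:1.0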

Then, the great thing about container technology is that the remaining resources become shared between all of the processes running. In fact, another advantage: if these container processes aren’t actually utilizing the CPU or memory, those shared resources become accessible to the other containers running on that hardware.
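
By default, containers draw from that shared pool as needed; if you do want to cap an individual container, Docker exposes flags for it (the limit values here are arbitrary examples):

    # Optionally cap a container's share of CPU and memory
    docker run -d --cpus=0.5 --memory=256m my-node-app:1.0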

Taking advantage of cloud-native-based architectures with containers

So, with container-based technology, we can truly take advantage of cloud-native architectures. We talked about the portability of containers; we talked about how it’s easier to scale them out; and, overall, the three-step process by which we build and push containers allows for more Agile DevOps and continuous integration and delivery.
