October 16, 2018 By Sai Vennam 7 min read

Kubernetes vs. Docker: It’s Not an Either/Or Question

One of the most common questions developers ask is whether they should be using Docker or Kubernetes. Most people have a working knowledge of Docker—it’s really easy to get started with and is a great tool for containerization, managing deployments, and speeding up development. Most Docker users have heard of Kubernetes but may be hesitant to move to a new technology, especially given its steeper learning curve.

Despite a common misconception, Kubernetes and Docker are not opposing technologies—they actually complement one another. Moving to scale with Docker alone poses many challenges; Kubernetes tackles the challenges that emerge with large Docker-based deployments. If you’re already working with Docker, Kubernetes is a logical next step for managing your workloads. In this video, we’ll outline the key advantages of Docker and Kubernetes when used together.


Video Transcript

Hi everyone, my name is Sai Vennam, and I’m a Developer Advocate with IBM. Here at IBM, we’re always enabling developers to use the latest and greatest technologies when developing their applications. But a question I almost always seem to run into is whether you should use Docker or Kubernetes.

Kubernetes vs. Docker: It’s not actually a competition

I think there’s a small misconception out there that you have to be using one or the other. The fact is, Kubernetes allows you to keep using your existing Docker containers and workloads while tackling some of the complexity issues you run into when moving to scale.

Starting with a cloud-native application

To better answer this question, let’s start with a simple cloud-native application sketched out up here. Let’s say the front end of this application is something we wrote with React, backed by Node.js. For database access—I’m a fan of using Java for database access—we’ll say Java up here. And for accessing external APIs, maybe we use Python, perhaps a Flask application that allows us to serve REST endpoints.

Using a Docker approach to deploying an application

Now, putting on my hat as an ops engineer using a purely Docker approach to deploying an application, let’s take this app and move it over to a sample server stack that we have sketched out over here.

On every server stack, you’re going to have the basics, right? So, we’ll have the hardware. We’ll have the OS, which is generally going to be Ubuntu when you’re working with Docker. And we’ll have the Docker daemon installed on top of that OS—that’s what allows us to spin up containers.

Spinning up containers

So Docker actually provides a number of great tools for working with our containerized applications. We take these applications and create Docker containers out of them: we’ll do a docker build, a docker push up to a registry, and then SSH into our stack and run docker run commands, or even use Docker Compose, to spin up our containers.
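As a rough sketch, that workflow looks something like this (the image name and registry are hypothetical):

    # Build the image from the app's Dockerfile and tag it
    docker build -t registry.example.com/frontend:1.0 .
    # Push it up to a registry
    docker push registry.example.com/frontend:1.0
    # Then, after SSHing into the server stack, run it as a container
    docker run -d --name frontend registry.example.com/frontend:1.0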

So, let’s take a look at what that would look like. We’ve got our Node.js application, we’ve got our Java app, as well as the Python application.

Scaling out individual pieces

And let’s go ahead and scale out these individual pieces as well to take advantage of all the resources we have. We can do this as many times as we want, but let’s assume for now that we scale each of them out twice to make effective use of the resources available.
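With Docker Compose, one way to do that scaling is a flag per service. This is a minimal sketch, assuming hypothetical services named frontend, data-access, and external-api are defined in a docker-compose.yml:

    # Spin up the stack with two containers of each service
    docker-compose up -d --scale frontend=2 --scale data-access=2 --scale external-api=2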

So using Docker and the tools that Docker makes available, a simple deployment is very easy. But, let’s imagine that our application starts to get a lot more load—a lot more people are hitting it, and we realize we need to scale out to be able to provide a better user experience.

So as an ops engineer, my first instinct might be: hey, I’ve already got scripts to make this stack, so let’s simply get new hardware and do that exact same deployment multiple times. This can fall apart for many reasons when you start moving to scale. For example, what if your dev team has to create a new microservice to support a new requirement—where do we piece that in, especially if you’re already making effective use of the hardware? The ops engineer would have to figure that out. In addition, a big advantage of microservice-based applications is being able to scale individual components independently, so that’s another thing the ops engineer would have to script: finding the most effective way to scale things out in response to load so that user experience issues can be identified and addressed.

Kubernetes: An orchestration tool for Dockerized applications

So, this is where an orchestration tool comes in—something like Kubernetes, which is going to allow you to use your existing Dockerized applications but orchestrate them and make more effective use of your servers and space.

So, what we have sketched out down here is a number of boxes, each representing a server stack; in Kubernetes land, we call them worker nodes. We’re going to have Kubernetes installed on every single one of these nodes, and one of them is going to be the master node (whereas the other ones are workers).

This master node is actually connected to all the worker nodes and decides where to host our applications (our Docker containers), how to piece them together, and even manages orchestrating them—starting, stopping, updating, that kind of thing.
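As a quick illustration, once a cluster like this is running, the kubectl command line talks to the master and lists the nodes it can schedule containers onto (the output below is illustrative, not exact):

    kubectl get nodes
    # NAME       STATUS   ROLES    AGE   VERSION
    # master     Ready    master   30d   v1.12.1
    # worker-1   Ready    <none>   30d   v1.12.1
    # worker-2   Ready    <none>   30d   v1.12.1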

Advantages of Kubernetes: Deployment, development, and monitoring

I’d say there are three major advantages that Kubernetes provides that I want to walk through: deployment, making development easier, and providing monitoring tools.

Deployment

 

The first step, as expected, is going to be deployment. So, coming back to our application architecture—let’s say we want to deploy that React app eight times. We’ll say we want eight instances, each of which we expect to consume about 128 megabytes of memory, and we can actually specify some other parameters in there as well; policies like when to restart, that kind of thing. When we box that up, what we get is a Kubernetes deployment.
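Boxed up as a manifest, that deployment might look something like this minimal sketch (the image name, registry, and filename are hypothetical; the replica count and memory figure are the ones from the whiteboard):

    # frontend-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend
    spec:
      replicas: 8                # the eight instances we asked for
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: frontend
            image: registry.example.com/frontend:1.0   # hypothetical React image
            resources:
              requests:
                memory: "128Mi"  # about 128 megabytes per instance
          restartPolicy: Always  # the restart policy mentioned above

Handing it to the cluster is then a single command: kubectl apply -f frontend-deployment.yaml.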

A Kubernetes deployment is not a one-time thing; it’s something that grows and lives and breathes with the application and our full stack. For example, if the React app happens to crash, Kubernetes will automatically restart it to get back to the desired state we identified when we first created the deployment. A deployment is always growing and always living with our application.
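You can watch that self-healing behavior from the command line; here’s a quick sketch (the pod name is illustrative):

    kubectl get pods -l app=frontend             # eight Running pods
    kubectl delete pod frontend-7d4b9cd-x2z1q    # simulate a crash
    kubectl get pods -l app=frontend             # a replacement is already starting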

So, I think we can effectively say that it’s made deployment—in addition to scaling—easier.

Development

Let’s talk about development. You might be wondering: once we’ve created the deployments for each of these individual services and scaled all of them out, we have lots of different microservices out there with different endpoints. For example, if our front end needs to access the database, there might be eight different instances of that Java app talking to that database, and we have to reach one of them to get our request fulfilled, right?

So what Kubernetes does is deploy load balancers for all of the microservices that we scaled out and, in addition, take advantage of service registry and discovery capabilities to allow our applications to talk to each other using something called a Kubernetes service. For each of these, Kubernetes will also create a service, which we can simply label service A, B, and C.

Obviously, you can have more meaningful names for those as well, but very simply, these applications can now speak to each other just by using the service names that are laid out in Kubernetes. So, essentially, I can say that Kubernetes has made development easier.
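For the Java tier, for example, such a service might look like this minimal sketch (the names and port are hypothetical), applied with kubectl apply -f service-b.yaml:

    # service-b.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: service-b        # the label we gave the Java tier
    spec:
      selector:
        app: data-access     # matches the labels on the Java app's pods
      ports:
      - port: 8080           # port the service exposes
        targetPort: 8080     # port the containers listen on

Once it’s applied, any pod in the cluster can reach the Java app at http://service-b:8080, and Kubernetes load balances requests across every instance behind it.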

Monitoring

And the last thing I want to touch on is monitoring. Kubernetes has a lot of built-in capabilities that allow you to see logs, CPU load, and more, all in a neat UI. But sometimes there is more you want to see about your application, and the open-source community has developed a number of amazing tools to give you introspection into your running application.
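The built-in pieces are available from the command line, too; a minimal sketch, assuming the metrics-server add-on is installed for the kubectl top commands:

    kubectl logs deployment/frontend   # container logs for the frontend
    kubectl top pods                   # CPU and memory per pod
    kubectl top nodes                  # CPU and memory per worker node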

The main one I’m thinking about right now is Istio—and although that’s a little bit more of an advanced topic, we will likely hit that in a future whiteboarding session.

Kubernetes vs. Docker: It’s not actually one or the other

So back to our main topic: Kubernetes vs. Docker. It’s definitely not a choice of one or the other. Kubernetes allows you to take advantage of your existing Docker workloads and run them at scale while tackling real complexities. (Note: For further information, check out “Kubernetes vs. Docker: Why not both?”)

Kubernetes is great to get started with, even if you’re building a small app, if you anticipate that one day you’ll need to move to scale. If you’re already taking advantage of Docker and containers with your applications, moving them onto Kubernetes can really help you tackle some of the operations overhead that almost every application runs into when moving to scale.

Thank you for joining me today. I hope you find this useful, and definitely stay tuned for additional whiteboarding sessions in the future.
