
Docker: Managing the excitement

If you’re playing around in the cloud space, it would be hard to escape the buzz around a relatively new piece of technology called Docker.

For those who somehow don’t know about it, think of Docker as providing mini virtual machines (VMs). It lets you take an existing machine (virtual or physical) and group processes, files, resources and so on into isolated spaces that give you the appearance of having your own machine without the overhead of actually creating a new one. Much in the same way VMs allow you to simulate multiple physical machines within a single physical host, Docker takes that simulation one step further by squeezing more virtualized environments (called “containers”) into a single machine.

In the end, whether your code is running on a physical machine, a virtual machine, or a Docker container, you shouldn’t really know or care.  This is all about better resource management—squeezing more running code into existing hardware.

(Related: The key differentiators of Docker technology)

Is the buzz justified?

Under the covers, Docker uses well-known existing Linux container technologies, so in that respect the capability has been around for a while. So why the buzz around Docker? I think the answer is actually pretty simple: It’s all about presentation. I’m not trying to downplay all of the hard technical challenges that the Docker team had to overcome, but to me they did something that no one else took the time to do: They made it simple to use.

The simplicity that Docker brings to the table comes from two key aspects. First is its public global repository of Docker images. People can share snapshots of containers (called images) by uploading and downloading them via this repository, all through simple commands. This means that with only a few keystrokes I can tell Docker to use a pre-existing image as the starting point for my work. No long, error-prone install processes to deal with. I can start on my “real job” of creating an app in seconds.
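For example, a first session might look like the following (a sketch that assumes Docker is already installed and running; `ubuntu` is a real image on the public registry, but any image name works):

```shell
# Fetch the official Ubuntu image from the public Docker registry
docker pull ubuntu

# Start a container from that image and run a single command inside it
docker run ubuntu echo "hello from a container"
```

That second command creates the container, runs the command inside it and exits—no installer, no VM provisioning wizard.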

The second key thing Docker did, which is closely related to the first, is that it did a great job of dumbing things down. I am by no means a Docker expert (and that’s the point: you don’t need to be one to use it), but some of the things that jumped out at me with respect to its simplicity are:

Docker comes as a single executable. Who cares? Well, I personally get tired of the long, tedious (and often buggy) install processes that “easy” software packages claim to have. What’s easier than copying an executable and running it? This tells me the Docker team cared about the user experience from the outset.

Once it’s installed, I can immediately create a new container with one command. For example, > docker run -i -t ubuntu bash creates a new container (running Ubuntu) and puts me into a bash shell. It completes in seconds—less than a second after the first run, since images are cached. From there I have a new container to play with.
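When you exit that shell the container stops but isn’t deleted, and a few more commands cover the rest of its life cycle (a sketch; replace <container-id> with the ID that docker ps reports):

```shell
docker ps -a                 # list all containers, including stopped ones
docker start <container-id>  # restart a stopped container
docker rm <container-id>     # remove it once you are done
```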

Developing my own images (meaning saved Docker containers with my app and all of its prerequisites installed) is trivial and intuitive. Imagine what you would do if someone gave you a new machine and said, “Install ‘X’ on here and make sure you can do it in a repeatable fashion.” Chances are you’d write a script that walks through all of the steps necessary to download and install everything. That’s exactly what Docker asks you to do, too. Plus, you can save all those instructions (in a “Dockerfile”) so that Docker can run them again for you when you ask, or you can save the output of those steps as an “image,” which can be used to prepopulate a new container later. The choice is yours, and both paths are easy to follow.
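To make that concrete, here is a minimal Dockerfile of the kind described above (a hypothetical sketch: the `ubuntu` base image is real, but `app.py` and the installed package are stand-ins for whatever your app actually needs):

```dockerfile
# Start from a pre-existing image in the public registry
FROM ubuntu

# The same steps you would otherwise script by hand
RUN apt-get update && apt-get install -y python

# Copy your app into the image
COPY app.py /opt/app/app.py

# The command a new container will run by default
CMD ["python", "/opt/app/app.py"]
```

Running `docker build -t myapp .` replays those steps and saves the result as an image named `myapp`; `docker run myapp` then uses that image to prepopulate a new container.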

There are many (and I mean many) more features of Docker that you’ll want to explore, but within minutes of installing it, just doing the above should give you a sense that the Docker folks did something really special.

As I mentioned above, Linux container technology isn’t new, but in my opinion none of the packages that use or expose it really tried to bring it to the masses in the easily consumable way that Docker did, and sometimes that makes all the difference.

What does this really mean?

Once you get past the novelty of Docker and playing around with the many exciting features it offers, you might be wondering how it fits in with all of the other cloud activity.

Is Docker an IaaS or a PaaS? Is Docker just a mini-VM, as I described it above? Meaning, is it more of an infrastructure-as-a-service thing? Or, since you actually deploy running code into those containers via Docker, is it more of a platform-as-a-service thing?

In my opinion, it’s both. We’re already seeing activity in projects like OpenStack to allow you to spin up Docker containers when you might normally create a new VM. Likewise, since people are deploying apps via Docker images, and there are many projects out there to manage those containers, you can consider Docker as a pseudo-PaaS platform. I say “pseudo” because normally PaaS platforms try to do a lot of the infrastructure management for you (like load-balancing, auto-scaling, etc.), and that’s not there in Docker—at least not yet.

One of the big selling points of a PaaS platform is that it’s supposed to handle a lot of the underlying technology for you. If you know Cloud Foundry, then you know that you’re supposed to give it your web app and it’ll find the appropriate runtime (language) and the appropriate framework (app server), install them both, and then deploy your app into them. That’s a ton of work removed from the developer’s plate, and it’s great. Now think about what Docker does. It makes you go back in time and perform all of those steps we were told were too tedious and annoying. Granted, I’m sure the true geeks among us don’t care, but to me it flies in the face of what a PaaS is supposed to be. And this is why I sometimes consider Docker to be more of a mini-IaaS platform.

On the flip side, in reality not everything fits into a nice web-app model, and sometimes you really do need to work at the VM level. This is where I think Docker really shines. To me, the real killer platform will be the one that allows you to deploy an app that spans the as-a-service layers seamlessly. OpenStack’s Heat may be the answer here, and I would love for Cloud Foundry to employ it, because they have all of the pieces necessary to deploy VMs, generic Docker images and web apps. They just need to pull it all together before someone else does. But I digress . . .

So, how does Docker relate to a PaaS offering like Cloud Foundry? This will be interesting to watch as time progresses. Right now, I think there are some pleasantries being expressed between the two communities that mask the truth. CF is in the process of adding support for running Docker images as normal CF apps. But at the same time, when you listen to people talk about Docker, many of them describe it more like a PaaS, meaning it’s the platform into which you deploy your app. So why would you need CF? Well, because CF gives you the stuff I mentioned above, like load balancing. However, does anyone really think that Docker won’t support those same features soon (if not as part of Docker itself, then as part of some very closely related tooling)?

Additionally, CF is very focused on web-based apps, while Docker supports pretty much any kind of app. So once Docker does support those missing PaaS features, it’ll put some pressure on CF if it continues to limit itself to web apps. It should be interesting to see how that shakes out.

What about Docker managers? There’s a flurry of activity around tools to help you manage Docker and/or the applications that run in it: OpenStack’s Container Service, Mesos, Kubernetes and Panamax, to name just the ones that come to mind. Each appears to want to help people manage Docker, and each appears to look at the problem slightly differently. I think it’s too soon to pick a winner, so we’ll just need to see how it goes, but I go back to what I said above: Looking at it from just one layer of the as-a-service stack is probably a mistake.

Anyway, I encourage you to go play with Docker. It really is a cool piece of technology, but watch out for the hype that surrounds it and in particular the tooling industry being built around it. One of Docker’s key selling points is portability (their images can run just about anywhere), so make sure you don’t lose that by tying yourself to tooling that locks you into one platform.

