Over the years I have worked with many different systems: old ZX Spectrums, BBC Micros, various PCs from the 386 and 486 all the way to modern desktops and laptops, along with a whole collection of UNIX hardware, from my first Sequent system running DYNIX/ptx, through IBM PowerPC and pSeries, to the recent IBM Power systems, plus hardware running HP-UX, Solaris and a whole range of Linux flavours: Red Hat, SLES and Ubuntu, to name but a few. Within those years we have moved from dedicated hardware for each system to the virtual framework we have now, which in turn mirrors something that has been done on mainframes for years.

But the revolution continues in the virtualization stakes, with each manufacturer trying to get the best market share for their solutions, and within those there is a lot of micro-virtualization going on. It's not in itself a completely new thing: AIX, for example, has had WPARs enabling you to run multiple servers within the hardware, and Solaris used a system of Zones. The big thing at the moment is containers, or in the case of my own experience, Docker. Having worked with it for a while, and with a number of the virtualization standards above, I do get asked about the difference, and it's not always the easiest thing to explain; after all, not everyone understands all the concepts. I hope here to explain it all with a mixture of research, images and my own understanding, so please do give feedback if I have made any mistakes.
How Docker works:
Originally Docker used Linux Containers (LXC), but it switched to runC (formerly known as libcontainer), which follows the OCI (Open Container Initiative) specification. A container runs on the same operating system kernel as its host, which means it can share the host operating system's resources, use layered filesystems, and have its networking managed by Docker.
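A quick way to see that kernel sharing for yourself, assuming you have Docker installed and a small image such as `alpine` to hand, is to compare what the host and a container report (on recent Docker versions, `docker info` will also confirm runC as the runtime):

```
# Kernel version on the host
uname -r

# Kernel version inside a container - identical, because the
# container shares the host kernel rather than booting its own
docker run --rm alpine uname -r

# Confirm the container runtime in use (runC by default)
docker info | grep -i runtime
```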
Docker uses a layered filesystem, so you can have read-only parts and a writable part and merge them together. This means the common areas of the operating system are set as read-only and shared amongst all of your containers, while each container gets its own writable mount on top.
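To make the layering visible, here is a minimal sketch using standard Docker commands (any common image will do; `ubuntu:22.04` is just an example):

```
# List the read-only layers that make up an image
docker pull ubuntu:22.04
docker history ubuntu:22.04

# Start two containers from the same image; both share the image's
# read-only layers and each gets only its own thin writable layer
docker run -d --name web1 ubuntu:22.04 sleep infinity
docker run -d --name web2 ubuntu:22.04 sleep infinity

# SIZE is each container's own writable layer; "virtual" includes
# the shared read-only image layers
docker ps -s
```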
If you have an image that is 10GB in size and need 10 servers to run your applications on, with full VMs you would need a full 100GB of storage. Using runC and a layered filesystem, you share the majority of the OS data in the base image and hand out 10 writable client layers, so depending on your application you could be using only a little over your base 10GB of storage. When your VMs run into the thousands, you can easily see the savings.
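As a rough sketch of that arithmetic (`myapp` here stands for a hypothetical 10GB image), the loop below starts ten containers from the one image; `docker system df` then shows the image stored once, with each container adding only its own writable layer:

```
# Ten containers, one shared image
for i in $(seq 1 10); do
  docker run -d --name app$i myapp sleep infinity
done

# Storage is the ~10GB image once, plus ten small writable layers,
# not ten full copies
docker system df
docker ps -s
```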
Virtualized versus Containers:
A fully virtualized host gets its own set of resources allocated, with minimal sharing and more isolation, but all this comes with a heavier footprint.
With containers you get less isolation, but they are massively lightweight and need fewer resources. This means a system that could only run maybe 10 virtual machines can run thousands of containers without even noticing.
A VM usually takes a number of minutes to start; containers take just seconds, and sometimes even less.
As you can see, there are advantages and disadvantages to each type, so it's a call based on the requirements at the time. But if you need to spawn hundreds or thousands of clients quickly, say for a testing phase, then containers are the way to go.
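If you want to see that startup speed for yourself, timing a throwaway container is a one-liner (assuming the small `alpine` image is already pulled, so download time doesn't skew the result):

```
# Start a container, run a command, tear it down - usually well under a second
time docker run --rm alpine /bin/true
```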
Keeping your environments consistent can at times be a difficult task; multiple change waves across hardware and OS types mean that as soon as you have completed one wave, another is almost due. Docker gives you the ability to snapshot the OS into a common image, making it easier to deploy on other Docker hosts, so everywhere can use the same image. Within those images you can snapshot application deployments, meaning that a host deployment is simply a matter of selecting the images you need and running a few commands. Even systems using the same software can share those resources, so two containers using HTTP Server will share the base code, though further applications on them might differ and make use of their own writable filesystems, software and code.
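As a minimal sketch of that deployment flow (the image name and registry address are hypothetical), you prepare an image once, push it to a registry, and every other Docker host pulls and runs the identical bits:

```
# Tag a prepared image and push it to an internal registry
docker tag myapp registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0

# On any other Docker host, deployment is just pull and run
docker pull registry.example.com/myapp:1.0
docker run -d registry.example.com/myapp:1.0
```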
Containers have low overheads and better performance compared to VMs. While the Docker host can run natively on the server hardware, you can of course run it within a VM of its own, which gives you even more control of an environment and the users more flexibility. A project that rolled out multiple evolutions of software and changes onto many virtual machines can now enclose all of this within one Docker host spawning many containers. I know from experience that when I broke my Docker container while porting some code, rather than needing a VM rebuild to get a clean host, I just terminated the container and spawned a new one. In fact, if I wanted to test a change I could capture the container at that point, perform the work, and either run with the new version or just abandon it. It was massively liberating.
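That capture-and-rollback habit maps onto just a few commands; here is a sketch, with the container and image names as placeholders:

```
# Snapshot the running container before a risky change
docker commit mydev mydev-snapshot:before-change

# If the change goes wrong, throw the broken container away...
docker rm -f mydev

# ...and spawn a clean one from the snapshot in seconds
docker run -d --name mydev mydev-snapshot:before-change
```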
You can run Docker on anything from the cheapest hardware (a Raspberry Pi) and your laptop (very useful for remote work), to midrange systems and now mainframes. There are multiple choices of operating system that you can use: RHEL, Oracle Linux, SLES, Ubuntu, Mac OS X, Windows, and even in the cloud; just follow this >link< for more info.
Within those Docker hosts you can run multiple versions, with matching kernels. For example, I ran a Docker host (version 1.6 at the time) on a zLinux VM using SLES 12, creating multiple containers using Red Hat 6 and 7 and SLES 11 and 12, with no problems.
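You can reproduce the same effect on any Docker host: each container carries its own distro userland, while the kernel is always the host's (the image names below are just common examples):

```
# Two different distro userlands...
docker run --rm ubuntu:22.04 head -2 /etc/os-release
docker run --rm registry.access.redhat.com/ubi9 head -2 /etc/os-release

# ...but both report the host's kernel
docker run --rm ubuntu:22.04 uname -r
uname -r
```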
Above are a few examples of Docker hardware deployment:
- Type 2 Hypervisor, VMware Workstation for example.
- Type 2 with VM Guest Docker.
- Native host Docker.
- Type 1 Hypervisor, native IBM Z, VMware ESX and others.
- Type 1 with VM Guest Docker.
I've created the image above to show a few examples of running Docker, based on the hardware it sits on. Please let me know if there are any mistakes.