December 5, 2014 | Written by: Staff Writer
Recently, technologies like Docker have popularized software containers as basic computational units for application deployment. In addition to easing the process of application development, testing, deployment and maintenance, these technologies also promise to be lighter-weight alternatives to full-fledged virtualization.
In most environments, containers will coexist with other computational units like virtual machines (VMs) and bare metal computers, either due to legacy or supportability reasons, or due to the fact that a particular form factor is best suited for a given workload. As such, it is quite beneficial to be able to unify the storage and network infrastructure independent of the computational unit.
In this post, we’ll take a closer look at the networking aspects. Our goal is to provide a common virtual networking infrastructure for containers, VMs and bare metal workloads. We believe a unified virtual network is needed to support a mixed computational environment so that the application workloads can transparently reach each other and the infrastructure services that they need.
We’ll focus on software-defined networking (SDN) for containers using a network virtualization framework that already works for virtualized and non-virtualized (bare metal) workloads.
Networking for containers
Networking solutions currently used for containers are reminiscent of the early days of x86 virtualization: they tend to be either too restricted and specialized or too slow. Over the years, networking for virtual machines has evolved tremendously, and hypervisors now routinely support advanced networking capabilities, including technologies like network virtualization using overlays. We think that networking for containers can benefit from these advances in network virtualization.
We believe that a unified networking solution that allows the same networking solution independent of the computational unit in use is the best way to go forward.
Case study: Docker networking
Docker containers are hosted in a Linux environment, and numerous Linux distributions provide support for running Docker; the Docker documentation maintains a list of supported platforms.
We will look at Docker container networking from the point of view of:
• Container-to-container communication through a virtual bridge inside the host
• Container-to-outside-world communication
Note that Docker provides additional modes of container-to-container communication within a host; we will not cover these modes in this post.
By default, each container’s virtual network interface is given a private IP address from one of the ranges defined in RFC 1918, chosen so that it is not already in use on the server hosting Docker.
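As a quick illustration, the RFC 1918 ranges mentioned above can be checked with Python's standard `ipaddress` module. Docker's default bridge subnet (typically within 172.17.0.0/16) falls inside the 172.16.0.0/12 block:

```python
import ipaddress

# The three private address blocks defined in RFC 1918
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls in one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in PRIVATE_BLOCKS)

print(is_rfc1918("172.17.0.2"))  # True  -- a typical Docker container address
print(is_rfc1918("8.8.8.8"))     # False -- a public address
```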
Also by default, the Docker daemon creates a virtual Ethernet bridge called docker0, to which all Docker containers in a server are attached. Communication between containers in the same server is accomplished through the docker0 bridge, since the bridge switches traffic among the interfaces attached to it.
When a new container is created, Docker creates two virtual interfaces. One interface, typically called eth0, is attached to the container and assigned an IP address from the IP subnet range reserved by Docker when it was started. The default route attached to the eth0 interface points to the IP address that the Docker host owns on the docker0 bridge.
The other interface uses the namespace of the host; it typically has a name of the type veth9a7d and is bound to the docker0 bridge. Docker creates a logical link between the two interfaces. This allows the newly created container to communicate with other containers in the same host and the host itself.
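The addressing scheme described above can be sketched with a few lines of Python. This is a simplified illustration, not Docker's actual allocator: the subnet 172.17.0.0/16 is a typical default, the bridge takes an address in the subnet (the exact address varies by Docker version), and each container's eth0 receives a further address from the same range:

```python
import ipaddress

# Typical default subnet for the docker0 bridge (configurable at daemon start)
subnet = ipaddress.ip_network("172.17.0.0/16")
hosts = subnet.hosts()

# In this sketch the first usable address goes to the docker0 bridge itself;
# it serves as the default gateway for every container attached to the bridge.
gateway = next(hosts)

# Each new container's eth0 is assigned the next free address in the subnet.
containers = [next(hosts) for _ in range(3)]

print(gateway)                          # 172.17.0.1
print([str(c) for c in containers])     # ['172.17.0.2', '172.17.0.3', '172.17.0.4']
```

Every container's default route points back at the gateway address, which is how traffic destined for other subnets reaches the host.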
In addition to creating the two interfaces for the container, Docker also modifies the host’s iptables rules so that outbound traffic is permitted and container IP addresses are rewritten, via network address translation (NAT), to the host’s address.
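To make the NAT behavior concrete, here is a toy model of what source NAT does to outbound container traffic. This is a conceptual sketch, not Docker's implementation (Docker delegates this to the kernel via iptables); the class name, host address and port range are all illustrative:

```python
import itertools

class SourceNat:
    """Toy model of source NAT: outbound packets have their
    (container_ip, port) source rewritten to (host_ip, ephemeral_port),
    and reply traffic is mapped back through the translation table."""

    def __init__(self, host_ip):
        self.host_ip = host_ip
        self.ports = itertools.count(32768)  # ephemeral port allocator
        self.table = {}  # host_port -> (container_ip, container_port)

    def outbound(self, container_ip, container_port):
        # Rewrite the source of an outgoing packet and remember the mapping.
        host_port = next(self.ports)
        self.table[host_port] = (container_ip, container_port)
        return self.host_ip, host_port

    def inbound(self, host_port):
        # Translate a reply back to the originating container.
        return self.table[host_port]

nat = SourceNat("203.0.113.10")
src = nat.outbound("172.17.0.2", 5000)
print(src)                 # ('203.0.113.10', 32768)
print(nat.inbound(32768))  # ('172.17.0.2', 5000)
```

Note the asymmetry this creates: containers can initiate connections out, but an outside client has no stable address at which to reach a container directly, which is exactly the limitation discussed below.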
You can find the details in Docker’s networking documentation.
Two main characteristics of Docker networking prompted us to investigate IBM Software Defined Networking for Virtual Environments (IBM SDN VE) for Docker:
• Docker uses NAT to facilitate container communication outside of the host. This prevents deploying services in containers that need to be directly reachable by outside clients.
• Containers from different hosts cannot be in the same network. This restricts users from creating commonly used network topologies with containers.
In addition to the above two restrictions, in general it would be beneficial to provide a generic virtual network infrastructure so that workloads that are currently hosted in VMs or on bare metal can be easily deployed in containers.
A simple networking solution for Docker containers using SDN VE can provide the following benefits:
• Multiple containers within the same server can participate in the same or different virtual networks.
• Virtual networks can span all the servers that host containers.
• Containers from different servers can be put into the same virtual network.
• Containers can be accessed from outside networks.
• Containers can access data center resources that are not hosted in a container.
IBM Software Defined Networking for Virtual Environments
IBM SDN VE is a general-purpose, VXLAN-based overlay solution that provides network virtualization for virtual machines. Currently IBM SDN VE is supported for VMware ESX and Linux KVM hypervisors. IBM SDN VE KVM Edition uses the Linux bridge and VXLAN kernel modules to provide virtual networking to VMs hosted in a Linux KVM environment.
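For readers unfamiliar with VXLAN: it tunnels Layer 2 frames over UDP (port 4789), prefixing each frame with an 8-byte header that carries a 24-bit virtual network identifier (VNI), which is what lets tenant address spaces overlap without colliding. The sketch below builds and parses that header per the format in RFC 7348; it is an illustration of the encapsulation format only, not of any SDN VE component:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Word 1: the I flag (0x08) set in the top byte; remaining bits reserved.
    Word 2: the 24-bit VNI in the upper bits; the low byte is reserved.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack(">II", 0x08000000, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the VNI from a VXLAN header, checking the I flag."""
    flags, word2 = struct.unpack(">II", header)
    if not flags & 0x08000000:
        raise ValueError("VXLAN I flag not set")
    return word2 >> 8

hdr = vxlan_header(42)
print(len(hdr))        # 8
print(parse_vni(hdr))  # 42
```

The 24-bit VNI allows roughly 16 million virtual networks, compared with the 4,096 limit of traditional VLAN IDs.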
IBM has contributed OpenDOVE, a network virtualization solution based on IBM SDN VE KVM Edition, to the OpenDaylight community. OpenDaylight is an open source SDN controller. A high-level view of the SDN VE architecture is shown below:
IBM SDN VE is a multi-tenant virtual networking solution that provides the following features:
• A virtual network that can span the entire data center.
• Any VM may be put in any virtual network.
• VMs hosted in different servers may be placed in the same virtual network.
• Each server may participate in multiple virtual networks.
• VXLAN encapsulation can be used to isolate virtual network IP addressing from physical network IP addressing.
• VMs can migrate across physical Layer 2 boundaries.
• Virtual gateways can be used to facilitate bi-directional communication to external IP networks such as the Internet and bi-directional access to services such as DNS or LDAP in the data center.
• A service chaining infrastructure allows network services such as firewalls and load balancers to be inserted between network tiers.
The solution also includes a secure management and control plane.
Our goal is to provide all of the above features for Docker containers delivered in a cloud operational model that is large scale, self-provisioning and automated.
IBM SDN VE can provide a consistent virtual networking infrastructure for containers, virtual machines and bare metal servers running a Linux OS.
Here is a high-level architectural view of how SDN VE will provide virtual networking for Docker containers:
And this provides a high-level view of the architecture for unified virtual networking for containers, VMs and bare metal servers:
We are currently working on extending IBM SDN VE to support containers, providing a unified, consistent network model for a variety of computational units. If you are interested in a beta trial or in discussing IBM SDN VE for containers in further detail, please comment below to start the conversation.