The ability to run multiple virtual machines (VMs) on single server hardware platforms
provides cost, system management, and flexibility advantages in IT infrastructure today.
Hosting multiple VMs on single hardware platforms reduces hardware expenses and helps
minimize infrastructure costs such as power consumption and cooling. Consolidating
operationally distinct systems as VMs on single hardware platforms simplifies managing
those systems through administrative layers such as the open source virtualization
API (libvirt) and tools that are based on it, such as the
graphical Virtual Machine Manager (VMM). Virtualization also provides the operational
flexibility required in today's service-oriented, high-availability IT operations by making
it possible to migrate running VMs from one physical host to another when mandated by
hardware or physical plant problems or to maximize performance through load balancing
or in response to increasing processor and memory requirements.
Open source desktop virtualization applications such as VirtualBox make it possible for users and even some small to medium-sized business or enterprise (SMB/SME) environments to run multiple VMs on single physical systems. However, virtualization environments such as VirtualBox run as client applications on desktop or server systems. Enterprise computing environments require higher-performance, server-oriented virtualization environments that are closer to the physical hardware (the "bare metal"), enabling VMs to execute with far less operating system overhead. Bare-metal virtualization mechanisms can better manage hardware resources and can also best take advantage of the hardware support for virtualization that is built into most 64-bit x86 and PowerPC processors.
Bare-metal virtualization mechanisms use a small operating system, known as a hypervisor, to manage and schedule VMs and associated resources. Bare-metal hypervisors are known as Type 1 hypervisors (see Resources for a link to more general information on hypervisors). The two most popular bare-metal open source virtualization technologies are Kernel Virtual Machine (KVM) and Xen. Although both Xen and KVM have their advantages and devotees, KVM has been growing in popularity and sophistication, to the point that it is now the default virtualization mechanism recommended for use with most Linux® distributions.
Comparing KVM and Xen
The Xen virtualization environment (see Resources for a link) has traditionally provided the highest-performance open source virtualization technology on Linux systems. Xen uses a hypervisor to manage VMs and associated resources and also supports paravirtualization, which can provide higher performance in VMs that are "aware" that they are virtualized. Xen provides an open source hypervisor that is dedicated to resource and virtual management and scheduling. When booted on the bare-metal physical hardware, the Xen hypervisor starts a primary VM, known as Domain0 or the management domain, that provides central VM management capabilities for all other VMs (known as Domain1 through DomainN, or simply as Xen guests) that are running on that physical host.
Unlike Xen, KVM virtualization uses the Linux kernel as its hypervisor. Support for KVM virtualization has been a default part of the mainline Linux kernel since the 2.6.20 release. Using the Linux kernel as a hypervisor is a primary point of criticism regarding KVM, because (by default) the Linux kernel does not meet the traditional definition of a Type 1 hypervisor—"a small operating system." Although this is true of the default kernels delivered with most Linux distributions, the Linux kernel can easily be configured to reduce its compiled size so that it delivers only the capabilities and drivers required for it to operate as a Type 1 hypervisor. Red Hat's own Enterprise Virtualization offering relies on just such a specially configured, relatively lightweight Linux kernel. However, more significantly, "small" is a relative term, and today's 64-bit servers with many gigabytes of memory can easily afford the few megabytes that modern Linux kernels require.
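Because KVM depends on both hardware virtualization extensions and the kvm kernel modules, you can check whether a given host is ready for it with a few standard commands. This is a sketch for an x86 system; the module names assume an Intel or AMD processor:

```shell
# Count CPU flags indicating hardware virtualization support:
# vmx = Intel VT-x, svm = AMD-V. A nonzero count means the CPU supports it.
grep -Ec '(vmx|svm)' /proc/cpuinfo

# Check that the KVM kernel modules are loaded (kvm plus kvm_intel or kvm_amd)
lsmod | grep kvm

# The /dev/kvm device node exists when the hypervisor is usable
ls -l /dev/kvm
```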
Several reasons exist for the ascendancy of KVM over Xen as the open source bare-metal virtualization technology of choice for most enterprise environments:
- KVM support has been automatically present in every Linux kernel since the 2.6.20 release. Prior to Linux kernel release 3.0, integrating Xen support into the Linux kernel required applying substantial patches, which still did not guarantee that every driver for every possible hardware device would work correctly in the Xen environment.
- The kernel source code patches required for Xen support were only provided for specific kernel versions, which prevented Xen virtualization environments from taking advantage of new drivers, subsystems, and kernel fixes and enhancements that were only available in other kernel versions. KVM's integration into the Linux kernel enabled it to automatically take advantage of any improvements in new Linux kernel versions.
- Xen requires that a specially configured Linux kernel be running on the physical VM server to serve as the administrative domain for all Xen VMs that are running on that server. KVM can use the same kernel on the physical server as is used in Linux VMs that are running on that physical system.
- Xen's hypervisor is a separate, stand-alone piece of source code, with its own potential defects that are independent of defects in the operating systems that it hosts. Because KVM is an integrated part of the Linux kernel, only kernel defects can affect its use as a hypervisor.
Although Xen can still offer higher-performance bare-metal virtualization than KVM, the value of those performance improvements is often outweighed by the simplicity and ease of use of KVM virtualization.
Common administrative tools for KVM and Xen
As a more mature bare-metal virtualization technology than KVM, Xen provides its own
set of specialized administrative commands, most notably the xm
command-line suite. Like any technology-specific set of administrative commands, the
xm tools have their own learning curve and are not known
to all Linux systems administrators. KVM inherited much of its initial administrative
infrastructure from QEMU, a well-established Linux emulation and virtualization
package, which has an equivalent learning curve and is also somewhat specialized.
Although it's natural for any unique technology to have its own command set, the
increasing number of virtualization technologies led Linux vendors to begin looking
for one administrative interface to rule them all. Red Hat did not become the first
billion-dollar open source company for no reason, and it has led the effort to develop the
libvirt virtualization application programming interface
(API) to support the development of tools that can be used to manage and administer
multiple virtualization technologies. The libvirt API
supports such virtualization technologies as KVM, Xen, LXC containers, OpenVZ,
User-Mode Linux, VirtualBox, Microsoft® Hyper-V®, and several VMware products.
Rather than betting on a single technology and associated command set, focusing on
libvirt enables systems administrators to learn a single
set of command-line and graphical tools that depend on that API and to continue
using those tools regardless of changes in underlying virtualization technologies.
Similarly, virtualization tools vendors can reap the same benefits by using the
libvirt API directly.
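One practical consequence of the libvirt approach is that the same commands work against different hypervisors, with only the connection URI changing. A sketch (the remote hostname is hypothetical, and the exact Xen URI form can vary by libvirt version):

```shell
# Local KVM/QEMU hypervisor
virsh -c qemu:///system list --all

# Local Xen hypervisor: same command, same output format
virsh -c xen:/// list --all

# Remote KVM host, tunneled over SSH
virsh -c qemu+ssh://root@vmhost.example.com/system list --all
```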
The next few sections describe common ways in which libvirt-based
tools simplify administrative tasks for KVM-based virtualization sites. These
sections focus on command-line examples using the virsh and
virt-install commands, although all of these tasks
can be performed in the graphical Virtual Machine Manager
(virt-manager), as well. All of these commands must be
executed as the root user (or via the sudo command).
The examples in the rest of this article assume that you have installed the appropriate packages for your Linux distribution to support KVM virtualization and provide the necessary tools. The required packages differ based on your Linux platform. For example, on Red Hat Enterprise Linux (RHEL) systems (or RHEL clones), you will need to have installed the Virtualization, Virtualization Client, Virtualization Platform, and Virtualization Tools package groups.
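On a RHEL-family system, installing those groups is a single yum invocation. The group names below are the ones mentioned above for RHEL 6 and may differ across releases and distributions:

```shell
# Install the KVM virtualization package groups (RHEL 6 group names;
# adjust for your distribution and release).
yum -y groupinstall "Virtualization" "Virtualization Client" \
    "Virtualization Platform" "Virtualization Tools"
```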
Using storage pools
The developerWorks article on creating a simple KVM VM (see Resources for a link) explains how to install a VM using a disk image that you create in disk storage that is local to the server that supports that VM. Manually creating each local file that serves as a disk image is a common way of initially experimenting with VMs but quickly becomes time-consuming, tedious, and difficult to manage.
Commands based on libvirt provide a convenient abstraction for the location
of VM images and file systems, known as storage pools. A
storage pool is a local directory, local storage device (physical
disk, logical volume, or SCSI host bus adapter [HBA] storage), network file
system (NFS), or block-level networked storage that libvirt
manages and in which you can create and store one or more VM images. Local
storage is simple but can be inflexible and doesn't support the most critical
requirement for enterprise virtualization: the ability to migrate VMs from one
server to another while the VMs are running, which is known as live
migration. To easily support live migration, the VM disk image should
be located in an NFS, block-level networked storage, or in HBA storage that is
available from multiple VM hosts.
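Once a VM's disk image lives on shared storage visible to both hosts, the move itself is a single command. A sketch with a hypothetical VM name and destination host:

```shell
# Live-migrate the running VM "RHEL-6.3-LAMP" to another KVM host.
# Both hosts must be able to reach the shared storage pool holding its disk.
virsh migrate --live RHEL-6.3-LAMP qemu+ssh://root@vmhost2.example.com/system
```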
The example in this section uses the
virsh command, which is a
libvirt-based command suite that provides
individual subcommands for creating and managing all of the objects that
libvirt uses—VMs (domains), storage volumes,
storage pools, networks, network interfaces, devices, and so on. By default,
libvirt-based commands use the directory
/var/lib/libvirt/images on a virtualization host as an initial directory-based
storage pool. You can easily create a new storage pool by using the
virsh pool-create-as command. For example, the
following command shows the mandatory parameters that you must specify when
creating an NFS-based (netfs) storage pool:

virsh pool-create-as NFS-POOL netfs \
    --source-host 192.168.6.238 \
    --source-path /DATA/POOL \
    --target /var/lib/libvirt/images/NFS-POOL
The first argument (NFS-POOL) identifies the name of the
new storage pool, while the second argument identifies the type of storage pool
that you are creating. The argument to the --source-host
option identifies the host that is exporting the storage pool directory via NFS. The
argument to the
--source-path option specifies the
name of the exported directory on that host. The argument to the
--target option identifies the local mount point that
will be used to access the storage pool.
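For comparison, the simplest pool type is a plain local directory (type dir), which needs only a target path. The path below is hypothetical:

```shell
# Create (and start) a directory-backed storage pool in a local directory.
mkdir -p /DATA/DIR-POOL
virsh pool-create-as DIR-POOL dir --target /DATA/DIR-POOL
```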
After you have created a new storage pool, it will be listed in the output of the
virsh pool-list command. The following example shows
the default storage pool and the
NFS-POOL pool that was
created in the previous example:
virsh pool-list --all --details
Name      State    Autostart  Persistent   Capacity   Allocation  Available
----------------------------------------------------------------------------
default   running  yes        yes          54.89 GB   47.38 GB      7.51 GB
NFS-POOL  running  no         no          915.42 GB  522.64 GB    392.78 GB
In this example output, note that the new storage pool is marked as not being autostarted,
meaning that it will not automatically be available for use after a system restart, and
that it is also not persistent, which means that it will not be defined at all after a
system restart. Storage pools are only persistent if they are backed by an XML description
of the storage pool, located in the directory /etc/libvirt/storage. XML storage pool
description files have the same name as the storage pool with which they are associated
and have the .xml file extension.
To create an XML description file for a manually defined storage pool, use the
virsh pool-dumpxml command, specifying the name of the
pool that you want to dump an XML description of as a final argument. This command
writes to standard output, so you'll need to redirect its output into the appropriate
file. For example, the following commands would create the right XML description
file for the NFS-POOL storage pool that was created in the previous example:

cd /etc/libvirt/storage
virsh pool-dumpxml NFS-POOL > NFS-POOL.xml
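The dumped file is a short XML document. For the NFS pool above, it looks roughly like the following; the exact elements, any UUID, and the format type vary, so treat this as an illustrative sketch rather than exact output:

```xml
<pool type='netfs'>
  <name>NFS-POOL</name>
  <source>
    <host name='192.168.6.238'/>
    <dir path='/DATA/POOL'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/NFS-POOL</path>
  </target>
</pool>
```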
Even after making a storage pool persistent, a pool will not be marked to start
automatically when the virtualization host is restarted. You can use the
virsh pool-autostart command followed by the name of
a storage pool to set a storage pool to autostart, as shown in the following example:
virsh pool-autostart NFS-POOL
Pool NFS-POOL marked as autostarted
Marking a storage pool as autostarted means that the storage pool will be available whenever the virtualization host is restarted. Technically, it means that the /etc/libvirt/storage/autostart directory contains a symbolic link to the XML description of that storage pool.
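You can confirm the autostart marker directly. This sketch assumes the NFS-POOL pool from the earlier examples:

```shell
# The autostart flag is implemented as a symbolic link in the autostart
# directory pointing at the pool's XML description.
ls -l /etc/libvirt/storage/autostart/NFS-POOL.xml
```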
After you have created a storage pool, you can create one or more VMs in that pool, as discussed in the next section.
Creating a VM
The example in this section leverages the storage pool that you created in the previous
section but uses the
virt-install command, which is a
libvirt-based command that, as the name suggests, is
designed to help create VMs from the command line.
The following example
virt-install command creates a
KVM VM named RHEL-6.3-LAMP, whose name indicates that this VM is
running RHEL 6.3 and is being used as a standard Linux web server. By default, the
name of your VM is used when creating new disk pool volumes, so you should choose
this name carefully. VM names usually follow a local naming convention and should
be designed to make it easy for your fellow administrators to identify the type and
purpose of each VM.
virt-install --name RHEL-6.3-LAMP \
    --os-type=linux \
    --os-variant=rhel6 \
    --cdrom /mnt/ISO/rhel63-server-x86_64.iso \
    --graphics vnc \
    --disk pool=NFS-POOL,format=raw,size=20 \
    --ram 2048 \
    --vcpus=2 \
    --network bridge=br0 \
    --hvm \
    --virt-type=kvm
Other options to the
virt-install command indicate that this
VM will be optimized for the Linux and RHEL6 Linux distributions (--os-type and
--os-variant, respectively) and will be installed using
the ISO image /mnt/ISO/rhel63-server-x86_64.iso as a virtual CD-ROM device
(--cdrom). When booting from the virtual CD-ROM drive, the
virt-install command creates and attempts to display
a graphical console (--graphics) using the Virtual Network
Computing (VNC) protocol, in which the boot and subsequent installation processes
are executed. How you connect to this console depends on how you are connected
to the virtualization server, whether it has graphical capabilities, and so on, and is
therefore outside the scope of this article.
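That said, two common approaches when you do have graphical access are asking virsh which VNC display the console uses, or connecting with the virt-viewer client. The hostname below is hypothetical:

```shell
# Report which VNC display the VM's console is bound to (for example, :0).
virsh vncdisplay RHEL-6.3-LAMP

# VNC display :N corresponds to TCP port 5900+N on the virtualization host.
d=":0"                      # substitute the value printed above
echo $((5900 + ${d#:}))

# Alternatively, virt-viewer connects to the console by VM name,
# tunneling over SSH to the virtualization host.
virt-viewer --connect qemu+ssh://root@vmhost.example.com/system RHEL-6.3-LAMP
```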
The arguments to the
--disk option specify that the VM will
be created in 20GB of storage that is automatically allocated from the storage pool
NFS-POOL, which was created in the previous section.
The disk image for this VM will be created in the raw
image format, which is a simple disk image format that is highly portable across
most virtualization and emulation technologies. (See the link to the
libvirt Storage Management page in
Resources for information about other supported image formats.)
Moving through the other arguments to the virt-install
command, the new VM is initially configured with 2GB of memory
(--ram) and two virtual CPUs (--vcpus),
and it accesses the network through the network bridge br0
(--network). See the link to the developerWorks article,
"Creating a simple KVM virtual machine," in Resources for
information about creating a network bridge.
The last two options to the
virt-install command optimize
the VM for use as a fully virtualized system (--hvm) and
denote that KVM is the underlying hypervisor (--virt-type=kvm)
that will support the new VM. Both of these enable certain optimizations during the
creation and operating system installation process and are actually the default values
if these options are not specified. Explicitly specifying these options is good practice
if you are keeping command logs for your VM installations, because doing so preserves
information in your log about the virtualization environment for each VM.
You could use a similar command to create a VM that runs another operating system by
using an appropriate name for the VM and changing the arguments to the
--os-type and --os-variant options to match.
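As a hedged sketch, here is the earlier command adapted for a Debian guest. The VM name and ISO path are hypothetical, and valid --os-variant values depend on your virt-install version (check its man page):

```shell
virt-install --name Debian-6.0-LAMP \
    --os-type=linux \
    --os-variant=debiansqueeze \
    --cdrom /mnt/ISO/debian-6.0.5-amd64-DVD-1.iso \
    --graphics vnc \
    --disk pool=NFS-POOL,format=raw,size=20 \
    --ram 2048 \
    --vcpus=2 \
    --network bridge=br0 \
    --hvm \
    --virt-type=kvm
```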
Conclusion
Linux-based open source virtualization technologies are continually under development.
The ease of use and ongoing development of KVM have helped it displace the
potentially more powerful Xen virtualization technology as the standard for open
source Linux virtualization. Regardless of the virtualization technology you choose,
this evolution highlights the value of using standard, technology-independent
administration commands such as those that the libvirt
virtualization API provides.
This article provided examples of how to use libvirt-based
commands to simplify allocating storage for VMs and installing them in that storage,
but it just scratched the surface of the many powerful administrative capabilities that the
libvirt API and the freely available commands based on it provide.
Resources
- Virtual Machine Manager is the central site for information about VMM and related hypervisor-agnostic tools such as virt-install and virt-viewer.
- Hypervisors, virtualization, and the cloud: Dive into the KVM hypervisor (Bhanu P Tholeti, developerWorks, September 2011) provides a good introduction to KVM and its underlying architecture.
- Create a KVM-based virtual server (Da Shuang He, developerWorks, January 2010) provides a step-by-step guide to creating a simple virtual server in local disk storage and also explains how to create a bridged network to simplify inbound and outbound access to that VM.
- Track KVM guests with libvirt and the Linux audit subsystem (Marcelo H. Cerri, developerWorks, June 2012) provides a great technique for using the existing Linux audit mechanism to track and monitor KVM-based VMs.
- Manage resources on overcommitted KVM hosts (Adam Litke, developerWorks, February 2011) provides a detailed discussion of maximizing VM workloads by overcommitting physical resources across VMs.
- Check out the libvirt Storage Management page for more information about supported image formats.
Get products and technologies
- The libvirt Virtualization API website provides detailed information about the API, the virtualization abstractions that it supports, and the XML format that it uses.
- The KVM home page provides detailed information about KVM and links to a tremendous variety of information sources and related sites.
- The Xen home page provides general information about the Xen hypervisor and associated technologies.
- The VirtualBox home page provides links to documentation and downloads for the latest versions of the VirtualBox software for home and small enterprise-level virtualization.