KVM (Kernel-based Virtual Machine) is a technically excellent hypervisor across the board. Its performance, scalability, efficiency, device support, ability to run different types of guests, and hardware support are all first-rate – and it is integrated directly into Linux. At this point, upstream KVM development is focused on two fronts: first, exploiting new hardware (no longer just an x86 proposition), and second, making KVM easier to use and smart enough to take care of itself, so that it requires less attention from the user to get the best performance. Here are five important upstream KVM features coming in future enterprise distributions that will help make that happen:
- VFIO – Virtual Function Input/Output: This is a Linux technology that makes it easier for users and vendors to provide native device support in KVM guests, which is important for performance reasons. (A host-side sketch of handing a device to VFIO follows this list.)
- virtio – dataplane: This is a new block I/O (input/output) infrastructure that lets a KVM guest do block I/O directly against the block device support in the host Linux kernel. It is the technology behind the IOPS benchmarks we published with Red Hat in the spring of 2013, where guest block I/O performance was about 30% better than any other hypervisor had been able to achieve. (A sketch of the QEMU 1.4-era flags appears after this list.)
- ivshmem – Nahanni shared memory transport: This is a shared memory device and host kernel driver, split between the host Linux kernel and the QEMU (Quick EMUlator) virtual machine environment. It gives guests a number of ways to use fast host memory as a communications medium for messaging. You can take different servers, consolidate them on a single KVM host, and use the shared memory as a transport for HPC applications in place of a high-performance network; the resulting application performance is at least as good, if not better, because the applications exchange messages over in-host memory instead of an inter-host network. (A guest-side mapping sketch follows this list.)
- RDMA – Remote Direct Memory Access: It is easier than ever to access RDMA functionality from within a KVM guest, partly thanks to the VFIO infrastructure. In combination with that, a memory transport is being worked on upstream in QEMU to do live guest migration over RDMA. This is a big feature for high-performance database managers, and it will pave the way for more high-performance database applications that do a lot of block or page-oriented I/O over RDMA devices. (See the migration sketch after this list.)
- Gluster FS – Integration, new translators: Gluster FS provides a general framework for clustered file system and block I/O infrastructure, but the actual work is done by “translators.” If you want a specific type of clustered file system, you can write a Gluster translator for it. There are two developments here. First, Gluster FS is now integrated into QEMU 1.4 and later versions, which means you have access to it automatically (see the drive-configuration sketch below). Second, new translators give you specific features with specific types of storage devices. Together these mean we have an integrated file system for KVM that works well with block devices, which has been a missing feature from KVM for quite a while. (We have had all the separate shared file system features, but they haven’t been integrated.)
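To make the VFIO item concrete, here is a minimal host-side sketch of handing a PCI device to the vfio-pci driver so a guest can use it natively. The PCI address and vendor/device IDs are hypothetical placeholders; substitute your own, and note this must run as root.

```python
# Minimal sketch: rebind a host PCI device to vfio-pci for passthrough
# to a KVM guest. The address and IDs below are hypothetical examples.
PCI_ADDR = "0000:06:00.0"      # example PCI address; find yours with lspci
VENDOR_DEVICE = "8086 10ca"    # example vendor/device ID pair

def write_sysfs(path: str, value: str) -> None:
    """Write one value to a sysfs attribute (requires root)."""
    with open(path, "w") as f:
        f.write(value)

# Detach the device from whatever host driver currently owns it.
write_sysfs(f"/sys/bus/pci/devices/{PCI_ADDR}/driver/unbind", PCI_ADDR)

# Tell vfio-pci to claim devices with this vendor/device ID.
write_sysfs("/sys/bus/pci/drivers/vfio-pci/new_id", VENDOR_DEVICE)
```

QEMU can then attach the device to the guest with `-device vfio-pci,host=06:00.0`.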
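For the virtio dataplane item, the feature was exposed in the QEMU 1.4 era as an experimental per-device property, `x-data-plane=on`, on virtio-blk. Below is a sketch of launching a guest with it; the image path is a placeholder, and the flags reflect that era's experimental syntax.

```python
import subprocess

# Sketch: boot a KVM guest whose virtio-blk device uses the experimental
# dataplane thread (QEMU 1.4-era syntax). The image path is a placeholder.
cmd = [
    "qemu-system-x86_64", "-machine", "accel=kvm", "-m", "2048",
    # Open the backing image with native AIO and no host page cache,
    # which the dataplane path expects.
    "-drive", "if=none,id=drive0,file=/path/to/guest.img,"
              "format=raw,cache=none,aio=native",
    # Attach it as virtio-blk with the dataplane enabled.
    "-device", "virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on",
]
subprocess.run(cmd, check=True)
```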
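For ivshmem, a guest started with something like `-device ivshmem,shm=nahanni,size=16` (1.x-era QEMU syntax, backed by /dev/shm/nahanni on the host) sees a PCI device whose BAR 2 is the shared region. One way to reach it from guest userspace, sketched below, is to mmap the BAR's sysfs resource file; the PCI address, region size, and toy mailbox are all hypothetical.

```python
import mmap

# Sketch: map the ivshmem shared-memory BAR (resource2) from guest
# userspace. The PCI address is hypothetical; locate the ivshmem
# device with lspci inside your guest. Requires root.
BAR2 = "/sys/bus/pci/devices/0000:00:04.0/resource2"
SIZE = 16 * 1024 * 1024        # must match the size given to QEMU

with open(BAR2, "r+b") as f:
    shm = mmap.mmap(f.fileno(), SIZE)

# Guests sharing the region exchange messages with ordinary memory
# reads and writes -- here, a toy five-byte mailbox at offset 0.
shm[0:5] = b"hello"
print(bytes(shm[0:5]))
shm.close()
```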
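For the RDMA migration work, the guest is migrated the same way as over TCP, just with an RDMA URI handed to QEMU's migrate command (merged upstream first under the experimental `x-rdma:` scheme). Below is a sketch that issues the command over QMP; the monitor socket path and destination address are placeholders, and it assumes QEMU was started with `-qmp unix:/tmp/qmp.sock,server,nowait`.

```python
import json
import socket

# Sketch: tell a running QEMU to live-migrate its guest over RDMA via
# the QMP monitor. Socket path and destination are placeholders.
QMP_SOCK = "/tmp/qmp.sock"
DEST_URI = "rdma:192.168.1.2:4444"   # destination host's RDMA address

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(QMP_SOCK)
s.recv(4096)                                    # consume the QMP greeting
s.sendall(b'{"execute": "qmp_capabilities"}')   # enter command mode
s.recv(4096)
s.sendall(json.dumps({"execute": "migrate",
                      "arguments": {"uri": DEST_URI}}).encode())
print(s.recv(4096).decode())                    # {"return": {}} on success
s.close()
```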
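Finally, for the Gluster FS integration, QEMU 1.4 and later can open a guest image straight off a Gluster volume through the built-in gluster block driver, with no FUSE mount in between: the image is simply named by a gluster:// URI on the -drive line. Server, volume, and image names below are placeholders.

```python
import subprocess

# Sketch: boot a guest whose disk lives on a GlusterFS volume, accessed
# directly through QEMU's gluster block driver (QEMU 1.4 and later).
cmd = [
    "qemu-system-x86_64", "-machine", "accel=kvm", "-m", "2048",
    "-drive", "file=gluster://gluster-server/volname/images/guest.qcow2,"
              "if=virtio,format=qcow2",
]
subprocess.run(cmd, check=True)
```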
You can expect these new capabilities to be introduced into Enterprise Linux distributions sometime around the end of 2013 since it generally takes about 6 months for an upstream feature to get into an enterprise distribution. These are high-end features that go well beyond what can be done now with commercial hypervisors. And, importantly, they will be easy to use.
Mike Day - IBM Distinguished Engineer and Chief Virtualization Architect, Open Systems