The performance improvements in upstream KVM development that are likely to make it into the next Linux releases from major distributors will be especially beneficial to enterprise KVM users. IBM's contributions to the KVM hypervisor are consistent with its longstanding commitment to Linux. They are part of a broad strategy to provide customer choice, bring open technology to key segments of the technology market, and ensure that IBM platforms, middleware, and services have the best hypervisor technology available. Learn more about the IBM KVM commitment here.
Here’s a look at three notable performance improvements coming in future enterprise Linux releases.
More Efficient Virtualized Storage - Many of us are anticipating the reworked VirtIO block driver, which improves the performance of paravirtual block storage. Part of the rework moves the hypervisor portion of the block driver into the Linux kernel, an approach that has already proven successful with the paravirtual networking driver.
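As a minimal sketch of what paravirtual block storage looks like from the host side, here is a QEMU invocation that attaches a disk image to a guest through the VirtIO block device (the image path, memory size, and drive ID are placeholders for illustration):

```shell
# Attach a disk image to the guest via the paravirtual VirtIO block
# device rather than an emulated IDE/SCSI controller.
# file= path and -m size are placeholders; adjust for your setup.
qemu-system-x86_64 \
    -m 2048 \
    -drive file=/var/lib/libvirt/images/guest.img,if=none,id=disk0,cache=none \
    -device virtio-blk-pci,drive=disk0
```

The guest sees the disk as a `/dev/vd*` device; the kernel-hosted portion of the driver is transparent to this command line.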
Work also continues on VirtFS, which consists of a kernel component on the host and a corresponding file system in the guest. VirtFS uses the Linux host's file system cache and makes that cache available to the guest. It also allows the guests on a host to share the cache, so that when guests are reading from the same files they get the benefit of that cached performance, while using less memory on the host. The feature has been under development for a while, is nearly complete, and is getting to the point where it will be part of the next enterprise Linux releases.
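A hedged sketch of how a VirtFS share is typically set up, assuming a QEMU host and a guest with 9p support (the directory path and mount tag `hostshare` are illustrative placeholders):

```shell
# Host side: export a host directory to the guest over VirtIO.
# path= and mount_tag= are placeholders for this example.
qemu-system-x86_64 \
    -m 2048 \
    -virtfs local,path=/srv/export,mount_tag=hostshare,security_model=mapped

# Guest side: mount the exported directory using the 9p file system
# over the VirtIO transport.
mount -t 9p -o trans=virtio hostshare /mnt/host
```

Because the host's page cache backs the export, multiple guests mounting the same share can read the same cached file data.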
Improved Memory Access Speed - A second performance improvement is called AutoNUMA (automatic non-uniform memory access). Nearly every computer these days is a multicore computer. Every desktop, laptop, and server is multicore, and even an iPad is a multicore device, which means that many of these systems have non-uniform memory access: they have memory zones where one bank of memory is attached to one processor and another bank is attached to another processor.
When one processor accesses memory attached to another processor, it is usually slower than accessing its own (local) memory. Therefore, when a guest is running on one processor, you want all of that guest's memory to be on the local memory node. AutoNUMA will automatically either migrate the guest to the processor holding most of its memory, or migrate the memory to the processor where the guest is running.
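To see the locality problem AutoNUMA addresses, you can inspect the host's NUMA layout with the standard `numactl` tools (assuming the `numactl` package is installed):

```shell
# Show the machine's NUMA topology: which CPUs belong to each node,
# how much memory is attached to each node, and inter-node distances.
numactl --hardware

# Show per-node allocation statistics, including how often memory
# requests were satisfied from the local node versus a remote one.
numastat
```

A high remote-access count in `numastat` is a sign that processes (or guests) are running far from their memory.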
Today, we manually pin, or manually constrain, the guest and its memory to a specific processor and memory bank, and the performance improvements from doing so are significant. When we run benchmarks we always apply that manual pinning, and it is very effective. AutoNUMA will be even better, though, because you won't need to do anything at all: it will simply tune itself.
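The manual pinning described above can be sketched with libvirt's `virsh` commands. This assumes a two-vCPU guest named `vm1` (a hypothetical name) and that the physical CPUs listed belong to NUMA node 0 on your machine; check `numactl --hardware` first:

```shell
# Pin each virtual CPU of guest "vm1" to a physical CPU on node 0.
# CPU numbers are placeholders; map them to your actual topology.
virsh vcpupin vm1 0 0
virsh vcpupin vm1 1 1

# Constrain the running guest's memory allocations to NUMA node 0
# so vCPUs and memory stay on the same node.
virsh numatune vm1 --nodeset 0 --live
```

AutoNUMA's goal is to make this entire step unnecessary by migrating tasks or memory automatically.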
Better Small Packet Performance - Multi-queue support for VirtIO networking will allow more concurrency and reduced latency in paravirtual networking. It specifically increases performance for small packets, which have historically been a weak point.
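A minimal sketch of enabling multi-queue VirtIO networking, assuming a QEMU host with a multi-queue tap backend and a guest with a multiqueue-capable virtio-net driver (device names and queue counts are illustrative):

```shell
# Host side: create a tap backend with 4 queue pairs and a virtio-net
# device with multiqueue enabled. vectors is typically 2*queues + 2.
qemu-system-x86_64 \
    -netdev tap,id=net0,queues=4,vhost=on \
    -device virtio-net-pci,netdev=net0,mq=on,vectors=10

# Guest side: activate the additional queues on the interface
# (interface name eth0 is a placeholder).
ethtool -L eth0 combined 4
```

With multiple queue pairs, several vCPUs can transmit and receive in parallel, which is where the small-packet gains come from.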
Beyond performance, additional KVM enhancements are also on the way in areas such as security, ease of use, disaster recovery, and high availability.
IBM Distinguished Engineer & Chief Virtualization Architect, Open Systems Development Software Architect