OpenStack guest CPU topology configuration: Part one

One of the new features of the Juno release of OpenStack is the ability to express a desired guest virtual machine (VM) CPU topology for libvirt managed VMs. In this context, CPU topology refers to the number of sockets, cores per socket and threads per core for a given guest VM.

Up until this point, when you requested an n-vCPU VM through OpenStack, the resulting VM would be created with a CPU topology of n sockets, each with one single-threaded core. For example, if I create a four-vCPU guest booted from a Fedora 20 Linux image on my POWER8 test system running Icehouse (this is also the default behavior in Juno), the CPU topology within the created VM is four sockets, each with one single-threaded core.

[root@bare-precise ~]# lscpu
Architecture:          ppc64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Big Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Model:                 IBM pSeries (emulated by qemu)
L1d cache:             64K
L1i cache:             32K
NUMA node0 CPU(s):     0-3

As you can see, the resulting CPU topology for the VM is four sockets, each with one core, and each core with a single thread.

Why is this an issue?

First, some operating systems (I believe certain versions of Windows) are limited to a maximum number of sockets. Because the default topology presents each vCPU as its own socket, an operating system limited to two or fewer sockets restricts you to two-vCPU VMs.

If you could instead specify one socket with four cores per socket and one thread per core, or one socket with two cores per socket and two threads per core, you could create VMs able to use more of the physical compute resources on a system.

Second, on systems that have hardware threads, such as simultaneous multithreading (SMT) on my POWER8 test box, those threads go unused under the default one-thread-per-core topology. It would be more desirable to run a four-vCPU guest on a single core (utilizing four hardware threads) and make the most of the computing power of this system.

Given that my POWER8 system has a CPU topology of four sockets, four cores per socket and eight threads per core, a significant amount of compute resources is left unutilized! Additionally, if I'm able to run a VM on a single core, the VM should be less likely to suffer nonuniform memory access (NUMA) penalties than if its cores were spread across different NUMA nodes.
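
Under the covers, libvirt expresses such a layout with a <topology> element inside the guest's <cpu> definition. As a rough illustration (the element and attributes are standard libvirt domain XML syntax; the values shown reflect the hypothetical single-core, four-thread layout above):

<cpu>
  <topology sockets='1' cores='1' threads='4'/>
</cpu>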

The new virt-driver-vcpu-topology blueprint implemented in the Juno release helps address this situation.

At a high level, this new feature allows you to configure both limits and preferences for sockets, cores and threads at a flavor level as well as at an image level. The extra_specs defined for a flavor are as follows:

hw:cpu_sockets=NN - preferred number of sockets to expose to the guest
hw:cpu_cores=NN - preferred number of cores to expose to the guest
hw:cpu_threads=NN - preferred number of threads to expose to the guest
hw:cpu_max_sockets=NN - maximum number of sockets to expose to the guest
hw:cpu_max_cores=NN - maximum number of cores to expose to the guest
hw:cpu_max_threads=NN - maximum number of threads to expose to the guest
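
For example, flavor extra_specs are set with the standard nova flavor-key command. A minimal sketch, assuming a flavor named m1.ppc.4vcpu (the flavor name is hypothetical), requesting the single-socket, single-core, four-thread layout discussed earlier:

nova flavor-key m1.ppc.4vcpu set hw:cpu_sockets=1 hw:cpu_cores=1 hw:cpu_threads=4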

And the corresponding properties defined for an image are:

hw_cpu_sockets=NN - preferred number of sockets to expose to the guest
hw_cpu_cores=NN - preferred number of cores to expose to the guest
hw_cpu_threads=NN - preferred number of threads to expose to the guest
hw_cpu_max_sockets=NN - maximum number of sockets to expose to the guest
hw_cpu_max_cores=NN - maximum number of cores to expose to the guest
hw_cpu_max_threads=NN - maximum number of threads to expose to the guest
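
Image properties are set through Glance in the usual way. A sketch assuming an image named fedora20 (the image name is hypothetical):

glance image-update --property hw_cpu_sockets=1 --property hw_cpu_threads=4 fedora20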

The envisioned usage is for an administrator to set default options at the flavor level, with a user or administrator further refining or limiting the topology at the image level. The "preferred" value for each option must not exceed the corresponding "maximum" setting.
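
To make the interaction concrete, here is one hypothetical combination, reusing the flavor and image names from the sketches above. The administrator caps the topology on the flavor, and the image expresses a preference within that cap:

nova flavor-key m1.ppc.4vcpu set hw:cpu_max_sockets=1 hw:cpu_max_threads=8
glance image-update --property hw_cpu_threads=4 fedora20

The image's preference of four threads per core is valid because it stays within the flavor's maximum of eight; a value exceeding the cap (say, hw_cpu_threads=16) would cause the boot request to be rejected.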

I hope this introduction has been helpful. In my next post, I'll walk through some examples of using this new feature, based on the testing I've done on my POWER8 PowerKVM system. Please leave a comment below to join the conversation.
