OpenStack guest CPU topology configuration: Part two

In my previous post, I introduced the guest CPU topology configuration feature developed for the Juno release of OpenStack. As a reminder, the specification for this feature can be read here.

This feature allows administrators and users to specify the CPU topology configured for an OpenStack virtual machine (VM). Initially it targets the libvirt/KVM driver, but other OpenStack-supported hypervisors could adopt it over time.
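Under the hood, these settings are translated into the guest's libvirt domain XML. As a sketch of libvirt's topology syntax (not output captured from one of these guests), a one-socket, one-core, four-thread layout would be expressed as:

```xml
<cpu>
  <!-- sockets x cores x threads must equal the flavor's vCPU count -->
  <topology sockets='1' cores='1' threads='4'/>
</cpu>
```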

I’ve backported these changes to Icehouse, as that is what our OpenStack PowerKVM continuous integration (CI) testing infrastructure is built around. There is a desire to take advantage of this feature in IBM PowerKVM CI to improve our hardware utilization. These backported changes are available on my GitHub, but use them at your own risk. As Juno is imminent (October 16, 2014 target date), it is preferable to simply wait for the official release to try out this feature yourself.

A few examples

Let’s start by attempting to create a VM with four threads. First, let’s create a four-vCPU flavor named “test.vcpu4” to do our experimentation.

[root@p8-dev ~(keystone_admin)]# nova flavor-create test.vcpu4 110 8192 80 4
+-----+------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID  | Name       | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+------------+-----------+------+-----------+------+-------+-------------+-----------+
| 110 | test.vcpu4 | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
+-----+------------+-----------+------+-----------+------+-------+-------------+-----------+

Next, let’s specify that we prefer four threads for this new flavor by configuring the ‘hw:cpu_threads’ option on our new test.vcpu4 flavor.

# nova flavor-key test.vcpu4 set hw:cpu_threads=4
# nova flavor-show test.vcpu4
+----------------------------+---------------------------+
| Property                   | Value                     |
+----------------------------+---------------------------+
| name                       | test.vcpu4                |
| ram                        | 8192                      |
| OS-FLV-DISABLED:disabled   | False                     |
| vcpus                      | 4                         |
| extra_specs                | {u'hw:cpu_threads': u'4'} |
| swap                       |                           |
| os-flavor-access:is_public | True                      |
| rxtx_factor                | 1.0                       |
| OS-FLV-EXT-DATA:ephemeral  | 0                         |
| disk                       | 80                        |
| id                         | 102                       |
+----------------------------+---------------------------+

Let’s boot a Fedora 20 Linux image with this new flavor.

# nova boot --image jgrimm.f20 --flavor test.vcpu4 jgrimm-test

And let’s verify that the CPU topology is now one socket with one core of four threads.

[fedora@jgrimm-test ~]$ lscpu
Architecture:          ppc64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Big Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    4
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Model:                 IBM pSeries (emulated by qemu)
L1d cache:             64K
L1i cache:             32K
NUMA node0 CPU(s):     0-3

As a second example, let’s override the flavor behavior and specify that we’d prefer two threads for our test image “jgrimm.f20.”

# nova image-meta jgrimm.f20 set hw_cpu_threads=2
# nova image-show jgrimm.f20
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| status                      | ACTIVE                               |
| metadata extra_args         | console=hvc0 console=tty0            |
| updated                     | 2014-09-08T17:55:05Z                 |
| metadata arch               | ppc64                                |
| name                        | jgrimm.f20                           |
| created                     | 2014-09-08T16:12:18Z                 |
| minDisk                     | 0                                    |
| metadata hw_cpu_threads     | 2                                    |
| metadata hypervisor_type    | kvm                                  |
| progress                    | 100                                  |
| minRam                      | 0                                    |
| OS-EXT-IMG-SIZE:size        | 2350710784                           |
| id                          | bd43c8cb-0766-4a7c-a086-c96bc1c55ac2 |
+-----------------------------+--------------------------------------+

# nova boot --flavor test.vcpu4 --image jgrimm.f20 jgrimm-test2

The resulting “lscpu” output from our new guest is:

[root@jgrimm-test2 ~]# lscpu
Architecture:          ppc64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Big Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    2
Core(s) per socket:    1
Socket(s):             2
NUMA node(s):          1
Model:                 IBM pSeries (emulated by qemu)
L1d cache:             64K
L1i cache:             32K
NUMA node0 CPU(s):     0-3

Notice that in this example, our four-vCPU request has been satisfied by a topology of two sockets, each with one core of two threads. Why were two sockets chosen over a configuration with two cores? Nova prioritizes sockets over cores, and cores over threads.
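That preference order can be sketched as a small ranking function. This is an illustrative sketch only, not Nova's actual code (the real logic lives in `nova/virt/hardware.py` and also honors maximum-count limits that are omitted here):

```python
def pick_topology(vcpus, threads=None):
    """Pick a (sockets, cores, threads) topology for a vCPU count.

    Sketch of Nova's preference order: among all factorizations of the
    vCPU count, prefer more sockets, then more cores, then more threads.
    If `threads` is given (as hw:cpu_threads does), only candidates with
    exactly that thread count qualify.
    """
    candidates = [
        (s, c, t)
        for s in range(1, vcpus + 1)
        for c in range(1, vcpus + 1)
        for t in range(1, vcpus + 1)
        if s * c * t == vcpus and (threads is None or t == threads)
    ]
    # Tuples compare left to right, so max() prefers sockets first,
    # then cores, then threads.
    return max(candidates)

print(pick_topology(4, threads=2))  # (2, 1, 2): two sockets, one core, two threads
print(pick_topology(4))             # (4, 1, 1): with no constraint, sockets win
```

This reproduces both examples above: with `hw:cpu_threads=4` the only candidate is one socket with one core of four threads, and with `hw_cpu_threads=2` the two-socket layout beats the two-core one.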

This preference behavior can be a bit surprising at times. I’ll work through such a situation in my next post, and will provide some concluding thoughts and references.

(Related: OpenStack guest CPU topology configuration — Part three)
