vCPU hotplug and hotunplug using libvirt v2

Using the hotplug operation, resources such as processors and memory can be dynamically added to or removed from a guest operating system.

This article describes how to perform a virtual processor (vCPU) hotplug or hotunplug operation using libvirt version 2 in a PPC64LE environment.

Prerequisites

The following packages need to be installed on the operating system (OS) at the indicated minimum versions. A quick way to verify the installed versions is shown after the lists.

On the host OS:

  • QEMU version 2.7 or later
  • libvirt version 2.0.0 or later

On the guest OS:

  • powerpc-utils version 1.2.26 or later
  • ppc64_diag version 2.6.8 or later
  • librtas version 1.3.9 or later
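
For example, you can confirm the installed versions with the following commands. The binary and package names shown here are typical of RPM-based PowerPC distributions and may differ on your system.

On the host OS:

# virsh --version
# qemu-system-ppc64 --version

On the guest OS:

# rpm -qa | grep -E 'powerpc-utils|ppc64-diag|librtas'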

Guest XML changes

Modify the guest XML using the following command:

virsh edit <domainname>

Then, add the following <vcpu> element:

<vcpu placement='static' current='4'>8</vcpu>

For more details about the vCPU XML tags, refer to the CPU allocation section of the libvirt domain XML format documentation.

With the vCPU XML tags stated above, the maximum number of vCPUs is defined as 8. The guest operating system boots with four vCPUs and can add up to four more dynamically using the hotplug operation.
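
For context, the <vcpu> element sits directly under the <domain> element in the guest XML, alongside elements such as <name> and <memory>. The snippet below is a minimal illustration; the domain name and memory size are placeholders.

<domain type='kvm'>
  <name>f24</name>
  <memory unit='KiB'>4194304</memory>
  <vcpu placement='static' current='4'>8</vcpu>
  ...
</domain>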

After adding the vCPU XML tags, start your guest OS and make sure that the rtas_errd service is running in the guest.

Check the status of the rtas_errd service using the following command:

# systemctl status rtas_errd

If the rtas_errd service has not started or is not running, then start the service using the following command:

# systemctl start rtas_errd
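
To have the service start automatically on every boot of the guest (assuming a systemd-based guest OS), you can also enable it:

# systemctl enable rtas_errd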

Hotplug vCPU

Before performing the hotplug operation, check the number of vCPUs that are online in the guest OS using the following command:

# lscpu

Example output:

# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    4
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Model:                 2.1 (pvr 004b 0201)
Model name:            POWER8E (raw), altivec supported
L1d cache:             64K
L1i cache:             32K
NUMA node0 CPU(s):     0-3

The On-line CPU(s) list field in the lscpu output shows the vCPUs that are currently online; in this example, four vCPUs (0-3) are online.
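
You can also check the vCPU counts from the host side with the virsh vcpucount command, which reports the maximum and current counts for both the persistent configuration and the running guest. For the example guest above, the output should look similar to the following:

# virsh vcpucount <domain name>
maximum      config         8
maximum      live           8
current      config         4
current      live           4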

Add vCPUs by running the following hotplug command on the host OS.

      # virsh setvcpus <domain name> n --live

Where:

  • domain name – the name of the guest
  • n – the total number of vCPUs the guest should have after the operation (not the number being added or removed)
  • --live – applies the change to the running guest

Example:

# virsh setvcpus f24 8 --live

Check the number of vCPUs in the guest OS using the lscpu command.

# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    4
Core(s) per socket:    1
Socket(s):             2
NUMA node(s):          1
Model:                 2.1 (pvr 004b 0201)
Model name:            POWER8E (raw), altivec supported
L1d cache:             64K
L1i cache:             32K
NUMA node0 CPU(s):     0-7

After running the virsh setvcpus command, four vCPUs were added to the guest OS and the total number of online vCPUs increased to 8.

Hotunplug vCPU

You can perform the vCPU hotunplug operation with the same virsh setvcpus command by specifying a lower target vCPU count.

Example:

# virsh setvcpus f24 4 --live

The above command removes four vCPUs from the guest OS.
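
You can verify the result in the same way as for the hotplug case, for example by checking the online CPU list in the guest:

# lscpu | grep 'On-line'
On-line CPU(s) list:   0-3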

Effect of start or stop events of guest OS

Stopping and starting the libvirtd service will not affect the dynamically added or removed vCPUs.

Rebooting the guest OS retains vCPU changes that were made dynamically. Shutting down and restarting the guest OS discards them; the guest then boots with the vCPU count defined in the persistent domain XML.
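
If you want a hotplug or hotunplug change to survive a guest shutdown, you can combine the --live option with --config so that the persistent domain XML (the current attribute of the <vcpu> element) is updated as well. For example:

# virsh setvcpus f24 8 --live --config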

vCPU pinning

Processor affinity, or CPU pinning, binds a process or a thread to a CPU or a range of CPUs, so that it runs only on the pinned CPUs instead of on any CPU.

vCPU pinning, including for hotplugged vCPUs, can be done using the virsh vcpupin command:

      # virsh vcpupin --domain <domain name/uuid> --vcpu <number> --cpulist <string>

For example:

# virsh vcpupin --domain f24 --vcpu 7 --cpulist 8,16 --live

To see whether the above command ran successfully, use the following commands:

      # virsh dumpxml <domain name> | grep vcpu
      …
      <vcpupin vcpu='7' cpuset='8,16'/>
      …
      # virsh vcpuinfo <domain name> | grep -A 2 "VCPU: *7"
      VCPU:           7
      CPU:            8
      State:          running

vCPU pinning can be done on hotplugged vCPUs or on offline vCPUs.
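
Pinning can also be recorded in the persistent domain XML so that it is applied every time the guest starts. A minimal sketch of the corresponding <cputune> section, matching the example above, is shown below (the host CPU numbers are placeholders):

<cputune>
  <vcpupin vcpu='7' cpuset='8,16'/>
</cputune>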
