KVM in a PowerVM LPAR

Kernel-based Virtual Machine (KVM) is an additional virtualization option on Power10 systems that run PowerVM. KVM brings the power, speed, and flexibility of the KVM virtualization technology to a PowerVM logical partition (LPAR). An LPAR that runs a KVM-enabled Linux® distribution can host PPC64-LE KVM guests. The KVM guests can use the existing resources that are assigned to the LPAR.

Figure 1. IBM Power Systems stack

Minimum software levels

To enable KVM in a Power10 logical partition, you must meet the following minimum code levels and use one of the supported distributions:

  • Firmware level: FW1060.10
  • HMC Levels: V10 R3 SP1060 or later
  • KVM is enabled in kernel 6.8 and QEMU 8.2 and works with the following Linux distributions:
    • Fedora® 40 with kernel 6.10
    • Ubuntu 24.04
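
You can quickly confirm the kernel and QEMU levels from the Linux LPAR and the HMC level from the HMC. This is a minimal sketch; the QEMU binary name is an example and can vary by distribution:

# on the Linux LPAR: kernel level (6.8 or later is required)
$ uname -r

# QEMU level (8.2 or later is required)
$ qemu-system-ppc64 --version

# on the HMC: HMC level
$ lshmc -V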

Capabilities

Figure 2. Industry Standard Linux Virtualization Stack
KVM in a PowerVM LPAR uses the industry-standard Linux KVM virtualization stack and integrates easily into an existing Linux virtualization ecosystem. KVM in an LPAR is enabled by the following components:
  • IBM Power architecture and Power10: This implementation has advanced virtualization capabilities to run multiple operating system (OS) instances that share the same hardware resources while providing isolation. The Radix MMU architecture provides the capability to independently manage page tables for the LPAR and the KVM guest instances on the LPAR.
  • PowerVM: The industry-leading virtualization stack provides new functions to create and manage KVM guests. These changes extend the Power platform architecture to include new hypervisor interfaces.
  • Linux kernel that includes the KVM kernel module (KVM): Provides core virtualization infrastructure to run multiple virtual machines in a Linux host LPAR. Upstream kernels and enabled downstream distributions such as Fedora and Ubuntu use the newly introduced Power architecture extensions to create and manage KVM guests in the PowerLinux LPAR.
  • QEMU: User space component that implements virtual machines on the host that use KVM functions.
  • LibVirt: Provides a toolkit for virtual machine management.

Requirements

  • Partition must be a Linux partition that runs in Power10 processor compatibility mode.
  • Partition must be enabled for KVM:
    • For HMC-managed systems, you must set the partition to KVM Capable.
    • For unmanaged systems, you must set the default partition environment to Linux KVM on the BMC.
  • Partition must be running in Radix mode (default MMU mode for Linux LPARs).
  • Partition must be assigned dedicated processors, with processor sharing set to Never Allow.
The following features are not supported on KVM logical partitions:
  • Shared processors
  • vPMEM LUNs
  • Platform keystore
  • Live partition migration (LPM)
  • Dynamic platform optimization
  • Dynamic LPAR (DLPAR) add or remove of memory, processors, and I/O
The following KVM features for guests are not supported:
  • PCI pass-through of LPAR-attached PCI devices to KVM guests. This feature will be supported in future releases.
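
From inside the Linux LPAR, you can sanity-check several of the requirements in this section by using standard ppc64le interfaces. This is a minimal sketch; the expected values are shown as comments:

# MMU mode of the LPAR (expected: Radix)
$ grep MMU /proc/cpuinfo

# processor compatibility mode (expected to report POWER10)
$ grep '^cpu' /proc/cpuinfo | head -1

# after the partition is enabled for KVM and a supported kernel is booted,
# the KVM device node is available
$ ls -l /dev/kvm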

Device virtualization

Storage virtualization

In KVM guests, the disks that are available are based on virtio (virtual I/O) by default. The two emulated storage controllers that are commonly used are virtio-blk and virtio-scsi.
  1. Virtio-blk: The virtio-blk device provides the guest with a virtual block device. This controller is older than virtio-scsi. When higher performance is required, virtio-scsi is preferred.
  2. Virtio-SCSI: This device presents a SCSI host bus adapter to the guest virtual machine. In contrast to virtio-blk, SCSI offers more features and flexibility. It is more scalable and can support hundreds of devices. Virtio-scsi uses standard SCSI command sets that simplify feature addition.
Figure 3. Storage virtualization

Both virtio-blk and virtio-scsi expose many tunables, such as the number of IOThreads, physical and logical block sizes, and transient disks. See the libvirt documentation for a complete list of the available tunable options.
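
For reference, the following domain XML sketch shows one disk that is attached through virtio-blk and one that is attached through a virtio-scsi controller. The image paths and target device names are examples only:

    <!-- virtio-blk disk -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/disk-blk.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>

    <!-- virtio-scsi controller and a disk attached to it -->
    <controller type='scsi' model='virtio-scsi'/>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/disk-scsi.qcow2'/>
      <target dev='sda' bus='scsi'/>
    </disk>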

Network virtualization

There are different ways to enable external network connectivity to a KVM guest. The following networking modes can be configured for the guest:

User Networking: This is the default networking configuration (virtio-net) for a KVM guest, where the guest OS accesses network services by using network address translation (NAT). User networking has the following characteristics and use cases:
  1. Uses default network of Libvirt (no additional network configuration is required).
  2. The simplest way to configure and access the internet or resources that are available on the local network.
  3. By default, the guest OS receives an IP address in the 192.168.122.0/24 address space.
  4. The guest is not directly accessible from outside of the host on the network. You can make the guest accessible by configuring port forwarding through iptables (see the example after this list).
  5. Might not support some networking features such as ICMP (pings might not function properly).
  6. Other networking options can provide better performance.
  7. Can be configured with the following extensible markup language (XML) snippet in the guest XML:
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
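
The following port-forwarding sketch makes the guest reachable over SSH in user networking mode. It assumes the guest received the example address 192.168.122.100 on the default NAT network and forwards host port 10022 to the guest port 22; adjust the addresses and ports for your environment:

# on the host: forward TCP port 10022 to the SSH port of the guest
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 10022 -j DNAT --to-destination 192.168.122.100:22
$ sudo iptables -I FORWARD -d 192.168.122.100/32 -p tcp --dport 22 -j ACCEPT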
    

Bridged Networking

Bridged networking (unlike user networking) allows virtual interfaces (virtio) on the guest to connect to the outside network through a physical interface on the host. This makes the guest appear as another host to the rest of the network.

To configure a bridged network for the guest, a bridge must first be configured on the host as a prerequisite. Complete the following steps to create a bridge and assign a secondary device (a physical interface on the host) to it:

  1. Prerequisites:
    1. Make sure that the physical network interface of the host is not managed by NetworkManager. If it is managed, delete the connection profile of that interface first (or use the nmcli-based alternative that is shown after these steps).
    2. Make sure that STP is not enabled on the bridge. When STP is enabled, the bridge sends BPDUs to the switch port that it is connected to, which can result in the switch disabling the port.
  2. Run the following set of commands to create and configure the bridge:
    ip link add <bridge_name> type bridge
    ip link set <phy_iface> master <bridge_name>
    ip link set <phy_iface> up
    ip link set <bridge_name> up
    
  3. After the bridge is configured properly, add the following XML configuration in the domain XML of the guest:
        <interface type='bridge'>
          <source bridge='<bridge_name>'/>
          <model type='virtio'/>
        </interface>
    
Warning: Bridging to VETH network interfaces is not currently supported. These interfaces use a fixed burned-in source MAC address, so any Ethernet packet without the correct source MAC address is dropped and an error is returned to the host OS.
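
If the physical interface is managed by NetworkManager, you can instead create a persistent bridge with nmcli. This is a sketch that assumes br0 as the bridge name and <phy_iface> as the physical interface:

# create the bridge with STP disabled and attach the physical interface to it
$ sudo nmcli connection add type bridge ifname br0 con-name br0 bridge.stp no
$ sudo nmcli connection add type bridge-slave ifname <phy_iface> master br0
$ sudo nmcli connection up br0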

MacVtap Networking

Another alternative to using a bridge to enable a KVM guest to communicate externally is to use MacVtap networking through the Linux MacVtap driver. This interface replaces the combination of the tun/tap interface and bridge drivers with a single module that is based on the macvlan device driver.

A MacVtap endpoint is a character device that follows the tun/tap ioctl interface and can be used directly with a KVM guest. This networking mode makes both the guest and the host appear directly on the switch that the host is connected to. One key difference compared to using a bridge is that MacVtap connects directly to the network interface in the KVM host, which results in a shorter path and better performance.

You can configure MacVtap in three different modes:
  • Virtual Ethernet Port Aggregator (VEPA)
  • Bridge (most commonly used)
  • Private mode
To configure MacVtap, complete the following steps:
  1. Define a network by using the following network XML code (the commands to load and start the network in libvirt follow this list):
    <network>
      <name>macvtapnet</name>
      <forward dev='enP32769p1s0' mode='bridge'>
        <interface dev='enP32769p1s0'/>
      </forward>
    </network>
    
  2. Put the following XML code in the XML of the guest:
        <interface type='network'>
          <source network='macvtapnet'/>
          <model type='virtio'/>
        </interface>
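
The network XML from step 1 must be loaded into libvirt and started before a guest can reference it. Assuming the XML was saved in a file named macvtapnet.xml (the file name is an example), run the following commands:

$ virsh net-define macvtapnet.xml
$ virsh net-start macvtapnet
$ virsh net-autostart macvtapnet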
    

How to set up KVM

Before you begin your setup, see the Minimum software levels and Requirements.

KVM host setup

KVM in a PowerVM LPAR is a capability that allows KVM guests to run inside a logical partition. KVM is available in both MDC mode and PowerVM LPAR mode.

To enable KVM by using the eBMC ASMI GUI, complete the following steps:
  1. Connect to the eBMC ASMI GUI at https://<eBMC IP>.
  2. Navigate to Server power operations.
  3. From the Default partition environment menu, select Linux KVM.
  4. Click Save.
To enable KVM by using the HMC GUI, complete the following steps:
Note: The following operation is only available on a deactivated LPAR. Make sure that the HMC console for the LPAR is disconnected.
  1. Connect to the HMC GUI.
  2. Select the managed system.
  3. Select the LPAR.
  4. Click Advanced Settings.
  5. Select KVM Capable.
To enable or disable KVM by using the HMC command-line interface (CLI), complete the following steps:
  1. Log in through the CLI by running the following command: ssh hscroot@<IP>
  2. Run any of the following required commands:
    • To enable KVM, run the following command: chsyscfg -r lpar -m <Managed System> -i "name=<lparname>,kvm_capable=1"
    • To disable KVM, run the following command: chsyscfg -r lpar -m <Managed System> -i "name=<lparname>,kvm_capable=0"
    • To check LPAR mode, run the following command: lssyscfg -r lpar -m <Managed System> | grep <lparname>

      Sample output

      #lssyscfg -r lpar -m ltcden6 | grep ltcden6-lp1

      name=ltcden6-lp1,lpar_id=1,lpar_env=aixlinux,state=Running,resource_config=1,os=linux,os_version=Unknown,
      logical_serial_num=134C9481,default_profile=default_profile,curr_profile=default_profile,work_group_id=none,
      shared_proc_pool_util_auth=0,allow_perf_collection=0,power_ctrl_lpar_ids=none,boot_mode=norm,lpar_keylock=norm,
      auto_start=0,redundant_err_path_reporting=0,rmc_state=inactive,rmc_ipaddr=,time_ref=0,lpar_avail_priority=127,
      desired_lpar_proc_compat_mode=default,curr_lpar_proc_compat_mode=POWER10,sync_curr_profile=1,affinity_group_id=none,
      vtpm_enabled=0,migr_storage_vios_data_status=Re-collect Failed,migr_storage_vios_data_timestamp=unavailable,
      powervm_mgmt_capable=0,pend_secure_boot=0,curr_secure_boot=0,keystore_kbytes=0,keystore_signed_updates=0,
      keystore_signed_updates_without_verification=0,linux_dynamic_key_secure_boot=0,virtual_serial_num=none,
      kvm_capable=1,description=

Installing KVM Linux distribution

All of the fixes that are required for KVM are available in upstream Linux kernels v6.8 and later. Ubuntu 24.04 and Fedora 40 include the KVM in a PowerVM LPAR enablement.

To install Fedora or Ubuntu in the KVM host LPAR, you can use one of the installation options at:

https://www.ibm.com/docs/en/linux-on-systems?topic=servers-additional-installation-methods

Set up KVM on Fedora 40

Run the following commands:

# install KVM/Qemu libvirt
$ sudo dnf install -y qemu-kvm libvirt

# install virt-install
$ sudo dnf install -y virt-install

# install virt-customize, virt-sysprep, and other tools
$ sudo dnf install -y guestfs-tools
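
On Ubuntu 24.04, you can install the equivalent tools with apt. The package names below are a reasonable assumption and can differ between releases:

# install KVM/QEMU, libvirt, virt-install, and guest image tools on Ubuntu
$ sudo apt install -y qemu-system-ppc libvirt-daemon-system libvirt-clients virtinst guestfs-tools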

Start libvirtd service

Run the following commands:

#start libvirtd
$ sudo systemctl start libvirtd
$ sudo systemctl enable libvirtd

#verify libvirt daemon is working
$ virsh list --all
#empty output will be shown 
 Id   Name   State
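
Optionally, run the libvirt host validation tool to confirm that the host can run KVM guests; the exact checks and output vary by distribution and configuration:

# validate the host virtualization setup for QEMU/KVM
$ virt-host-validate qemu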

Linux KVM guest bring-up

You can use one of the following methods to install and set up a KVM guest:
  1. Create a KVM guest by using an existing cloud (qcow2) image.
    1. Download a cloud qcow2 image from Fedora or other KVM enabled Linux distribution repositories.

      For example, https://pubmirror2.math.uh.edu/fedora-buffet/fedora-secondary/releases/40/Server/ppc64le/images/

    2. Prepare the cloud image and set the root password by running the following command: $ virt-customize --add Fedora-Server-KVM-40.ppc64le.qcow2 --root-password password:passw0rd
    3. Create the guest with required resources. The following command example creates a guest that is named Fedora-40 with 4 GB RAM and 4 virtual CPUs: $ virt-install --name Fedora-40 --ram 4096 --disk path=/home/test/Fedora-Server-KVM-40.ppc64le.qcow2 --vcpus 4 --os-variant generic --network bridge=virbr0 --graphics none --console pty,target_type=serial --import
  2. Create a KVM Guest by using an installation image (ISO).
    1. Download a DVD-based ISO image for IBM Power ppc64le at https://fedoraproject.org/server/download.
    2. Create a disk image.

      The following example uses qcow2 as a format (-f) with 40 GB size: $ qemu-img create -f qcow2 Fedora-40.qcow2 40G

    3. Install the Linux operating system.

      The following example uses the ISO (Fedora-Server-dvd-ppc64le-40.iso) and disk (Fedora-40.qcow2) images that were created in the previous steps: $ virt-install --name Fedora_40 --memory 4096 --vcpus 4 --os-variant fedora40 --network bridge=virbr0 --disk path=/home/test/Fedora-40.qcow2 --graphics none --cdrom /home/test/Fedora-Server-dvd-ppc64le-40.iso

    4. Follow the installation screens to complete the Linux OS installation.

List of common commands

To list the running guests, run the following command:
$ virsh list --all
 Id   Name          State
------------------------------
 1    Fedora_40     running
To connect to the guest console, run the following command:
$ virsh console <guest_id>  
$ virsh console 1
Connected to domain 'Fedora_40'
Escape character is ^] (Ctrl + ])

localhost login: 
To start a guest, run the following command:
$ virsh start <domain-name> --console
To stop a guest, run the following command:
$ virsh shutdown <domain-name>
To delete a guest, run the following command:
$ virsh undefine <domain-name>
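To find the IP address that a guest received on the default NAT network, run either of the following commands:
$ virsh domifaddr <domain-name>
$ virsh net-dhcp-leases default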

Known issues