KVM in a PowerVM LPAR
Kernel-based Virtual Machine (KVM) is an additional virtualization option on Power10 systems that run PowerVM. KVM brings the power, speed, and flexibility of the KVM virtualization technology to a PowerVM logical partition (LPAR). An LPAR that runs a KVM-enabled Linux® distribution can host PPC64-LE KVM guests. The KVM guests can use the existing resources that are assigned to the LPAR.
Minimum software levels
To enable KVM in a Power10 logical partition, the following minimum code levels and distributions are required:
- Firmware level: FW1060.10
- HMC level: V10 R3 SP1060 or later
- KVM is enabled in kernel 6.8 and QEMU 8.2, and works with the following Linux distributions:
- Fedora® 40 with kernel 6.10
- Ubuntu 24.04
Capabilities
- IBM Power architecture and Power10: This implementation has advanced virtualization capabilities to run multiple operating system (OS) instances that share the same hardware resources while providing isolation. The Radix MMU architecture provides the capability to independently manage page tables for the LPAR and the KVM guest instances on the LPAR.
- PowerVM: The industry-leading virtualization stack provides new functions to create and manage KVM guests. These changes extend the Power platform architecture to include new hypervisor interfaces.
- Linux kernel that includes the KVM kernel module (KVM): Provides the core virtualization infrastructure to run multiple virtual machines in a Linux host LPAR. Upstream kernels and enabled downstream distributions such as Fedora and Ubuntu use the newly introduced Power architecture extensions to create and manage KVM guests in the PowerLinux LPAR.
- QEMU: User space component that implements virtual machines on the host by using KVM functions.
- Libvirt: Provides a toolkit for virtual machine management.
Requirements
- Partition must be a Linux partition that runs in Power10 processor compatibility mode.
- Partition must be enabled for KVM:
- For HMC-managed systems, you must set the partition to KVM Capable.
- For unmanaged systems, you must set the default partition environment to Linux KVM on the BMC.
- Partition must be running in Radix mode (the default MMU mode for Linux LPARs); see the check after this list.
- Partition must be assigned dedicated processors with processor sharing set to Never Allow.
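You can confirm the MMU mode from within the LPAR. This is a minimal check that assumes a ppc64le Linux LPAR, where /proc/cpuinfo reports the MMU type as Radix or Hash:
# Verify that the LPAR runs in Radix MMU mode (expect "Radix")
$ grep -i mmu /proc/cpuinfo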
Restrictions
The following features are not supported for a KVM-enabled LPAR:
- Shared processors
- vPMEM LUNs
- Platform keystore
- Live partition migration (LPM)
- Dynamic platform optimization
- Add or Remove memory, processor, and I/O DLPAR
- PCI pass-through of LPAR-attached PCI devices to KVM guests. This feature will be supported in future releases.
Device virtualization
Storage virtualization
- Virtio-blk: The virtio-blk device provides the guest with a virtual block device. This device predates virtio-scsi. When higher performance is required, virtio-blk is often preferred.
- Virtio-SCSI: This device presents a SCSI host bus adapter to the guest virtual machine. In contrast to virtio-blk, SCSI offers more features and flexibility. It is more scalable and can support hundreds of devices. Virtio-scsi uses standard SCSI command sets that simplify feature addition.
Both virtio-blk and virtio-scsi provide many tunables, such as the number of IOThreads, physical and logical block sizes, and transient disks. See the libvirt documentation for a complete list of the available tunable options.
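For example, a guest disk can be attached through virtio-blk with a domain XML snippet similar to the following sketch (the image path is hypothetical):
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest-disk.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
To use virtio-scsi instead, set bus='scsi' on the target and add a <controller type='scsi' model='virtio-scsi'/> element to the domain XML.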
Network virtualization
There are different ways to enable external network connectivity to a KVM guest. The following networking modes can be configured for the guest:
Default Networking
- Uses the default network of libvirt (no additional network configuration is required).
- The simplest way to configure and access the internet or resources that are available on the local network.
- By default, the guest OS receives an IP address in the 192.168.122.0/24 address space.
- The guest is not directly accessible from outside the host's network. You can make the guest accessible by configuring port forwarding through iptables, as shown in the sketch after the XML snippet below.
- Might not support some networking features such as ICMP (pings might not function properly).
- Other networking options can provide better performance.
- Can be configured with the following extensible markup language (XML) snippet in the guest XML:
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
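The following is a minimal port-forwarding sketch, assuming the guest has the address 192.168.122.100 on the default network and you want to reach its SSH port through host port 10022 (the address and ports are examples):
# Forward host port 10022 to the guest's SSH port
iptables -t nat -A PREROUTING -p tcp --dport 10022 -j DNAT --to-destination 192.168.122.100:22
iptables -I FORWARD -p tcp -d 192.168.122.100 --dport 22 -j ACCEPT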
Bridged Networking
Bridged networking (unlike user-mode networking) allows virtual interfaces (virtio) on the guest to connect to the outside network through a physical interface on the host. This method makes the guest appear as another host to the rest of the network.
To configure a bridged network for the guest, a bridge must first be configured on the host. Complete the following steps to create a bridge and assign a secondary device (a physical interface on the host) to it:
- Prerequisites:
- Make sure that the physical network interface of the host is not managed by NetworkManager. If it is managed, you must first delete the connection profile of that interface.
- Make sure that STP is not enabled on the bridge. When STP is enabled, the bridge sends BPDUs to the switch port that it is connected to, which can result in the switch disabling the port. See the verification sketch after these steps.
- Run the following set of commands to create and configure the bridge:
ip link add <bridge_name> type bridge
ip link set <phy_iface> master <bridge_name>
ip link set <phy_iface> up
ip link set <bridge_name> up
- After the bridge is configured properly, add the following XML configuration in the domain XML of the guest:
<interface type='bridge'>
  <source bridge='<bridge_name>'/>
  <model type='virtio'/>
</interface>
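After the bridge is up, you can verify the configuration with the following sketch, which assumes the standard iproute2 tools:
# Confirm that the physical interface is attached to the bridge
bridge link show
# Disable STP on the bridge if it is enabled
ip link set <bridge_name> type bridge stp_state 0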
MacVtap Networking
An alternative to using a bridge to enable a KVM guest to communicate externally is MacVtap networking through the Linux macvtap driver. This interface replaces the combination of the tun/tap and bridge drivers with a single module that is based on the macvlan device driver.
A MacVtap endpoint is a character device that follows the tun/tap ioctl interface and can be used directly by a KVM guest. This networking mode makes both the guest and the host show up directly on the switch that the host is connected to. One key difference compared to using a bridge is that MacVtap connects directly to the network interface in the KVM host. This shorter path provides better performance.
MacVtap can operate in the following modes:
- Virtual Ethernet Port Aggregator (VEPA)
- Bridge (most commonly used)
- Private mode
- Define a network by using the following network XML code:
<network>
  <name>macvtapnet</name>
  <forward dev='enP32769p1s0' mode='bridge'>
    <interface dev='enP32769p1s0'/>
  </forward>
</network>
- Put the following XML code in the domain XML of the guest, and then load and start the network as shown in the sketch that follows:
<interface type='network'>
  <source network='macvtapnet'/>
  <model type='virtio'/>
</interface>
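To make the network definition available to libvirt, save the network XML to a file and register it with virsh. This is a minimal sketch that assumes the XML is saved as macvtapnet.xml:
# Define, start, and autostart the MacVtap network
$ virsh net-define macvtapnet.xml
$ virsh net-start macvtapnet
$ virsh net-autostart macvtapnet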
How to set up KVM
Before you begin your setup, see the Minimum software levels and Requirements.
KVM host setup
KVM in a PowerVM LPAR is a new capability that allows KVM to run inside a logical partition. KVM is available in both MDC mode and PowerVM LPAR mode.
For unmanaged systems, set the default partition environment from the eBMC:
- Connect to the eBMC ASMI GUI at https://<eBMC IP>.
- Navigate to Server power operations.
- From the Default partition environment menu, select Linux KVM.
- Click Save.
For HMC-managed systems, set the partition to KVM Capable from the HMC GUI:
- Connect to the HMC GUI.
- Select the managed system.
- Select the LPAR.
- Click Advanced Settings.
- Select KVM Capable.
- Alternatively, log in to the HMC CLI by running the following command:
ssh hscroot@<IP>
- Run any of the following commands, as required:
- To enable KVM, run the following command:
chsyscfg -r lpar -m <Managed System> -i "name=<lparname>,kvm_capable=1"
- To disable KVM, run the following command:
chsyscfg -r lpar -m <Managed System> -i "name=<lparname>,kvm_capable=0"
- To check LPAR mode, run the following command:
lssyscfg -r lpar -m <Managed System> | grep <lparname>
Sample output
#lssyscfg -r lpar -m ltcden6 | grep ltcden6-lp1
name=ltcden6-lp1,lpar_id=1,lpar_env=aixlinux,state=Running,resource_config=1,os=linux,os_version=Unknown,logical_serial_num=134C9481,default_profile=default_profile,curr_profile=default_profile,work_group_id=none,shared_proc_pool_util_auth=0,allow_perf_collection=0,power_ctrl_lpar_ids=none,boot_mode=norm,lpar_keylock=norm,auto_start=0,redundant_err_path_reporting=0,rmc_state=inactive,rmc_ipaddr=,time_ref=0,lpar_avail_priority=127,desired_lpar_proc_compat_mode=default,curr_lpar_proc_compat_mode=POWER10,sync_curr_profile=1,affinity_group_id=none,vtpm_enabled=0,migr_storage_vios_data_status=Re-collect Failed,migr_storage_vios_data_timestamp=unavailable,powervm_mgmt_capable=0,pend_secure_boot=0,curr_secure_boot=0,keystore_kbytes=0,keystore_signed_updates=0,keystore_signed_updates_without_verification=0,linux_dynamic_key_secure_boot=0,virtual_serial_num=none,kvm_capable=1,description=
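To check only the KVM capability of the partitions, you can select specific attributes. This is a sketch that assumes the lssyscfg -F attribute-list option:
$ lssyscfg -r lpar -m <Managed System> -F name,kvm_capable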
Installing a KVM-enabled Linux distribution
All fixes that are required for KVM are available in upstream Linux kernels v6.8 and later. Ubuntu 24.04 and Fedora 40 include the KVM in a PowerVM LPAR enablement.
To install Fedora or Ubuntu in the KVM-capable LPAR, you can use one of the installation methods at:
https://www.ibm.com/docs/en/linux-on-systems?topic=servers-additional-installation-methods
Set up KVM on Fedora 40
Run the following commands:
# Install KVM/QEMU and libvirt
$ sudo dnf install -y qemu-kvm libvirt
# Install virt-install
$ sudo dnf install -y virt-install
# Install virt-customize, virt-sysprep, and other tools
$ sudo dnf install -y guestfs-tools
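Optionally, validate that the host is configured correctly for virtualization. This sketch uses the virt-host-validate tool, which is shipped with libvirt:
# Check host support for the QEMU/KVM driver
$ virt-host-validate qemu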
Start libvirtd service
Run the following commands:
# Start libvirtd
$ sudo systemctl start libvirtd
$ sudo systemctl enable libvirtd
# Verify that the libvirt daemon is working
$ virsh list --all
# Empty output is shown for a new installation
 Id   Name   State
--------------------
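You can also confirm that the KVM kernel module is loaded. This is a minimal check; the module name can vary by platform (for example, kvm_hv on Power):
# Check for KVM modules in the running kernel
$ lsmod | grep kvm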
Linux KVM guest bring-up
- Create a KVM guest by using an existing cloud (qcow2) image.
- Download a cloud qcow2 image from Fedora or other KVM-enabled Linux distribution repositories. For example: https://pubmirror2.math.uh.edu/fedora-buffet/fedora-secondary/releases/40/Server/ppc64le/images/
- Prepare the cloud image and set the root password by running the following command:
$ virt-customize --add Fedora-Server-KVM-40.ppc64le.qcow2 --root-password password:passw0rd
- Create the guest with the required resources. The following example command creates a guest that is named Fedora-40 with 4 GB RAM and 4 virtual CPUs:
$ virt-install --name Fedora-40 --ram 4096 --disk path=/home/test/Fedora-Server-KVM-40.ppc64le.qcow2 --vcpus 4 --os-variant generic --network bridge=virbr0 --graphics none --console pty,target_type=serial --import
- Create a KVM guest by using an installation image (ISO).
- Download a DVD-based ISO image for IBM Power ppc64le at https://fedoraproject.org/server/download.
- Create a disk image.
The following example uses qcow2 as a format (-f) with 40 GB size:
$ qemu-img create -f qcow2 Fedora-40.qcow2 40G
- Install the Linux operating system.
The following example uses the ISO (Fedora-Server-dvd-ppc64le-40.iso) and the disk image (Fedora-40.qcow2) that were created in the previous steps:
$ virt-install --name Fedora_40 --memory 4096 --vcpus 4 --os-variant fedora40 --network bridge=virbr0 --disk path=/home/test/Fedora-40.qcow2 --graphics none --cdrom /home/test/Fedora-Server-dvd-ppc64le-40.iso
- Follow the installation screens to complete the Linux OS installation.
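With either method, you can optionally prepare the disk image before the first boot, for example by injecting an SSH public key instead of setting a root password. This is a sketch; the image and key paths are hypothetical:
# Inject an SSH public key for the root user into the guest image
$ virt-customize -a Fedora-40.qcow2 --ssh-inject root:file:/home/test/id_rsa.pub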
List of common commands
# List all guests and their states
$ virsh list --all
 Id   Name        State
------------------------------
 1    Fedora_40   running

# Connect to the console of a guest
$ virsh console <guest_id>
$ virsh console 1
Connected to domain 'Fedora_40'
Escape character is ^] (Ctrl + ])
localhost login:

# Start a guest and attach to its console
$ virsh start <domain-name> --console
# Shut down a guest
$ virsh shutdown <domain-name>
# Remove a guest definition
$ virsh undefine <domain-name>
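To find the IP address that a running guest received on the default network, query the guest's interfaces. This sketch assumes the guest obtained its address from the libvirt DHCP service:
# Show the network interfaces and IP addresses of a guest
$ virsh domifaddr <domain-name>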