Hybrid Network Virtualization

Single Root I/O Virtualization (SR-IOV) allows a single I/O adapter to be shared concurrently by multiple logical partitions, providing hardware-level performance with no additional CPU usage, because the virtualization is performed by the adapter at the hardware level. This performance comes at the cost of Live Partition Mobility (LPM) functionality. Hybrid Network Virtualization (HNV) uses Linux® active-backup bonding to allow LPM for partitions that are configured with an SR-IOV logical port.
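
For example, you can inspect the bond that backs an HNV device through the standard Linux bonding interfaces. This is a minimal sketch; the bond name bond7e03c969 is illustrative, and the actual name is generated by powerpc-utils:
  cat /sys/class/net/bond7e03c969/bonding/mode          # reports active-backup 1
  cat /sys/class/net/bond7e03c969/bonding/slaves        # SR-IOV VF and the backing virtual device
  cat /sys/class/net/bond7e03c969/bonding/active_slave  # currently active port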

For more information about HNV, see Hybrid Network Virtualization - Using SR-IOV for Optimal Performance and Mobility.

Requirements for HNV

The following requirements and conditions must be met to perform the HNV operation:

  • Hardware Management Console (HMC) Version 9 Release 2 Maintenance Level 950, or later
  • Virtual I/O Server (VIOS) version 3.1.2.0, or later
  • Power Hypervisor with firmware at level FW950, or later
  • powerpc-utils version 1.3.8, or later, for RHEL 8.4 or later and SLES 15 SP3
  • powerpc-utils version 1.3.10, or later, for SLES 15 SP4 or later
  • Backend virtual device support
    • IBM® virtual Ethernet device (ibmveth)
      • SLES 15 SP3, or later
      • RHEL 8.4, or later
      • RHEL 9.0, or later
    • IBM® virtual Network Interface Controller (ibmvnic)
      • RHEL 8.6, or later
      • RHEL 9.0, or later
  • DynamicRM-2.0.7-7.ppc64le.rpm
  • Bonding module
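
To verify the prerequisites from the LPAR, you can run checks similar to the following minimal sketch (package and module names are from the list above):
  rpm -q powerpc-utils DynamicRM                                # package levels
  modinfo bonding ibmveth ibmvnic | grep -E '^(name|version):'  # required kernel modules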

HNV configuration

To prepare your system for the HNV operation, complete the following steps:
  1. The DynamicRM package is responsible for passing commands from the HMC to the logical partition (LPAR). New commands were added to the DynamicRM package to support the HNV feature. To verify the version of the installed DynamicRM package, run the following command:
    rpm -qa DynamicRM

    Download the latest DynamicRM package from the IBM Power Systems service and productivity tools website. A sketch of the package installation and of a connection manager check is shown after these steps.

  2. The HNV feature depends on the distribution's connection manager to create and manage the bonding interfaces for migratable SR-IOV ports. The Red Hat Enterprise Linux (RHEL) distribution ships with NetworkManager by default.
  3. The SUSE Linux Enterprise Server (SLES) distribution ships with Wicked as the default connection manager. Wicked is supported with the HNV feature on SLES 15 SP4 or later.
Note: On SUSE Linux Enterprise Server 15 SP3, Wicked is not supported for HNV; NetworkManager is available as an option to manage connections.
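
For example, if the installed DynamicRM level is older than required, you can install the package that you downloaded (the file name here is taken from the requirements list) and check which connection manager is active:
  rpm -Uvh DynamicRM-2.0.7-7.ppc64le.rpm     # install or upgrade DynamicRM
  systemctl is-active NetworkManager wicked  # identify the active connection manager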

You can use one of the following methods to enable NetworkManager on SUSE Linux Enterprise Server 15 SP3:

  1. The preferred method is to choose NetworkManager as the default connection manager when you install the operating system. To select NetworkManager, select the Desktop-Application packages and then select NetworkManager when you run the SUSE Linux Enterprise Server 15 SP3 installer.
  2. You can switch from Wicked to NetworkManager on a server that is installed with SUSE Linux Enterprise Server 15 SP3. Complete the following steps by using a virtual terminal (VTERM) session from the HMC to the LPAR. These steps are required because the existing network interfaces must be reconfigured as part of the switch:
    • For SUSE Linux Enterprise Server 15 SP3, the NetworkManager package is available in the Desktop-Applications repository.

      To add the Desktop-Applications repository to your configuration, create a repository configuration file that is similar to the following file:
      # cat /etc/zypp/repos.d/Desktop-Applications-Module_15.3-0.repo
      [Desktop-Applications-Module_15.3-0]
      name=sle-module-desktop-applications
      enabled=1
      autorefresh=1
      baseurl=nfs://192.168.100.10//net/install/linuxsuse_sles15le_SP3
      path=/Module-Desktop-Applications
      type=rpm-md
      keeppackages=0
      After you create the file, run the following command to add the file to the configuration:
      zypper addrepo /etc/zypp/repos.d/Desktop-Applications-Module_15.3-0.repo
      After you create a repository for the desktop applications on your LPAR, install the NetworkManager package by running the following command:
      zypper install NetworkManager*
    • To disable Wicked, run the following commands. After you run these commands, existing network interfaces are disabled:
      systemctl stop wicked 
      systemctl disable wicked
    • To enable NetworkManager and the HNV boot time service so that they start at boot time, run the following commands:
      systemctl enable NetworkManager
      systemctl enable hcn-init.service
    • To verify whether the HNV boot time service and NetworkManager are enabled, run the following commands:
      systemctl is-enabled hcn-init.service
      systemctl is-enabled NetworkManager  
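    • After the switch (and a reboot, if required), you can optionally confirm that NetworkManager manages the network devices. This is a sanity check, not a required step:
      systemctl is-active NetworkManager
      nmcli device status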

If you install NetworkManager and then switch back to Wicked, the network interfaces that were created by NetworkManager are not recognized by Wicked. Only the interfaces that were configured for public connectivity during the installation of NetworkManager are recognized when you switch from NetworkManager to Wicked.

To configure the HNV operation, complete the following steps:
  1. To create the HNV device by using the HMC GUI, complete the following steps:

    From the HMC GUI, follow the steps that are described in Adding SR-IOV logical ports. Select the Migratable option and choose the virtual Ethernet network as the backup interface.

    The new HNV devices become visible to the partition after activation of the partition profile, or when the migratable logical port is dynamically added to the partition. After the HNV devices are visible to the partition, the HMC automatically triggers the configuration steps that are specific to the operating system.

  2. The powerpc-utils package automatically creates the Linux active-backup bond after you create a migratable SR-IOV port by using the HMC. You do not need to manually create or maintain the bonding controller and bonding worker. Do not change the name of the automatically created bond interface because the interface name must be consistent with the platform configuration.
  3. To configure the IP address for an HNV device and to activate the bond, run the following commands. The first command lists the connections so that you can identify the bond connection ID:
    nmcli c
    nmcli c mod id bond7e03c969 ipv4.method manual ipv4.address 192.168.2.203/24
    nmcli c up bond7e03c969
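    To confirm that the bond is active and carries the configured address, you can run checks similar to the following (the connection name is from the example above):
    nmcli -f NAME,TYPE,DEVICE connection show --active
    ip -4 addr show dev bond7e03c969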

Migration

During LPM, the HMC removes the SR-IOV logical port from the partition configuration so that the virtual device becomes the active device. After the SR-IOV logical port is removed, the partition can be migrated; the migration can be a live migration, an inactive migration, or a remote restart operation. At the destination, a new SR-IOV logical port is added to the HNV device and becomes the active network interface of the partition.
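
You can observe this failover from the partition: the virtual device carries the traffic while the SR-IOV logical port is removed, and a new SR-IOV logical port becomes the active port after the migration completes. A minimal check, assuming the bond name from the earlier example:
  grep -i "currently active slave" /proc/net/bonding/bond7e03c969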

Known issues

  • You cannot perform a network installation through the HNV bonding primary interface because network installation over a bonding interface is not supported. You can use the SR-IOV secondary interface directly for network installation. However, after the installation, you must manually clean up the SR-IOV secondary interface and reconfigure the HNV network configuration.
  • On RHEL 8.4, RHEL 8.6, RHEL 9.0, and SLES 15 SP3, if you remove an HNV device with NetworkManager when the LPAR is offline, you must manually clean up the network configurations that are associated with the LPAR after the LPAR is back online.
  • On RHEL 8.4, RHEL 8.6, RHEL 9.0, and SLES 15 SP3, if the SR-IOV device is not set up as the primary interface and a failover occurs, you must manually set the SR-IOV device as the primary interface of the HNV bonding.

    The following example simulates a failover by setting the IP link down and then up on the SR-IOV device enP16392p1s0:
    [root@ltcrain41-lp3 ~]# ip link set dev enP16392p1s0 down 
    [root@ltcrain41-lp3 ~]# ip link set dev enP16392p1s0 up
    You can work around this issue by running the following command, which sets the SR-IOV device as the primary interface of the bond:
    echo enP16392p1s0 > /sys/class/net/bond191554c8/bonding/primary
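    To verify that the primary interface is set and active again, you can read the bonding attributes (the bond name is from the example above):
    cat /sys/class/net/bond191554c8/bonding/primary       # expected: enP16392p1s0
    cat /sys/class/net/bond191554c8/bonding/active_slave  # active port after failback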

Limitations

  • A maximum of 10 vNIC devices or migratable SR-IOV logical ports is supported.
  • Only Mellanox SR-IOV adapters are supported for the primary interface.
  • To use HNV and vPMEM in the same logical partition, the following Linux operating system (OS) levels are required:
    • Red Hat Enterprise Linux 9.4, or later
    • SUSE Linux Enterprise Server 15, Service Pack 6, or later