Gaining better control in directing Live Partition Mobility (LPM) resource consumption

Controlling network traffic during LPM operations


IBM® PowerVM® Live Partition Mobility (LPM) can reduce planned outages for workloads hosted on IBM POWER® processor-based servers to almost zero, and it is increasingly used by system administrators to cut operational downtime. To run these operations efficiently, we should consider both the network that will route the LPM traffic and the processor cost on the Mover Service Partitions (MSPs).

Monitoring the Virtual I/O Server (VIOS) with the topas or nmon commands can provide the information you need when routing LPM operations. Ideally, we look for the least utilized network connection with the highest available bandwidth (such as a backup network), and for the VIOS that is least utilized from a CPU perspective. We can then pass this information to the migrlpar command to ensure optimal routing of the traffic.
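For example, you can take a short utilization sample on each VIOS before the migration (a minimal sketch; the interval and snapshot count are illustrative):

$ ssh padmin@Frame1-VIOS1
$ topas                  # interactive CPU and network view
$ oem_setup_env          # switch to the root shell
# nmon -f -s 10 -c 30    # record 30 snapshots at 10-second intervals to a file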

Starting with Hardware Management Console (HMC) release V7R3.5.0, the migrlpar command is enhanced to offer MSP interface selection.
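In outline, the enhancement adds the following pairs of -i attributes (shown schematically here; the listings later in this article give complete commands):

source_msp_name=<source VIOS LPAR name>,dest_msp_name=<destination VIOS LPAR name>
source_msp_ipaddr=<source VIOS IP address>,dest_msp_ipaddr=<destination VIOS IP address>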

Environment

Listed below are the details of the IBM Power® infrastructure components used to demonstrate the migration scenarios.

Infrastructure details

Managed systems – 9119-MME E870s running firmware SC840_87

  • Managed Systems in demonstrated use cases are E870-Frame1 and E870-Frame2.

Management console – HMC 7042-CR9 (V8R8.4.0)

  • HMC in demonstrated use cases is named MyHMC.

Virtual I/O Server version – 2.2.4.10

  • VIOS in demonstrated use cases are Frame1-VIOS1, Frame1-VIOS2, Frame2-VIOS1, and Frame2-VIOS2.

AIX release of client – 7100-03-05

  • AIX client VM in demonstrated use cases is client-vm.

Configuration details

Each frame (source and target) is configured with a dual-VIOS setup for high availability (refer to Figure 1).

Figure 1. Frames connectivity diagram

Each VIOS is configured with two shared Ethernet adapters (SEAs) as shown in Figure 2.

SEA-1

  • Is configured with failover in load sharing mode and carries multiple VLANs (202, 203, 208, and 209)

SEA-2

  • Is configured with failover in auto mode [with no VLAN tagging (untagged traffic through PVID=12)]

Each SEA is backed by an 802.3ad EtherChannel of two 10 Gb network interfaces.
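To confirm the EtherChannel backing a SEA, you can query the aggregation device from the VIOS root shell (a sketch; the device name ent10 is illustrative):

# lsattr -El ent10 -a adapter_names    # physical ports aggregated into the EtherChannel
# entstat -d ent10 | grep -i 802.3ad   # confirm LACP aggregation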

Each VIOS is configured with two additional non-trunk virtual network interfaces, each connecting to a separate SEA on that VIOS.

Figure 2. Virtual network layout within each frame

The network interfaces on both non-trunk virtual network adapters on each VIOS instance are configured with IP addresses on their respective virtual local area networks (VLANs).

VIOS IP schema

Frame1-VIOS1

  • en8 : inet 10.A.B.214 netmask 0xffffff00 broadcast 10.A.B.255 <= bridged through SEA-1
  • en9 : inet 10.C.D.213 netmask 0xffffe000 broadcast 10.C.E.255 <= bridged through SEA-2

Frame1-VIOS2

  • en8 : inet 10.A.B.215 netmask 0xffffff00 broadcast 10.A.B.255 <= bridged through SEA-1
  • en9 : inet 10.C.D.214 netmask 0xffffe000 broadcast 10.C.E.255 <= bridged through SEA-2

Frame2-VIOS1

  • en8 : inet 10.A.B.208 netmask 0xffffff00 broadcast 10.A.B.255 <= bridged through SEA-1
  • en9 : inet 10.C.D.208 netmask 0xffffe000 broadcast 10.C.E.255 <= bridged through SEA-2

Frame2-VIOS2

  • en8 : inet 10.A.B.209 netmask 0xffffff00 broadcast 10.A.B.255 <= bridged through SEA-1
  • en9 : inet 10.C.D.209 netmask 0xffffe000 broadcast 10.C.E.255 <= bridged through SEA-2
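You can verify this schema on each VIOS (a sketch using Frame1-VIOS2; device names follow the layout above):

$ ssh padmin@Frame1-VIOS2
$ lsmap -all -net    # shows which SEA bridges each virtual adapter
$ oem_setup_env
# ifconfig en9       # confirm the 10.C.D.x address carried over SEA-2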

Objective

Our objective is to ensure that the specified VIOS instances (MSPs), along with our chosen network resources (SEA and physical network ports), are used during the LPM operation, so that the traffic consumes bandwidth on the less utilized SEA of our choice. This also ensures that the live production network traffic is not degraded by the high load that an LPM operation can generate.

We also collect CPU and network utilization data during the LPM operation.

Preparation

Identify which VIOS in each frame will be the MSP and therefore take the brunt of the CPU and network load. We selected VIOS2 on each frame because it had lower system utilization than its counterpart, VIOS1.

Identify the network infrastructure (SEA, physical ports) in each VIOS instance to be used to route the LPM traffic. SEA-2 was the less utilized of the two SEAs and so was chosen as the interface over which to route the traffic.

You could open additional remote sessions to monitor the VIOS using the nmon command. This provides a visual indication of the interfaces that are carrying the traffic as well as the load on the CPU.
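A minimal interactive session might look like this (the nmon hotkeys are standard; ent12 and ent13 are the SEA devices in this environment):

$ ssh padmin@Frame1-VIOS2
$ oem_setup_env
# nmon    # press 'c' for the CPU view and 'n' for the network view,
          # then watch ent12 (SEA-1) and ent13 (SEA-2) during the migration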

The following sections demonstrate two LPM scenarios, both using the HMC command line tool, migrlpar, to validate and perform the LPM operations. Each scenario uses a different command-line option to identify the MSPs. We will also see how the option used in scenario 2 gives us finer control over network resource selection during the LPM operation.

Scenario 1: LPM using MSP names (frame1 to frame2)

In this scenario, we perform live partition mobility using VIOS names to identify the MSPs for LPM.

We also observe how the network infrastructure within each MSP is loaded.

Step 1. Log in to HMC as hscroot or equivalent.

Step 2. Validate migration using the command shown in Listing 1.

Listing 1. Validate migration using MSP names

hscroot@MyHMC:~> migrlpar -o v -m E870-FRAME1 -t E870-FRAME2 -p client-vm -w 1 -i
'virtual_fc_mappings="1101/Frame2-VIOS1/1//fcs0,1201/Frame2-VIOS2/2//fcs0,1102/Frame2-VIOS1/1//fcs7,1202/Frame2-VIOS2/2//fcs7",source_msp_name=Frame1-VIOS2,dest_msp_name=Frame2-VIOS2'

Make a note of the parameters source_msp_name and dest_msp_name. These refer to the logical partition (LPAR) names of the VIOS as defined on the HMC. These two values define which VIOS will be used as MSPs.
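If you need to confirm the LPAR names, or whether a VIOS is enabled as a mover service partition, you can query the HMC (a sketch; msp is the lssyscfg attribute that flags MSP capability):

hscroot@MyHMC:~> lssyscfg -r lpar -m E870-FRAME1 -F name,lpar_env,msp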

Step 3. Run the migration.

Listing 2. Running the migration using MSP names

hscroot@MyHMC:~> migrlpar -o m -m E870-FRAME1 -t E870-FRAME2 -p client-vm -w 1 -i
'virtual_fc_mappings="1101/Frame2-VIOS1/1//fcs0,1201/Frame2-VIOS2/2//fcs0,1102/Frame2-VIOS1/1//fcs7,1202/Frame2-VIOS2/2//fcs7",source_msp_name=Frame1-VIOS2,dest_msp_name=Frame2-VIOS2'
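While the migration runs, you can track its progress from another HMC session (a sketch; migration_state is reported by lslparmigr):

hscroot@MyHMC:~> lslparmigr -r lpar -m E870-FRAME1 -F name,migration_state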

Listing 3 shows the error log (errpt) entries from the virtual machine that underwent migration, indicating the start and completion times of the LPM operation.

Listing 3. Output 1 – Error log entries from virtual machine (VM)

IDENTIFIER TIMESTAMP   T C RESOURCE_NAME DESCRIPTION
A5E6DB96   0421155716  I S pmig          Client Partition Migration Completed
08917DC6   0421155616  I S pmig          Client Partition Migration Started
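These entries can be retrieved on the client VM with the standard AIX errpt command, for example:

# errpt | grep pmig       # list the migration start and completion entries
# errpt -a -j A5E6DB96    # detailed view of the completion entry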

Figure 3 and Figure 4 show that by using MSP names, we are able to direct the LPM load to the identified VIOS (that is, VIOS2) on the source and target frames.

Figure 3. CPU usage view on the source frame VIOS instances
Figure 4. CPU usage view on target frame VIOS instances

Figure 5 and Figure 6 show the loading of network resources on the source and target MSPs. While we intended SEA-2 (that is, ent13) to carry the LPM traffic on each MSP, merely specifying the MSP name on the command line does not allow this finer selection. Hence, the LPM network traffic is bridged through an undesired SEA (that is, SEA-1 / ent12).

Figure 5. Network usage view on source MSP
Figure 6. Network usage view on target MSP

We notice from the above nmon charts that using an MSP name to initiate an LPM operation ensures that the LPM process places its CPU load on the required VIOS. However, it gives us no control over which network resources carry the LPM traffic.

Scenario 2: LPM using MSP IP addresses (frame2 to frame1)

In this scenario, we perform live partition mobility using VIOS IP addresses to identify the MSPs for LPM.

We also observe how the network infrastructure within each MSP is loaded.

Step 1. Log in to HMC as hscroot or with equivalent rights.

Step 2. Validate the migration.

Listing 4. Validate migration using an MSP IP address

hscroot@MyHMC:~> migrlpar -o v -m E870-FRAME2 -t E870-FRAME1 -p client-vm -w 1 -i
'virtual_fc_mappings="1101/Frame1-VIOS1/1//fcs0,1201/Frame1-VIOS2/2//fcs0,1102/Frame1-VIOS1/1//fcs7,1202/Frame1-VIOS2/2//fcs7",source_msp_ipaddr=10.C.D.209,dest_msp_ipaddr=10.C.D.214'

The source_msp_ipaddr and dest_msp_ipaddr parameters refer to the IP addresses of the VIOS instances to be used as mover service partitions during the mobility operation.
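Before the migration, you can ask the HMC which MSP pairs and IP addresses are valid between the two systems (a sketch; lslparmigr -r msp lists the candidate MSP combinations for a partition):

hscroot@MyHMC:~> lslparmigr -r msp -m E870-FRAME2 -t E870-FRAME1 --filter "lpar_names=client-vm"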

Step 3. Run the migration.

Listing 5. Running the migration using the MSP IP address

hscroot@MyHMC:~> migrlpar -o m -m E870-FRAME2 -t E870-FRAME1 -p client-vm -w 1 -i
'virtual_fc_mappings="1101/Frame1-VIOS1/1//fcs0,1201/Frame1-VIOS2/2//fcs0,1102/Frame1-VIOS1/1//fcs7,1202/Frame1-VIOS2/2//fcs7",source_msp_ipaddr=10.C.D.209,dest_msp_ipaddr=10.C.D.214'

Listing 6 shows the error log (errpt) entries from the VM that underwent migration, indicating the start and completion times of the LPM operation.

Listing 6. Output 2 – Error log entries from VM

IDENTIFIER TIMESTAMP  T C RESOURCE_NAME DESCRIPTION
A5E6DB96   0421162116 I S pmig          Client Partition Migration Completed
08917DC6   0421162116 I S pmig          Client Partition Migration Started

Figure 7 shows that by using MSP IP addresses, we are able to direct the LPM load to the identified VIOS (that is, VIOS2) on the source and target frames.

Figure 7. CPU usage view on source and target MSP

Figure 8 and Figure 9 show the loading of network resources on the source and target MSPs. Using MSP IP addresses, we are able to finely control the loading, directing the LPM traffic over the identified SEA-2 (that is, ent13) on each MSP.

Figure 8. Network usage view on source MSP
Figure 9. Network usage view on target MSP

We notice from the above nmon charts that using an MSP IP address to initiate the LPM operation ensures that the LPM operation's CPU load lands on the required VIOS instances and its network load on the chosen SEA.

Conclusion

In this article, we demonstrated two scenarios. In the first, we used mover service partition names to initiate the LPM operation. This allowed us only to ensure that the required VIOS carried the CPU load of the LPM process; it gave us no control over the selection of the network resources to be utilized.

In the second scenario, we used mover service partition IP addresses, which gave us control over the selection of the network infrastructure (SEA and physical ports) that carries the burst of LPM traffic, and equally, the ability to isolate the other SEAs from it.

By specifying MSP IP addresses for an LPM operation, an administrator gains better control over LPM traffic. This careful selection of network resources can deliver optimal performance (by using identified idle resources) and isolate heavy LPM traffic from other workloads.
