Live Partition Mobility requirements

The requirements for successful migration of active LPARs between Power Systems servers using PowerVM Live Partition Mobility include the following:

Both the source and target servers must have POWER6 CPUs, and both must have PowerVM Enterprise Edition installed. JS22 blades are eligible; JS21 blades are not.

If HMC-managed, both servers must be managed by the same HMC unless the HMC code is at Release 7 Version 3.4 or later, which introduced the ability to migrate between servers managed by different HMCs. If IVM-managed, neither server can be HMC-managed. The Integrated Virtualization Manager (IVM) is a component of PowerVM. (Thanks to VinceJohnson for his comment on 9/24/2010 suggesting improvements to this paragraph.)

The LPAR to be moved must be running AIX V5.3, AIX V6.1, or Linux.

The Virtual I/O Servers on the source and target servers must be able to communicate with each other over the network.
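
As a simple sanity check, a short Python sketch like the following can confirm basic TCP reachability between the two Virtual I/O Servers. The hostnames and the port are placeholders (ssh is used here only as a stand-in for any reachable service), so substitute values appropriate to your environment.

    import socket

    SOURCE_VIOS = "vios-src.example.com"   # placeholder hostname
    TARGET_VIOS = "vios-tgt.example.com"   # placeholder hostname
    PORT = 22                              # ssh, used only as a generic reachability test

    def reachable(host, port, timeout=5.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for host in (SOURCE_VIOS, TARGET_VIOS):
            state = "reachable" if reachable(host, PORT) else "NOT reachable"
            print(host + ": " + state)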

In the move preparation phase, the contents of the LPAR's memory must be transmitted over virtual Ethernet (via the Virtual I/O Servers) from the source server to the target server. Pages updated after they have been transmitted must be transmitted again. Assuming a Gigabit Ethernet can transmit 80 MB/sec and that 256 GB of working memory is in use on the source LPAR, it will take at least 256*1024/80 = 3276.8 seconds, or roughly 55 minutes, to prepare for the move if only a gigabit of network bandwidth is available. That figure assumes source working memory pages are not updated after they are transmitted, which is NOT a good assumption, so the actual time will be longer. The Virtual I/O Servers will consume roughly one CPU on the source and one on the target managing the data transmission. (See VIOS Sizing for more information on estimating CPU consumption by a VIO Server.) And unless sufficient network bandwidth is available to accommodate this additional activity, application network bandwidth and response time will be impacted during the move preparation phase.
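
To make the arithmetic above concrete, here is a small Python sketch that reproduces the single-pass figure and then adds a crude allowance for pages dirtied during the copy. The 80 MB/sec throughput, the 10% re-dirty rate per pass, and the number of passes are illustrative assumptions, not measured values.

    def estimate_transfer_seconds(memory_gb, mb_per_sec=80.0, redirty_fraction=0.10, passes=3):
        """Rough estimate of memory copy time, re-sending the fraction of
        pages assumed to be dirtied during each previous pass."""
        remaining_mb = memory_gb * 1024
        total_s = 0.0
        for _ in range(passes):
            total_s += remaining_mb / mb_per_sec
            remaining_mb *= redirty_fraction  # pages touched while copying
        return total_s

    if __name__ == "__main__":
        single_pass = 256 * 1024 / 80.0          # the 3276.8-second figure from the text
        print("single full pass: %.1f s (%.1f min)" % (single_pass, single_pass / 60))
        est = estimate_transfer_seconds(256)
        print("with dirty-page retransmission: %.1f s (%.1f min)" % (est, est / 60))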

Once the move preparation phase is complete, only a few seconds are required to actually move the LPAR.

All I/O in an LPAR to be moved must be virtualized; any dedicated PCI adapters must be deallocated before the LPAR can be moved.
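
One way to spot adapters that would block a move is to list the LPAR's adapters from within AIX and flag anything that does not look virtual. The sketch below shells out to the standard AIX lsdev command; the test for the word "Virtual" in the description is only a heuristic (an assumption of this sketch), so confirm the results against the partition profile on the HMC.

    import subprocess

    def possibly_physical_adapters():
        """List adapter names whose lsdev description does not mention 'Virtual'."""
        out = subprocess.run(["lsdev", "-Cc", "adapter"],
                             capture_output=True, text=True, check=True).stdout
        flagged = []
        for line in out.splitlines():
            # Typical lsdev format: "<name> <state> <location> <description>"
            if line.strip() and "Virtual" not in line:
                flagged.append(line.split()[0])
        return flagged

    if __name__ == "__main__":
        names = possibly_physical_adapters()
        if names:
            print("Adapters that may need to be deallocated: " + ", ".join(names))
        else:
            print("No obviously physical adapters found.")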

AIX must boot from SAN LUNs accessible to the Virtual I/O Servers on both the source and target servers. Considerations when booting from SAN suggest that AIX dump space be configured on a SCSI hdisk dedicated to the LPAR, or on a vSCSI disk mapped to an internal SCSI disk dedicated to a Virtual I/O Server LPAR. AIX dump space must therefore be deallocated (or perhaps reallocated to a LUN) prior to moving the LPAR and (re)allocated to a (different) dedicated SCSI disk after moving the LPAR.
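
For the dump-device step, a sketch along these lines could inspect and re-point the AIX system dump device around the move using the standard sysdumpdev command. The device name /dev/dumplv0 is a placeholder, so confirm the actual logical volume names on the LPAR before using anything like this.

    import subprocess

    def show_dump_config():
        """Return the output of 'sysdumpdev -l', which lists the current dump devices."""
        return subprocess.run(["sysdumpdev", "-l"],
                              capture_output=True, text=True, check=True).stdout

    def set_primary_dump_device(device):
        """Permanently (-P) set the primary (-p) dump device."""
        subprocess.run(["sysdumpdev", "-P", "-p", device], check=True)

    if __name__ == "__main__":
        print("Before:\n" + show_dump_config())
        # Placeholder device name; before the move this might be a SAN-backed
        # logical volume, after the move a dedicated SCSI disk's logical volume.
        set_primary_dump_device("/dev/dumplv0")
        print("After:\n" + show_dump_config())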

Network switches and routers must support and properly handle gratuitous ARP packets, so that when the LPAR's IP and MAC addresses move from one network port to another, traffic follows them to the new location.
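
For readers unfamiliar with gratuitous ARP, the following Python sketch uses the third-party scapy library (not an IBM tool) to build and send the kind of announcement involved: an ARP reply in which the sender advertises its own IP-to-MAC mapping to the broadcast address. The IP address, MAC address, and interface name are placeholders, and sending raw frames normally requires root privileges.

    from scapy.all import ARP, Ether, sendp   # requires the scapy package

    LPAR_IP = "192.0.2.10"           # placeholder address (documentation range)
    LPAR_MAC = "02:00:00:00:00:10"   # placeholder locally administered MAC
    IFACE = "eth0"                   # placeholder interface name

    # Gratuitous ARP: an ARP reply in which the sender and target addresses are
    # both the host's own, broadcast so every device on the segment updates its
    # ARP cache and switches relearn which port the MAC address lives on.
    garp = Ether(src=LPAR_MAC, dst="ff:ff:ff:ff:ff:ff") / ARP(
        op=2,                        # 2 = ARP reply ("is-at")
        hwsrc=LPAR_MAC, psrc=LPAR_IP,
        hwdst="ff:ff:ff:ff:ff:ff", pdst=LPAR_IP,
    )

    if __name__ == "__main__":
        sendp(garp, iface=IFACE, verbose=True)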