Contents
1. Introduction
2. Available Live Update features till AIX 7.2 TL2
3. Environment details for Live Update Across Frames
4. Live Update Across Frames functionality details
5. VM Migration Requirements for Live Update
6. How to perform Live Update Across Frames
1. Introduction
From time to time, the AIX operating system needs to be updated, either by installing interim fixes (ifixes) in the form of kernel or kernel extension fixes, or by moving to a newer Service Pack or Technology Level. This affects business-critical workloads because the system may need a reboot for the newer version to take effect. The Live Update (LU) operation solves this problem by allowing ifixes, Service Packs, and Technology Levels to be applied without requiring a system restart.
One major obstacle to the adoption of Live Update is that it requires roughly 2X processor and memory resources on the server that hosts the original virtual machine (VM). On heavily loaded systems there may not be enough free processor or memory to create the surrogate VM, let alone to run several Live Updates concurrently.
To overcome these constraints, AIX 7.2 TL3 introduces the ability to perform Live Update using an alternate server that has sufficient processor and memory resources to accommodate both the original and surrogate VMs. The Live Update Across Frames functionality is available only for PowerVC-based Live Update.
This blog provides insight into how users can leverage this new functionality, along with the detailed steps to follow.
2. Available Live Update features till AIX 7.2 TL2
For new Live Update users, basic knowledge of the existing features is a helpful precursor to appreciating the new Live Update Across Frames feature. The evolution of Live Update features in AIX 7.2 is depicted in the figure below.
Figure 1: Evolution of Live Update
To learn more about each of these features, refer to the following links.
Introduction to AIX live update in AIX 7.2 TL0
IBM Knowledge Center AIX 7.2 live update in AIX 7.2 TL0
Blog: Introduction to live update in AIX 7.2 TL0
Blog: Live update with Service Pack / Technology Level PTFs install/update in AIX 7.2 TL1
Blog: Live update with PowerVC in AIX 7.2 TL2
3. Environment details for Live Update Across Frames
Prerequisites for the Live Update Across Frames feature
1. System Firmware
2. IBM Power Virtualization Center (PowerVC)
- 18.104.22.168 and later
3. IBM PowerVM Novalink
- 22.214.171.124 and later
4. Hardware Management Console (HMC)
- 860 SP2
- 920 and later
5. Virtual I/O Server (VIOS)
- 126.96.36.199 and later
4. Live Update Across Frames functionality details
When there is a shortage of free system resources on the original server to accommodate both the original and surrogate VMs during a Live Update operation, administrators can either choose a different destination server themselves or let Live Update pick one based on pre-defined placement parameters. The next section elaborates on these factors in detail.
The whole process works as follows:
- VM migration to a destination Server (Step 1)
- Live Update at the destination Server (Step 2)
- VM migration back to Original Server (Step 3)
PowerVC provides the flexibility to choose the destination server regardless of whether it is HMC-managed or NovaLink-managed. For example, the source server can be a PowerVC HMC-managed server and the destination a PowerVC NovaLink-managed server, or vice versa.
We will discuss the procedure with the help of an example. In the flowchart in Figure 2, VM1 is the node that needs to undergo LU, and it is hosted by Server1. LU always requires 2X system resources to accommodate both the original and surrogate VMs. If administrators observe that Server1 does not have 2X free resources but Server2 does, they can opt for the Live Update Across Frames feature.
- In Step 1, VM1 migrates to Server2 using the Live Migration feature of PowerVC (also known as Live Partition Mobility, or LPM).
- In Step 2, Live Update is performed on Server2. After the Live Update completes, VM1 runs with the updated kernel, and all resources used by the original VM are freed.
- In Step 3, VM1 is migrated back to Server1 using PowerVC Live Migration again. After this migration, VM1 no longer consumes any resources on Server2.
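The three stages can be summarized as a stub shell sketch. The function names and host names below are illustrative only; the real orchestration is driven end to end by Live Update through PowerVC.

```shell
# Stub outline of the three Live Update Across Frames stages. The real
# orchestration is performed by Live Update via PowerVC; these functions
# only echo the sequence of events for clarity.
migrate_vm()  { echo "LPM: $1 -> $2"; }
live_update() { echo "Live Update on $1: surrogate built, kernel updated"; }

migrate_vm  VM1 Server2   # Step 1: move VM1 to the host with 2X free resources
live_update Server2       # Step 2: LU on Server2; original VM resources freed
migrate_vm  VM1 Server1   # Step 3: move VM1 back; nothing remains on Server2
```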
Figure 2: Live Update Across Frames flowchart.
The requirements for VM migration are explained in the next section. Users who are already familiar with VM migration in a PowerVC environment can skip it and go directly to section 6, "How to perform Live Update Across Frames".
5. VM Migration Requirements for Live Update
The Live Update Across Frames operation utilizes the existing PowerVC VM migration functionality. The following requirements must be fulfilled to make the setup migration-capable.
1. Host Requirements
- The destination host must have enough free processor and memory resources.
- Migration can be performed on a VM whose host is managed by either HMC or NovaLink.
- The logical memory block (LMB) size must be the same on the source and destination hosts.
- The host must be in the active state.
- The VM's processor compatibility mode must be supported on the destination host.
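As a purely illustrative sketch, the host checks above could be expressed as follows. All variable names and values are invented for the example; a real check would query the HMC or NovaLink inventory instead of hard-coded values.

```shell
# Hypothetical preflight check mirroring the host requirements above.
src_lmb_mb=256; dst_lmb_mb=256           # LMB sizes must match
vm_compat=POWER8                         # VM processor compatibility mode
dst_compat_modes="POWER7 POWER8 POWER9"  # modes the destination host supports
dst_state=active                         # destination host must be active

ok=yes
[ "$src_lmb_mb" -eq "$dst_lmb_mb" ] || ok=no
case " $dst_compat_modes " in
  *" $vm_compat "*) : ;;                 # compatibility mode is supported
  *) ok=no ;;
esac
[ "$dst_state" = "active" ] || ok=no
echo "migration-capable: $ok"
```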
Host and Host group
A host group groups hosts logically, regardless of whether their underlying architecture (system, network, and storage configurations) is common. The only requirement for adding a host to a host group is that the host must be managed by the same PowerVC instance. Any newly added host becomes part of the 'default' host group. A host can be part of only one host group at a time, but an existing host can be moved to another host group.
Every host group has an attribute called the placement policy. A VM can be migrated only to a host within its host group, and the destination host for migration is selected based on the placement policy set for the host group.
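PowerVC offers several placement policies (for example, striping versus packing based on utilization). As a hedged illustration only, a "most free memory" style selection could be modeled like this, with invented host names and sizes:

```shell
# Illustrative "most free memory" placement decision. The host inventory is
# inlined here; PowerVC derives it from live data for the host group.
best_host=""
best_free=0
while read -r host free_mb; do
  if [ "$free_mb" -gt "$best_free" ]; then
    best_free=$free_mb
    best_host=$host
  fi
done <<EOF
Server2 65536
Server3 32768
Server4 131072
EOF
echo "selected destination: $best_host"
```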
For more information on host groups and placement policies, refer to the links below.
2. Storage Requirements
The original and destination hosts must have the same Storage Connectivity Group (SCG) configuration.
The number of VIOSes and the Fibre Channel (FC) port configuration in the fabric must be the same on the destination host as on the original host.
Figure 3. Representation of SCG with Fabric connection.
Figure 3 shows an example of Fibre Channel (FC) port and Storage Connectivity Group (SCG) configuration. In this figure, HOST1 and HOST2 have two VIOSes each, and their FC adapters are connected to different fabrics. There are two SCGs, named TEST and DEV. TEST uses only the FC ports of VIOS A1 and VIOS B2 that are connected to Fabric A, while DEV uses all VIOSes and the FC ports connected to both Fabric A and Fabric B.
FC port channel configuration and port tagging
All Fibre Channel ports connected to the VIOSes of the hosts are listed on the PowerVC Configuration → Fibre Channel port configuration page. Depending on the storage connectivity and fabric requirements, FC ports are tagged.
In Screenshot 7, a single FC port of the VIOS on each host, connected to the same fabric, has been tagged with the label "ZZ1-ZZ5". More information about FC port tagging can be found in PowerVC Config FC ports HMC.
Screenshot 7: Fibre Channel Port Tagging
Storage Connectivity Group with FC Port Tag
A Storage Connectivity Group is created from a set of VIOSes, Fibre Channel (FC) fabrics, and ports for the deployment and migration of VMs.
In Screenshot 8, the "VSCSI-NPIV" SCG is created with vSCSI mapping for the boot volume and NPIV for the data volumes. Disk attachment to the VM is restricted by adding the VIOSes and the FC ports labeled "ZZ1-ZZ5".
As shown in Screenshot 9, adding VIOSes from different hosts to the group allows the VM to be migrated to another host within the SCG. You can read more about Storage Connectivity Groups in powervc_storage_connectivity_groups.
Screenshot 8: SCG creation with Port Tag ZZ1-ZZ5
Screenshot 9: Different Host’s VIOS addition to SCG
3. Network Requirements
- Both the source and destination hosts must be compatible from a network perspective. This means the virtual networks used by the VM undergoing LU must also be available on the destination server, and the VLAN IDs of those virtual networks must be bridged to a Shared Ethernet Adapter (SEA) on the VIOS of both the source and destination hosts.
- The source and destination hosts must be connected to the same virtual switch/subnetwork.
- SR-IOV networks must be supported on both the source and destination hosts.
- The SR-IOV adapters' physical ports on the source and destination hosts must have the same port label and the same type of physical adapter, with the same number of logical ports and bandwidth allocated to the virtual machine.
4. Remote Management and Control (RMC) Connection
The virtual machine and the VIOSes must be in the active state, and their RMC status must be active.
Users planning a VM migration should also be aware of the following restrictions.
- VM migration is not supported to a host that is not part of the original host's host group.
- A collocation rule with affinity or anti-affinity can restrict migration of the virtual machine to a destination host.
More information can be found about this topic in powervc_relocation_reqs_hmc.
6. How to perform Live Update Across Frames
The changes outlined below have been added to the existing LU functionality to support the Across Frames feature. The VM is migrated either to a user-specified destination host or to a host selected from the source host's host group.
A. Modifications to the lvupdate.data file in the /var/adm/ras/liveupdate directory
To support Live Update Across Frames, two new attributes have been added to the pvc stanza in the lvupdate.data file.
1. The destination attribute
The destination attribute provides the destination host information. It can be set to either:
- A destination host name, as shown on the PowerVC Hosts page
- The keyword "ANY"
If a destination host name is provided and all migration prerequisites are satisfied, the VM is migrated to that host before LU starts.
If "ANY" is provided and all migration prerequisites are satisfied, the VM is migrated to a destination host selected by the host group placement policy.
2. The force_migration attribute
The force_migration attribute forces Live Update to happen on another host even if sufficient resources are available on the current host.
The force_migration attribute can be either “yes”, “no”, or missing/empty. The missing/empty value is treated as “no”. Any other value is invalid.
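These value-handling rules can be captured in a small, hypothetical shell sketch; the function name is ours and is not part of AIX.

```shell
# How a force_migration value is interpreted, per the rules above:
# "yes" forces migration to another host, "no" or an empty/missing value
# keeps the default behavior, and anything else is invalid.
interpret_force_migration() {
  case "$1" in
    yes)   echo "force"   ;;  # migrate even with enough local resources
    no|"") echo "default" ;;  # migrate only when local resources are short
    *)     echo "invalid" ;;  # rejected by Live Update
  esac
}

interpret_force_migration yes    # -> force
interpret_force_migration ""     # -> default
interpret_force_migration maybe  # -> invalid
```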
In Screenshot 10, we can see how the lvupdate.data file is modified when the destination host is specified as ANY.
Screenshot 10: lvupdate.data file with the ANY option
In Screenshot 11, the destination attribute in the lvupdate.data file is set to a specific host name.
Screenshot 11: The lvupdate.data file with the destination attribute set to a host name
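Putting the two attributes together, the pvc stanza in lvupdate.data ends up looking roughly like the sketch below. Only destination and force_migration are shown; the existing PowerVC connection attributes are elided, and the inline comments are ours, for explanation only.

```
pvc:
        ...                      # existing PowerVC connection attributes (unchanged)
        destination = ANY        # or a host name from the PowerVC Hosts page
        force_migration = no     # "yes" migrates even if the current host
                                 # has enough free resources
```

Live Update is then started as usual, for example with the geninstall -k command, and the migration happens transparently before the update begins.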
B. Live Update Across Frames operation from the VM telnet Console
The following screenshots show the LU flow as observed from the telnet console of the VM.
Screenshot 13: LU authentication, geninstall command and LU preview checks
Screenshot 15 shows the VM being migrated to the destination host, the LU operation being performed, and the VM being migrated back to the source host.
C. Live Update operation view from the PowerVC Graphical User Interface (GUI)
The following screenshots show the VM migration process as observed from the PowerVC GUI.
Stage 1: VM migrating from host fvt-zz5-nova to the destination host fvt-zz1-vnova
Screenshot 16: VM is migrating to destination host
Screenshot 17: VM has migrated to the destination host
Screenshot 18: Surrogate created with temporary IP address and the LU is in progress.
Stage 4: VM Migrating back to Source host
Screenshot 19: VM is migrating back to source host with Original IP address.
Stage 5: LU operation Completed
Screenshot 20: Post LU completion, the VM will be back at the original host
Screenshot 21 shows the PowerVC GUI messages for the operations requested by the Live Update.
Screenshot 21: PowerVC GUI messages
This article has provided insight into performing a Live Update of the AIX operating system without a system reboot, even when sufficient free processor and memory resources are not available on the original server (that is, host). Live Update Across Frames allows administrators to choose a different destination server with adequate resources to perform the Live Update. It is supported only on PowerVC-managed hosts and uses PowerVC's VM Migration feature.
Authors: Sharath kumar N R, Sanchita Sinha, Madusudanan Vasudevan, Kasinadh P Divvela