Upgrading the Virtual I/O Server - SSP cluster
Learn about the process of upgrading or migrating the Virtual I/O Server (VIOS) when the VIOS belongs to a Shared Storage Pool (SSP) cluster.
- Non-disruptive upgrades
- Disruptive upgrades
Non-disruptive upgrades
A general recommendation is that Virtual I/O configurations use dual VIOS environments. This configuration ensures that an alternative path is always available for I/O communication from client logical partitions if the primary path goes offline. For non-disruptive upgrades, you can upgrade all Virtual I/O Servers in the primary path while keeping the Virtual I/O Servers in the alternative path active. During the upgrade process, the cluster and exported Logical Units (LUs) remain available to the client logical partitions through the VIOS cluster nodes in the alternative path. Client logical partitions can actively read from and write to the SSP Logical Units through the other available Virtual I/O paths. After you upgrade the primary Virtual I/O Servers and add them back to the cluster by using the viosbr -restore command, you can upgrade the Virtual I/O Servers in the alternative path by repeating the same process.
- Upgrade all the SSP nodes from VIOS version 2.2.4.x, or later, to VIOS version 2.2.6.30, or later, where the target VIOS version must be equal to or greater than 2.2.6.30 and less than version 3.1. After you upgrade all the nodes, wait for the rolling upgrade process to complete in the background; during this process, the contents of the old database are migrated to the new database. The rolling upgrade process is purely internal to the SSP cluster and no action is necessary from you to start it.
- As a second step, upgrade all the SSP nodes from VIOS version 2.2.6.30, or later to VIOS version 3.1.0.00, or later.
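The two-step constraint above can be sketched as a small helper that maps a node's ioslevel string to the upgrade step it needs next. The version thresholds are the ones named in this section; the helper itself is plain POSIX shell and is an illustrative sketch, not an official tool:

```shell
# Decide which upgrade step applies to a node, given its reported VIOS
# level (for example "2.2.4.10"). A level below 2.2.6.30 needs step 1;
# a level at or above 2.2.6.30 but below 3.1 needs step 2.
upgrade_step_for() {
    level="$1"
    # Sort the level against the threshold field by field, numerically;
    # the first line of the sorted pair is the lower version.
    lowest=$(printf '%s\n%s\n' "$level" "2.2.6.30" | \
        sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -n1)
    if [ "$lowest" = "$level" ] && [ "$level" != "2.2.6.30" ]; then
        echo "step 1: update to 2.2.6.30 first (updateios)"
        return
    fi
    lowest=$(printf '%s\n%s\n' "$level" "3.1.0.00" | \
        sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -n1)
    if [ "$lowest" = "$level" ] && [ "$level" != "3.1.0.00" ]; then
        echo "step 2: upgrade to 3.1 (viosupgrade)"
    else
        echo "already at 3.1 or later"
    fi
}

upgrade_step_for "2.2.4.10"   # prints the step 1 message
upgrade_step_for "2.2.6.32"   # prints the step 2 message
```

On a VIOS node, the input string would come from the ioslevel command.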
- Using the updateios command:
- The updateios command updates the VIOS to the necessary maintenance level. You do not need to take a backup or restore the VIOS metadata, as no new installation takes place.
- You can upgrade the VIOS to version 2.2.6.30 by using the following command:
updateios -dev <update image location>
For example:
updateios -dev /home/padmin/update
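After updateios completes on a node, you can confirm the resulting level from the padmin shell with the ioslevel command. The fallback in the sketch below exists only so the snippet degrades gracefully on systems where ioslevel is unavailable:

```shell
# Report the installed VIOS level; "unknown" is a fallback for
# illustration when the ioslevel command is not present.
level=$(ioslevel 2>/dev/null || echo unknown)
echo "current VIOS level: $level"
```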
- Using the viosupgrade command from NIM Master – bosinst method:
The viosupgrade command from NIM Master is not supported in shared storage pool cluster environment for VIOS versions earlier than 2.2.6.30. You can update the VIOS by using the updateios command or by performing a Manual Backup-Install-Restore.
- Manual Backup-Install-Restore:
For more information about this method, see Traditional method - manual.
You can check the status of the SSP cluster by using the following command:
cluster -status -verbose
If the status of the SSP cluster is UP_LEVEL, your cluster nodes (Virtual I/O Servers) are not ready for migration to VIOS version 3.1.
During the 2-step upgrade process, to upgrade from a VIOS that is at a version earlier than 2.2.6.30 to a VIOS version 3.1, or later, it is mandatory for the cluster to be at ON_LEVEL for all the SSP cluster nodes after the first upgrade step takes the cluster nodes to version 2.2.6.30, or later. When the last node in the cluster gets upgraded to level 2.2.6.30, or later, the SSP internal process called Rolling Upgrade starts and migrates the contents of the SSP database from the older version to the currently installed version. The ON_LEVEL status for all SSP cluster nodes indicates the completion of step 1 of the upgrade process.
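The ON_LEVEL check can be scripted. The sketch below treats any UP_LEVEL marker in the cluster -status -verbose output as "rolling upgrade still running". The ON_LEVEL and UP_LEVEL strings come from this section; the parsing itself is an assumption to adapt to the exact output of your VIOS level:

```shell
# Return 0 when no node reports UP_LEVEL, that is, when the whole
# cluster is ON_LEVEL and step 1 of the upgrade process is complete.
rolling_upgrade_done() {
    case "$1" in
        *UP_LEVEL*) return 1 ;;  # at least one node is still mid-migration
        *)          return 0 ;;
    esac
}

# Capture the real status on a VIOS node; fall back to UP_LEVEL so the
# sketch stays runnable where the cluster command does not exist.
status=$(cluster -status -verbose 2>/dev/null || echo UP_LEVEL)
if rolling_upgrade_done "$status"; then
    echo "cluster is ON_LEVEL: ready for step 2"
else
    echo "cluster is UP_LEVEL: wait before upgrading to 3.1"
fi
```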
- Using the viosupgrade -bosinst command from NIM Master:
- You can upgrade the VIOS to version 3.1.0.00 by using the following command:
viosupgrade -t bosinst -n <hostname> -m <mksysb_image> -p <spot_name> -a <hdisk> -c
For example:
viosupgrade -t bosinst -n vios1 -m vios_3.1.0.0 -p vios_3.1.0.0_spot -a hdisk1 -c
- You can check the status of the VIOS installation by using the viosupgrade -q vios1 command.
- Using the viosupgrade -altdisk command from NIM Master:
- To avoid downtime during VIOS installations, you can use the NIM altdisk method. This method preserves the current rootvg image and installs the VIOS on a new disk by using the alt_disk_mksysb method.
- You can upgrade the VIOS to version 3.1.0.00 by using the following command:
viosupgrade -t altdisk -n <hostname> -m <mksysb_image> -p <spot_name> -a <hdisk> -c
For example:
viosupgrade -t altdisk -n vios1 -m vios_3.1.0.0 -p vios_3.1.0.0_spot -a hdisk1 -c
- You can check the status of the VIOS installation by using the viosupgrade -q vios1 command.
Note: The viosupgrade -altdisk option is supported in VIOS version 2.2.6.30, or later. Hence, this option is not applicable to upgrades with VIOS versions earlier than 2.2.6.30.
- Using the viosupgrade command from VIOS – non-NIM environment:
- In a non-NIM environment, you can also use the viosupgrade command from the VIOS to upgrade the VIOS. For this method, you do not need a NIM master. The viosupgrade command must be run directly on the VIOS. This method uses the alt_disk_mksysb command to install VIOS version 3.1.0.00 on the provided disk.
- You can upgrade the VIOS to version 3.1.0.00 by using the following command:
viosupgrade -l -i <mksysb image> -a <hdisk>
For example:
viosupgrade -l -i vios3.1_mksysb -a hdisk1
- You can check the status of the VIOS installation by using the viosupgrade -l -q command.
Note: The viosupgrade -altdisk option is supported in VIOS version 2.2.6.30, or later. Hence, this option is not applicable for upgrades with VIOS versions earlier than 2.2.6.30.
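Both alt_disk-based methods need a spare target disk for the -a flag. One quick way to shortlist candidate disks is to filter lspv output for entries whose volume group column reads None (unassigned). The column position below assumes the usual lspv layout; verify it on your system:

```shell
# List physical volumes that are not assigned to any volume group,
# given captured `lspv` output. Column 3 holding the volume group name
# is an assumption about the standard lspv layout.
free_disks() {
    printf '%s\n' "$1" | awk '$3 == "None" { print $1 }'
}

# Example against a typical lspv-style listing (sample data, not real
# output from any system):
sample="hdisk0 00f6050a2b5cfa5e rootvg active
hdisk1 00f6050a2b5d0a1c None
hdisk2 00f6050a2b5d1b2d None"
free_disks "$sample"   # prints hdisk1 and hdisk2, one per line
```

On a live VIOS node you would feed it the real listing, for example free_disks "$(lspv)".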
- Manual Backup-Install-Restore:
For more information about this method, see Traditional method - manual.
- Traditional method - manual:
In the traditional method, you must back up the clusters by using the viosbr -backup -cluster command, save the backup in a remote location, install the VIOS by using the available VIOS version, copy the backup data back to the VIOS after the installation, and then restore the VIOS metadata by using the viosbr -restore command.
To back up the cluster-level VIOS metadata, complete the following steps:
- Back up the cluster-level VIOS metadata by using the following command:
viosbr -backup -clustername <clusterName> -file <FileName>
Note: Save the file (FileName) in some location and place it on the VIOS after step 2 is complete to restore the VIOS metadata.
- Install the VIOS image by using the available installation methods, such as NIM installation. Note: If the VIOS is part of a cluster and the Shared Ethernet Adapter (SEA) is configured on the Ethernet interface used for cluster communication, you must restore the network configuration before you restore the cluster. To restore the network configuration before the cluster restore, complete step 3. If you encounter any errors during step 3, you can use the -force flag to continue restoring the network configuration. If SEA is not configured on the Ethernet interface used for cluster communication, then go directly to step 4.
- Restore all the network configurations before restoring the cluster, by using the following command:
viosbr -restore -file <FileName> -type net
Note: The backup file must be copied to the VIOS before you start the restore process. If no IP address is configured on the VIOS to transfer the backup file, complete the following steps.
- Restore the cluster by using the following command:
viosbr -restore -clustername <clusterName> -file <FileName> -repopvs <list_of_disks> -curnode
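The backup-install-restore flow above can be wrapped in a small guard so that the restore step refuses to run until the saved backup file is back on the freshly installed VIOS. The cluster name, backup path, and repository disk below are placeholders for illustration, not values from this document:

```shell
# Placeholders: substitute your own cluster name, backup file location,
# and repository disk before use.
CLUSTER=sspcluster
BACKUP=/home/padmin/cfgbackups/sspcluster.backup.tar.gz

restore_cluster() {
    # Refuse to restore until the backup file saved before the fresh
    # install has been copied back onto this VIOS.
    if [ ! -f "$BACKUP" ]; then
        echo "backup file $BACKUP missing: copy it onto the VIOS first" >&2
        return 1
    fi
    # Restore the cluster on the current node (the final step above).
    viosbr -restore -clustername "$CLUSTER" -file "$BACKUP" \
        -repopvs hdisk2 -curnode
}
```

Calling restore_cluster before the file is in place prints the error and returns a nonzero status instead of invoking viosbr.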
Disruptive upgrades
In an SSP cluster environment, if you choose disruptive upgrades, the client logical partitions are likely to go offline because the Logical Units (LUs) of the SSP are not available while the cluster is offline during the upgrade. To perform this type of upgrade, you must handle the upgrade process manually.