IBM Support

Upgrading to VIOS 3.1

How To


Lots of information to help you make the move to VIOS 3.1 quickly and accurately, including hints and tips.



Helping everyone to upgrade their VIOS in function, performance, and reliability.


All Power servers running PowerVM and Virtual I/O Servers


If you don't know "VIOS" is short for Virtual I/O Server, then you are probably on the wrong page.


  • Latest versions are
    • VIOS 3.1.3
    • VIOS 2.2.6 - the latest VIOS 2.2.6 level is always the recommended stepping stone to get to 3.1.
      This version is referred to in this article as VIOS 2.2.6.LATEST.
  • VIOS Supported Software - the software that you are allowed to install without invalidating your IBM VIOS Support.
NEW NEWS - Sept 2019

Damien Ferrand (IT Consultant and VIOS administrator) points out

  • If upgrading from the latest VIOS 2.2.6 level to an older VIOS 3.1 release, the VIOS viosupgrade command does not work.  Upgrade to the current latest VIOS 3 release instead.  The problem is that the newer VIOS 2 level has some package versions higher than those in the older VIOS 3 release.
  • My rule-of-thumb is: Always upgrade straight to the latest VIOS 3.1
  • Thanks to Damien for the excellent feedback

SDDPCM is removed during the migration to VIOS 3.1. It is a good opportunity to remove this old driver.

  • Prerequisite OS Levels
    • AIX 7.1 TL3
    • AIX 7.2 any TL
    • VIOS
  • Comment
    • The AIXPCM driver improvements include all significant functionality provided by SDDPCM.
    • The AIXPCM driver has the advantage over SDDPCM of not requiring specific maintenance.
    • All updates are applied as part of the Operating System Maintenance.
  • More information:
  • Latest Multipath Subsystem Device Driver User's Guide: 

Note: The term "NIM master" is used with care in this article because using the "m" word by itself triggers the IBM "big brother" thought police alarms! 

VIOS 3.1 - New Functions

  • See Bob G Kovacs session on VIOS 3.1
  • Slide deck of 35 slides and Replay of 90 minutes
  • POWER/AIX Virtual User Group on 25 October 2018
  • The Big Item is the VIOS moves to AIX 7.2 TL3; then smaller features include:
    • The AIX update means the latest kernel and device drivers and so better performance
    • iSCSI disks are supported and presented to client LPARs as vSCSI disks
    • General clean-up by removing unused packages for a smaller footprint and faster upgrades
    • Shared Storage Pool further RAS improvements like repository disk auto replace

Upgrading to VIOS 3.1

  • See Nigel Griffiths' session on Upgrading to VIOS 3.1
  • Slide deck of 100 slides and Replay of 90+ minutes from YouTube
  • Power Virtual User Group on 21 November 2018

A summary of the key charts from the Upgrading to VIOS 3.1 Webinar is at the end of this article in the Questions and Answers section.

Upgrading strategies 1 - Don't Upgrade at All.

  1. New POWER9 server?
    • Start on VIOS 3.1 & set up as normal
  2. Zero risk upgrade plan
    1. Evacuate the Server with Live Partition Mobility (LPM) -> no LPARs on the Server = no risk
    2. Install a fresh VIOS 3.1 then set up as normal
    3. Then, use LPM to move all the LPARs back to the original server
  3. Shared Storage Pool (SSP)
    1. Use LPM to move the SSP LPARs off the server
    2. Remove this server's VIOS from the SSP (cluster -rmnode)
    3. Perform a fresh installation of VIOS 3.1 then set up the configuration as normal
    4. Add server back into the SSP (cluster -addnode)
    5. Next, use LPM to move the LPARs back 
    6. TESTED AND GOOD TO KNOW:  A newly installed VIOS 3.1 can join a Shared Storage Pool of VIOS running 2.2.6.LATEST

VIOS 3.1 Pre-req

  • POWER9, POWER8, and POWER7+ (known as the D models) are all fully supported
  • POWER7, POWER5, and older servers are not supported - they probably work fine (there is no code to block them working) but they are not supported nor tested by IBM

  • VIOS 2.2.5 - one more year of support
  • VIOS 2.2.4 - support ends Q4 2018 - update/upgrade "As soon as possible" and upgrade to VIOS 3.1 while you have one of your dual VIOS down
  • VIOS 2.2.3 or earlier - a dumb idea


  • “Update”  is a minor improvement 2.2.5 to 2.2.6
  • “Upgrade”  is a major improvement 2.2.6 to 3.1
  • VIOS 2.x.x.x based on AIX 6.1.  For example, VIOS 2.2.6.LATEST = AIX 6.1 TL5 sp9 + VIOS packages
  • VIOS 3.1 based on AIX 7.2 TL3 sp2 
  • An AIX 6.1 to AIX 7.2 upgrade requires a complete disk reformat and overwrite installation.
  • The AIX upgrade is similar to upgrading from VIOS 2 to VIOS 3

“Traditional” Upgrade - the high-level process

All VIOS upgrades (VIOS 2 to VIOS 3) involve 

  1. Back up the VIOS metadata and configuration 
    • Save the backup remotely
  2. Back up any non-VIOS application data files - save to a remote place
    • Are the LPAR disk files in the rootvg file system?
    • Warning: it is easy to forget the vDVD Optical Library and any LPAR virtual disks based on rootvg LVs or file-backed devices
  3. Install from the VIOS 3.1 mksysb. This operation rebuilds the rootvg and file systems
    • Possibly on an alternative disk
  4. Get the VIOS backup & restore the metadata and configuration
  5. Reinstall any non-VIOS application code and data 
  6. If the VIOS is using the SSP, then extra steps are required 

Note: various VIOS or NIM commands automate some parts of the process.
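Step 1's "save the backup remotely" is easier if the backup file name encodes the host and date, so copies from several VIOS never collide. A minimal sketch with hypothetical file and server names (on a real VIOS the backup itself is taken with viosbr -backup):

```shell
# Build a collision-proof name for the viosbr backup file
# (hypothetical naming scheme - adjust to your site standards)
host=$(hostname)
stamp=$(date +%Y%m%d)
file="viosbr_${host}_${stamp}"
echo "backup file name: $file"
# on a real VIOS: viosbr -backup -file "$file"
# then save it off the VIOS, for example:
#   scp "${file}.tar.gz" user@backupserver:/vios-backups/
```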

NIM and new upgrade commands

  • Preparation: NIM server must be at AIX 7.2 TL3 sp2+
  • Two New commands to help us upgrade
  • The command viosupgrade. There are two commands with the same name!
    • On the NIM server (when you update it to AIX 7.2 TL3 sp1+)
    • On the VIOS (when you update it to VIOS 2.2.6.LATEST)
    • Each has a different syntax.
Not the same as the updateios command – it’s for minor updates!

IBM Docs Manual pages

  1. Top Level of the IBM Docs page for VIOS migration:
  2. Vague Introduction to the installations methods
  3. Non-SSP (Shared Storage Pool)
  4. SSP (Shared Storage Pool)
  5. Migration levels
  6. Miscellaneous
  7. Unsupported
  8. What's new in Virtual I/O Server commands

  9. Virtual I/O Server release notes – including installation from USB Memory device or Flash device key
    • USB Memory key or Flash key installation
    • Duff minimum size for a VIOS
  10. VIOS viosupgrade command in VIOS 2.2.6.LATEST 
  11. NIM viosupgrade command on the NIM AIX 7.2 TL3 + sp 

iSCSI starter pack

Download the VIOS 3.1 installation software

Preparing for NIM

  • Getting hold of the mksysb file is a pain!
    • You need to extract the mksysb image from the .iso
    • Just to make life interesting, it is in two parts - one on each of the DVDs
    • Use the cat command to join them together
    • You end up with the base level and then have to manually add the service pack
  • The Flash image is already at the higher level (note: the level includes the first service pack)
    • I could not extract the mksysb by loop-mounting the .iso on AIX
    • Errors:  “can’t open the readable file!”
    • But I could extract it with a Linux loop-mount (Ubuntu 18.04 on Power)
    • UPDATE: The flash .iso is in udfs format (for USB memory key or thumb drive) and not the older CD/DVD cdrfs format.
      Thanks to Marc-Eric Kahle/Germany/IBM and Frank Kruse
      New note: this udfs option is a new-ish AIX feature - my AIX 7.1 TL2 can't mount udfs format but my AIX 7.2 TL3 can
      •   loopmount -i VIOS_31010_Flash.iso -m /mnt -o "-V udfs -o ro"

Script to extract the mksysb from the one Flash .iso (at your own risk = don't blindly run this command)

  # On AIX 7.2 TL3 or above (definitely fails on AIX 7.1 TL2)
  # Note: udfs for a Flash image
  loopmount -i flash.iso -o "-V udfs -o ro" -m /mnt
  DIR=/usr/sys/inst.images
  ls -l /mnt/$DIR/mksysb_image    # display the file
  cp    /mnt/$DIR/mksysb_image VIOS31_mksysb_image

Script to extract the two DVDs .iso to make a single mksysb (at your own risk = don't blindly run this command)

  # Assuming the DVDs are in this directory and with these names
  DVD1=/tmp/dvdimage.v1.iso
  DVD2=/tmp/dvdimage.v2.iso

  mkdir /mnt1
  mkdir /mnt2

  # Watch out for those double quotes and "-" characters. Note: cdrfs for a DVD
  loopmount -i $DVD1 -o "-V cdrfs -o ro" -m /mnt1
  loopmount -i $DVD2 -o "-V cdrfs -o ro" -m /mnt2

  df    # to show they got mounted OK

  DIR=/usr/sys/inst.images
  ls -l /mnt1/$DIR/mksysb*  /mnt2/$DIR/mksysb*    # display the files
  cat   /mnt1/$DIR/mksysb*  /mnt2/$DIR/mksysb*  >VIOS31_mksysb_image

  umount /mnt1
  umount /mnt2
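The cat join in the script above is a plain byte-wise concatenation, so a quick sanity check is that the output size equals the sum of the two parts. A sketch with dummy stand-in files (a real run would use the two extracted mksysb halves):

```shell
# Dummy stand-ins for the two mksysb halves from the DVDs
printf 'first-half-'  > /tmp/mksysb_part1
printf 'second-half'  > /tmp/mksysb_part2

cat /tmp/mksysb_part1 /tmp/mksysb_part2 > /tmp/VIOS31_mksysb_image

s1=$(wc -c < /tmp/mksysb_part1)
s2=$(wc -c < /tmp/mksysb_part2)
sj=$(wc -c < /tmp/VIOS31_mksysb_image)

# the joined file must be exactly the sum of the parts
[ "$sj" -eq $((s1 + s2)) ] && echo "size check OK"
```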

Pathways to VIOS 3.1

[Diagram: pathway for non-SSP upgrades]

[Diagram: pathway for SSP upgrades]

[Diagram: Nigel's recommended pathway]

VIOS viosupgrade command

  • Read the manual page on IBM Docs
  • Mandatory: an unused disk and VIOS 2.2.6.LATEST
  $ viosupgrade -l -i image_file -a mksysb_install_disk [ -c ] [-g filename ]

First parameter is a lowercase L for “local”
For humans:

  • Non-SSP:
      viosupgrade -l -i /tmp/vios31.mksysb -a hdisk33 -g /tmp/filelist
  • With SSP:
  viosupgrade -l -i /tmp/vios31.mksysb -a hdisk33 -g /tmp/filelist -c
  • -g file of file names that get copied for you to the new disk = neat!
  • -c = cluster = Shared Storage Pool
  • List the state of the upgrade with:
      viosupgrade -l -q

For altinst_rootvg "already exists" errors

These commands are useful:

  •   # alt_rootvg_op -X altinst_rootvg
      Bootlist is set to the boot disk: hdisk0 blv=hd5


  •   $ exportvg altinst_rootvg
      $ importvg -vg rootvgcopy hdisk1

For disk busy or in use errors

Be careful - it might be true!

These commands are useful.

List all the disks and repository in use by a Shared Storage Pool.

Do not fiddle or use these disks.

  $ lscluster -d

List the disks that are definitely not in use.

  $ lspv -free

The ODM could be tracking their previous use, or the disk header could be indicating it is in use.
If you have POWER9 NVME, try

  $ lspv -unused

Another option (as the root user) is to zap the front of the disk (safer than using the dd command).

  # cleandisk -?

Read and you decide how to screw up your disks!

Also, worth knowing about 

  # chpv -C hdisk3


  • -C option:  Clears the owning volume manager from a disk. This flag is only valid for the root user. This command could fail to clear LVM as the owning volume manager if the disk is part of an imported LVM volume group.

Regression Test – for the virtual networks & virtual disks

This section generates the details that you need to check.

The initial list is:

  1. Virtual disks:
       lsmap -all
  2. Virtual networks:
       lsdev | grep SEA ; lsdev -attr -dev <your-SEAs>
  3. The VIOS network and IP settings (as root):
      ifconfig -a
  4. Useful scripts that the system admin team use (like my 'n tools' for SSP).
  5. Set up the automatic viosbr backup, sending the backups to a remote server.
  6. Note the padmin users and passwords.
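One way to use those lists: capture the command output to files before the upgrade, capture again after, and diff them. A sketch with dummy capture files (on a real VIOS the files would hold the lsmap, lsdev, and ifconfig output):

```shell
# dummy "before" and "after" captures standing in for real lsmap output
printf 'vhost0 -> client LPAR 3\nvhost1 -> client LPAR 4\n' > /tmp/regress.before
printf 'vhost0 -> client LPAR 3\nvhost1 -> client LPAR 4\n' > /tmp/regress.after

if diff /tmp/regress.before /tmp/regress.after >/dev/null
then
    echo "regression check: no differences"
else
    echo "regression check: CHANGED - investigate" >&2
fi
```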

Get the important files moved over for you

The VIOS viosupgrade command has a nice -g filename option.  This file contains the full path names of files that you would like it to copy to the alternative disk once it is installed with VIOS 3.1.  This saves time later.

The files get copied to /home/padmin/backup_files/<then the full file name you requested>
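That layout is easy to script against: the original full path is simply appended to the backup directory. A sketch with a dummy directory (on a real VIOS 3.1 the directory is /home/padmin/backup_files):

```shell
BK=/tmp/backup_files        # stand-in for /home/padmin/backup_files
orig=/etc/ntp.conf          # one of the files named in the -g list

# simulate what viosupgrade does: place the file under $BK + full path
mkdir -p "$BK$(dirname "$orig")"
echo "server 10.1.1.1" > "$BK$orig"

# after the upgrade you copy it back, e.g.: cp "$BK$orig" "$orig"
ls "$BK$orig"
```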

Here are my suggestions

  1. /etc/motd
  2. /etc/environment
  3. /etc/netsvc.conf
  4. /etc/resolv.conf
  5. /etc/hosts
  6. /etc/inittab
  7. /etc/ntp.conf  
Some "daft people" add empty lines or comments - don't go there!
If the VIOS is using LDAP for security, save the LDAP configuration files.
If the VIOS is using local user and password controls:
  1. /etc/group
  2. /etc/passwd
  3. /etc/security/passwd
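The -g argument is a plain text file with one full path name per line (no comments, no blank lines). A sketch building it from the suggestions above:

```shell
# Build the file list handed to: viosupgrade ... -g /tmp/filelist
cat > /tmp/filelist <<'EOF'
/etc/motd
/etc/environment
/etc/netsvc.conf
/etc/resolv.conf
/etc/hosts
/etc/inittab
/etc/ntp.conf
EOF
wc -l < /tmp/filelist    # 7 entries
```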

Save the output of the following commands, for the padmin user and for the root user:

   crontab -l
   lsmap -all
   lsdev
   lsdev -dev
  # disks and attributes
  for i in $(lsdev -dev hdisk* -field name)
  do
      echo ==========  $i
      lsdev -dev $i -attr
  done

  # networks and attributes
  for i in $(lsdev -dev en* -field name)
  do
      echo ==========  $i
      lsdev -dev $i -attr
  done
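To make the comparison easy later, send the loop output to a file you keep with the backups. A sketch, with dummy device names standing in for the lsdev output so it runs anywhere:

```shell
out=/tmp/attrs_before.txt
: > "$out"                       # start with an empty file
for i in hdisk0 hdisk1           # on a VIOS: $(lsdev -dev hdisk* -field name)
do
    echo "========== $i" >> "$out"
    # on a VIOS: lsdev -dev $i -attr >> "$out"
done
grep -c '==========' "$out"      # one header line per disk
```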

If you have Nigel's SSP tools:

  1. /home/padmin/ncluster
  2. /home/padmin/nlu
  3. /home/padmin/nmap
  4. /home/padmin/npool
  5. /home/padmin/nslim

Save any nmon or topas output files to compare later for a performance check

  tar cvf /tmp/var_perf_daily.tar  /var/perf/daily/*

Double check for any local useful script in:

  • /home/padmin
  • /usr/lbin

Not for use but for reference:

  • /etc/tunables/nextboot
  • /etc/tunables/lastboot
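After the upgrade you can diff the saved VIOS 2 nextboot file against the new VIOS 3 one, strictly for reference (remember: do not copy the old values over). A sketch with dummy files standing in for the two /etc/tunables/nextboot copies:

```shell
# dummy stand-in for the saved VIOS 2 /etc/tunables/nextboot
cat > /tmp/nextboot.vios2 <<'EOF'
vmo:
        minfree = "960"
EOF
# dummy stand-in for the fresh VIOS 3 /etc/tunables/nextboot
cat > /tmp/nextboot.vios3 <<'EOF'
vmo:
EOF
diff /tmp/nextboot.vios2 /tmp/nextboot.vios3 || echo "tunables differ - expected; leave the new defaults alone"
```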

With an SSP

Make sure you have a running SSP cluster to join back to after you run the VIOS viosupgrade on each VIOS.

NIM viosupgrade 

Check the manual page in IBM Docs or on the NIM server at AIX 7.2 TL3+

  To perform the bosinst type of upgrade operation, use the following syntax:

  viosupgrade -t bosinst -n hostname -m ios_mksysbname -p spotname
       {-a RootVGCloneddisk: ... | -r RootVGInstallDisk: ... | -s}
       [-b BackupFileResource] [-c] [-e resources: ...] [-v]

  To perform the altdisk type of upgrade operation, type the following command:

  viosupgrade -t altdisk -n hostname -m ios_mksysbname -a rootvgcloneddisk
       [-b BackupFileResource] [-c] [-e resources: ...] [-v]

  KEY
  -t bosinst (overwrites the rootvg)
  -t altdisk (makes the rootvg on a new alternative disk and leaves the original rootvg untouched, renamed old_rootvg)
  -n redvios2
  -m VIOS31_mksysb
  -p VIOS31_spot
  -c  This is an SSP VIOS
  -v  Validate the install parameters and VIOS state

On the NIM server

   viosupgrade -v  

Key: v = validate

Nice feature to confirm that you have everything set up right before the upgrade.

It makes a dozen checks or so and the details are saved in a log file.

  # viosupgrade -v -t altdisk -n redvios2 -m vios31010_mksysb -a hdisk3 -c
  Welcome to viosupgrade tool.
  Triggered validation..
  Check log files for more information,
  Log file for 'redvios2' is: '/var/adm/ras/ioslogs/redvios2_10289506_Fri_Nov_16_10:29:13_2018.log'.
  Please wait for completion..
  -----------------------------------
  Validation successful for VIO Servers:
  redvios2
  -----------------------------------

  # viosupgrade -v -t bosinst -n redvios2 -m vios31010_mksysb -p vios31010_spot -a hdisk3 -c
  Welcome to viosupgrade tool.
  Triggered validation..
  Check log files for more information,
  Log file for 'redvios2' is: '/var/adm/ras/ioslogs/redvios2_10289514_Fri_Nov_16_10:41:06_2018.log'.
  Please wait for completion..
  -----------------------------------
  Validation successful for VIO Servers:
  redvios2
  -----------------------------------
  #

NIM Resources Required for the VIOS

On a regular NIM install of a new VIOS, you create a NIM machine for the VIOS. That does not work for a VIOS upgrade.  

[Screen capture: the wrong (old) NIM way]

You need to define these as Management Resources:

  • HMC,
  • CEC (server) and
  • The VIOS as a VIOS (not a NIM machine resource)

UPDATE: A comment on the statement "No idea what a Password file is?"

  • Well, I have a friend who does - Marc-Eric Kahle in Germany
  • This Password file is a NIM feature to allow remotely getting a console from the NIM server.  I have not tried this method myself yet.

  • You need to install DSM.core on the NIM master and run:

      # dpasswd -f /export/nim/passwd/7063CR1hmc_passwd -U hscroot -P abc1234
      Password file is /export/nim/passwd/7063CR1hmc_passwd
      Password file created.
  • Download dsm.core?
    See this link for details: 



To get a VIOS adopted by a NIM server, on the VIOS as padmin run:

  •   remote_management [ -interface Interface ] NIMmaster
      remote_management -disable
  • To enable remote_management by the NIM master (here called nim32), type:
  •   $ remote_management -interface en0 nim32
      nimsh:2:wait:/usr/bin/startsrc -g nimclient >/dev/console 2>&1
      0513-059 The nimsh Subsystem has been started. Subsystem PID is 11337892.
  • To disable remote_management, type:
  •   remote_management -disable

The NIM viosupgrade command then finds the named VIOS as a NIM resource and not a NIM machine.

Log files - just in case

  • NIM viosupgrade details the log file that it is using
  •   viosupgrade -l -q 
  • This command outputs the current or final state
  • Log files for debug purpose:
    • On NIM server:  viosupgrade command from NIM
      • viosupgrade Command logs: /var/adm/ras/ioslogs/*
      • NIM Command Logs:  /var/adm/ras/nim*
    • On VIOS: viosupgrade
      • viosupgrade Command logs: /var/adm/ras/ioslogs/*
      • viosupgrade Restore logs:  /home/ios/logs/viosup_restore.log
      • viosupgrade Restore logs:  /home/ios/logs/viosupg_status.log
      • viosbr Back up Logs: /home/ios/logs/backup_trace*
      • viosbr Restore Logs: /home/ios/logs/restore_trace*
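When several upgrade attempts pile up, the newest log is usually the one you want. A sketch using a dummy directory in place of /var/adm/ras/ioslogs:

```shell
logdir=/tmp/ioslogs_demo               # stand-in for /var/adm/ras/ioslogs
mkdir -p "$logdir"
echo attempt1 > "$logdir/older.log"
sleep 1                                # make the timestamps differ
echo attempt2 > "$logdir/newer.log"

newest=$(ls -t "$logdir" | head -1)    # newest first by modification time
echo "newest log: $newest"
```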

Fifth method: Disruptive VIOS 3.1 upgrade with SSP 

The alternative "whole SSP down" method is Highly Disruptive! Not for Production.

  • Upgrade to VIOS 2.2.6.<latest-version>
  • On each VIOS: viosbr -backup - Save the files off the VIOS
  • Stop ALL LPARs
  • Stop ALL VIOS in the entire pool
  • Then, for each SSP VIOS in turn
    • Upgrade VIOS with complete overwrite
    • viosbr -recover  -> including SSP backup

Nigel’s Ultra blunt opinion about Production Upgrades

  • In my humble opinion:
    • Fresh installation of VIOS 3.1 for new servers – do not upgrade your Production to VIOS 3.1 until mid Q1 2019 to allow any bugs + fixes to arrive.
  • Why?
    • Messing up your Production VIOS & its LPARs is painful
  • In the meantime run tests on:
    • The upgrade process
    • Prepare to save and later reinstall your non-VIOS applications & data
    • New features of VIOS 3.1 – in particular the new iSCSI features (if you like iSCSI)
  • I am sure once at VIOS 3.1 that it all works fine

VIOS Tuning Options

  • Any AIX 6.1 tuning option disappears during the installation and upgrade
    • So all the old mistakes are wiped out = a good thing!
  • AIX development decided to reset the tuning defaults in AIX 7.1 & 7.2 for best performance
    • AIX 7.2 tuning options are different and there are more of them
    • AIX 6.1 tuning does NOT apply to AIX 7.2
    • Do not apply your old VIOS 2 tune-up script on VIOS 3
  • You may:
    • Monitor performance for a week to double check
    • Run the VIOS advisor: “part command” to see what it suggests
    • If you have a problem: Raise a PMR - have a VIOS snap and perfPMR ready
    • Don’t start randomly adding AIX 6 tuning 

Summary: VIOS 3.1 Upgrade in practice – in general

  • New Features: AIX 7.2, RAS, performance + iSCSI LUN to vSCSI 
    • Nice to have but not massive functional differences 2.2.6.<latest-version> to 3.1
    • Everything works the same as before
  • Clean slate: a fresh overwrite installation of the VIOS flushes out the cruft
  • It is your job to handle the extra applications (code & data)
  • You run Dual VIOS for RAS & to make upgrades simple
  • Don’t upgrade at peak loads
  • Think about a simple regression test before you start. 
  •  "Backup, backup, and backup"
    • At least two of the following methods: disk clone, mksysb of the whole VIOS, viosbr, alternative disk installation
  • Upgrading any production server - always practice before you start.

Summary: VIOS 3.1 Upgrade in practice – on the day

  1. Don’t do it – fresh installation on a new server, or evacuate old servers first
  2. Regardless of the method: viosbr -backup & save the file off the VIOS
  3. Always install to an alternate disk
  4. Don’t forget the non-VIOS apps (code and data) and rootvg virtual disk or virtual DVD
  5. Preferred methods [[in Nigel’s opinion]]
    1. For SSP users, always use method b)
    2. updateios to version 2.2.6.<latest-version>, then NFS the VIOS 3.1 mksysb to the target VIOS & run the VIOS viosupgrade
    3. Traditional Manual is still a good method: back up, then scratch installation of the VIOS by using DVD, USB flash drive, HMC, or NIM, and recover the metadata
    4. NIM viosupgrade -t bosinst (for SSP, get to 2.2.6.<latest-version> first)
    5. NIM viosupgrade -t altdisk - Note: not tested successfully by Nigel yet (out of time)
  6. Run the regression test commands & compare with the saved output

Post upgrade Checks - NEW SECTION

On the VIOS - some items are new and particularly for SSP users.

  1. Check the VIOS time zone - set it with cfgassist, and you have to reboot.
  2. Check the VIOS date and set it if needed.
  3. Set up any network time protocol server (ntp) service.
  4. Set up /etc/netsvc.conf especially.
  5. When using SSP, a missing DNS can stop SSP communication; to search locally for the hostname before DNS, use for example:
    •   hosts = local, bind
  6. Set up /etc/hosts; when using SSP, include all the VIOS members of the SSP.
  7. Check the new VIOS level: 
  8. Check the paging space size and layout: (as the root user)
      lsps -a
  9. Check file system sizes - some, like /tmp and /home, can be back to the default size:
       df -g
  10. Check  users: (as the root user)
      lsuser ALL
  11. Your original Virtual Optical Library is gone - you need to re-create it and restore the .iso images
  12. Check the tuning options: (as the root user):
    •   ioo -L
        lvmo -L
        nfso -L
        no -L
        raso -L
        schedo -L
        vmo -L

On your VIOS client LPARs (virtual machines):

  1. Check your disk paths:
    • Important to do this BEFORE upgrading your second VIOS of a pair
  2. If not enabled, enable your paths as the root user
      lspath | awk '{print "chpath -s enable -l " $2 " -p " $3 }' | ksh
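To preview what that one-liner will run, drop the final "| ksh". A sketch feeding it sample lspath output (columns: status, device, parent adapter):

```shell
# Dry-run: print the chpath commands instead of executing them.
# The two sample lines stand in for real lspath output.
printf '%s\n' \
  'Failed  hdisk0 vscsi0' \
  'Enabled hdisk0 vscsi1' |
awk '{print "chpath -s enable -l " $2 " -p " $3}'
```

Once the printed commands look right, append "| ksh" to execute them.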

New section:  Hard-won experience

Disk Renumbering

  • The hdisk order in AIX and VIOS is determined by the order of discovery at initial installation time, and later-added disks get higher hdisk numbers
  • VIOS 3.1 is a fresh installation
  • So if you added disks after the initial VIOS 2 installation - expect the hdisks to be reordered during the upgrade
  • The viosupgrade command knows this and deals with the reorder issue, but you could have a shock: you asked the command to use alternative disk hdisk38 but it becomes hdisk6
  • If you (for example) run viosupgrade -l -a hdisk2 . . . 
  • Once you boot VIOS 3.1, the old hdisk2 could still be hdisk2, or maybe a different hdisk number - this behavior is expected.
  • Unless you have a simple low number of internal disks from the original install and no update history - expect a different hdisk number.
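Because hdisk numbers move, record something stable before the upgrade: the PVID in column 2 of lspv output survives a renumber. A sketch with sample lspv lines standing in for real output:

```shell
# before the upgrade: note the PVID of the disk you care about
# (sample line stands in for real "lspv" output: name, PVID, VG)
before='hdisk38 00f6db0a1234abcd None'
pvid=$(echo "$before" | awk '{print $2}')
echo "saved PVID: $pvid"

# after the upgrade: find the new hdisk name by matching that PVID
after='hdisk6 00f6db0a1234abcd altinst_rootvg'
echo "$after" | awk -v p="$pvid" '$2 == p {print "disk is now: " $1}'
```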

VIOS mirror break, upgrade, and remirror

  • Many of my VIOS 2 had rootvg mirrored. Breaking the mirror gives you an alternative disk to install on, too
    • VIOS: unmirrorios hdiskX
    • Note the named hdiskX is the disk which remains in use
    • Remove the empty disk from rootvg (cfgassist) and then upgrade to VIOS 3.1, naming the unused disk as the alternative disk for the VIOS 3.1 installation
  • When happy that VIOS 3.1 is working OK, remirror by using the mirrorios command
  • I had many problems as I got "that disk is in use"
  • I tried cleandisk, chpv -C hdiskX, creating a VG on it and removing the VG, and so on, but it is not working - create a PMR with IBM Support

VIOS with SSP pairs

  • If you have one VIOS of an SSP pair currently being upgraded and then run the VIOS viosupgrade -l . . . on the second VIOS, I get 
    "Cluster state in not correct"   (including the typographical error).
  • The translation is: You can't seriously want both VIO Servers of this pair to be down at the same time!
  • It is a good check, but if you evacuated the server to upgrade the VIOSs, it means you can't do both at the same time.
  • As the viosupgrade takes about 10 to 15 minutes on fast disks (SSD or NVMe), it is not too bad.  If the first upgrade fails for some reason, at least you get to fix it and learn before you fail the 2nd VIOS.
  • Wait until the first VIOS is upgraded, rebooted, and operating in the SSP cluster again.

When you get to VIOS 3.1, I fully expect it to be Rock Solid as normal for new VIOS releases

Questions and Answers - During the Upgrading to VIOS 3.1 Webinar

  • This section: not officially reviewed.
  1. What about application certifications? Is VIOS 3.1 certified by SAP HANA?
    • Ask SAP or the application vendor about their certification.
    • It is not IBM's responsibility and IBM does not make statements on their behalf.
  2. Is it still possible to back up VIOS 3.1 using backupios and mksysb and then restore  with the NIM (I assume NIM on AIX 7.2 is required)?
    • Yes.
  3. On VIOS 2.2, because it is based on AIX 6.1 it uses SMT-4 (if available on your hardware). Is it beneficial for VIOS 3.1 to enable SMT 4 or even 8 on POWER9?
    • Yes.
    • The SMT number is set automatically. There is no need to fiddle.
    • With VIOS 3.1+ on POWER9, it uses SMT=8.
  4. Why was IVM removed?
    • There is little remaining use of IVM with clients. 
    • Sure there are some users that liked IVM (for example in the POWER5 days like the p505 1U server).
    • IBM can't justify the development and testing costs. 
    • Some people think IBM is a bottomless pit of money and people time - it is not true.
    • We understand: IVM users love it and it is fast.
    • Upgrade to the latest VIOS 2.2.6.<latest-version>. Wait 3 years (with support) and then power off the server.  It is probably 8 years old by then and overdue for retirement and running unsupported everything.
  5. Do I need any special license to use VIOS 3.1 on servers where I am currently using VIOS 2.x?
    • Your PowerVM license covers the Hypervisor and VIOS and is not version-specific, so you are entitled.
  6. Is VIOS IFL compatible for POWER 980?
    • VIOS is AIX so it does not use IFLs.
    • As far as I know, you can have a 100% IFL enabled server and run Linux LPARs as well as VIOS LPARs on it.
    • The VIOS, though based on AIX, is not considered an AIX LPAR for IFL usage. The VIOSs are in their own separate category.
  7. I have dual VIOS server (lets say A and B). I want to avoid upgrading (as it is problematic) but I have no LPM nor possibility to create move VIOS partition (due to hardware). I would like to do a fresh installation of VIOS 3.1 on VIOS B. Is it possible to set up SEA failover/NPIV/maybe SSP as on the previous installation so there would be mixed VIOS versions (A - 2.X and B - 3.1) for some time for testing purposes? Is it possible from support and technical point of view?
    • Yes, that all works fine.
  8. Does VIOS 3.1 support vNIC?
    • Yes.
  9. Is LDAP supported on VIOS?
    • Yes and not changed with VIOS 3.1
    • You have to save the LDAP configuration and recover that yourself but that is like what you do for each new server and new pair of VIOSs.
  10. The "flash" version has the full mksysb?
    • Correct
  11. If installing from an HMC, can you use the "flash" .iso?
    • Yes, you can use the flash .iso using an HMC
    • I can't think of a reason why that would not work. BTW - if you plug a USB device into an HMC, you can use the lsmediadev CLI command to see where it is mounted.
  12. Is it possible to do an "alternative disk" installation on one more disk, so I can quickly boot back to 2.2.x?
    • Correct. Done by hand or by using any of the three viosupgrade options.
    • Don't get confused by the NIM viosupgrade -t altdisk  = that -t option name is confusing.
  13. Before upgrading the VIOS to 3.1, is there any compatibility check we need to do with respect for the SAN?
    • VIOS 2.2 and VIOS 3.1 use the SAN in the same way.
  14. Does this alternative disk installation use a caching disk on the NIM server or NFS - or do we have the choice
    • We are not familiar with the "caching disk" method. 
    • We use the standard NIM method of pushing an mksysb and spot to the client LPAR.
  15. If I choose NIM viosupgrade -t bosinst mode, is "Rollback" possible?
    • Yes, you have an alternative disk fallback.  Using viosupgrade -t bosinst with the -a hdiskX option saves the original VIOS disk as a fallback option.
  16. Apparently, the viosupgrade command is in 2.2.6.<latest-version> and running it upgrades the VIOS to 3.1 and AIX to 7.2. So somehow those ~5G of mksysb for a fresh installation are in fact inside 2.2.6.<latest-version> itself?  Is my question clear?
    • Your question is as clear as mud.  VIOS 3.1 installation image downloaded from ESS includes AIX 7.2. Just like VIOS 2.2 installation image includes AIX 6.1.
    • In VIOS 2.2.6.<latest-version>, you have the viosupgrade command binary but no VIOS 3.1 installation image.
    • You have to download the VIOS 3.1 .iso image, extract the mksysb and hand the mksysb file to the viosupgrade command.
  17. Based on the list of files Nigel's example copied with the -g, I'm expecting that NTP configuration wouldn't be preserved by default either?
    • Correct.  It is exactly like overwriting the rootvg and recovering the VIOS metadata - that is exactly what it did.  Everyone needs to have clear documentation on the additional setup they use . . . because you have to do that for every new POWER server.
  18. I read in a doc that after upgrade to VIOS 3.1 through "viosupgrade" tool, disk attributes like "reserve_policy" will be set to "no_reserve" and "queue_depth" will be set to "1". If "viosupgrade" tool reboots VIOS at this situation, disk SCSI reservations affects client LPARs and client LPAR ends up in a hung state. How to overcome this situation?
    • UPDATE: We checked - the viosbr backup of the metadata seems to include these settings.
    • The first boot does not have the settings. Later the backup is restored - including the settings
    • The Second boot means that the settings are active.
  19. So VIOS IP on VLAN is OK, even if VLAN is on the SEA?
    • We are not encouraging IP addresses on the SEA.
    • VIOS best practice is to have 2 virtual Ethernet adapters per Virtual network in the VIOS.
    • One is used to create the SEA (bridge)
    • The other one has the IP address on it (for access).
    • This configuration increases performance.
  20. As for the iSCSI on VIOS - wouldn't it be better to use iSCSI LUNs on LPARs directly (AIX/Linux) rather than iSCSI-VIOS-vscsi-Client?
    vSCSI would introduce more code layers and could impact performance IMHO.  It is like vSCSI vs NPIV.
    • iSCSI on the VIOS makes the VIOS client a regular generic simpler vSCSI client with no iSCSI hardware requirements.
    • IMHO vSCSI has minimal impact.
    • You sound like an iSCSI guru - we need to learn more about iSCSI ourselves.
  21. Is there any way to install automatically any 3rd party MPIO during viosupgrade
    • Nope.  I guess NIM gurus using NIM viosupgrade could supply a script to do that.
    • The trend for years is moving to AIX MPIO.
  22. Do we need to download and extract anything (other than the VIOS 2.2.6.<latest-version>) to run the viosupgrade?
    • The VIOS viosupgrade command arrives in the VIOS 2.2.6.<latest-version> update.
    • There is nothing else to download.
  23. How do I get from the VIOS installation ISO to the 3.1 mksysb?
    • See the section "Script to extract the two DVDs .iso to make a single mksysb" - which includes extracting it from the Flash VIOS installation image
  24. Is the password for the padmin user reset on upgrade? And reset when restored?
    • The padmin password is not in the VIOS metadata backup.
    • I just set it on the first login prompt to the previously used password.
  25. Is the VIOS 3.1 default SMT8?
    • Can't imagine why you need to know.
    • It is set by the rules for VIOS 3.1.  It is set to SMT=8 on my POWER9 server.
    • On older POWER hardware that does not support SMT=8, it is set depending on the hardware.
  26. What are the IVM replacement options?
    • HMC or a virtual HMC.
  27. Is there is any compatibility with PowerVC?
    • PowerVC communicates with the VIOS by using the HMC - all current HMC versions are supported with PowerVC and VIOS.
  28. Is the IBM Tivoli Monitoring config preserved?
    • The viosbr backup command does not include IBM Tivoli Monitoring configuration or data
    • Save that data and restore it yourself. This is the same as installation of a new server and new pair of VIOS.
    • Or if you are not worried about missing data, follow your fresh installation of VIOS procedure.
  29. Maybe the command rulescfgset sets the rules after installation of 3.1?
    • Yes it sure does - just like on VIOS 2.2
  30. Is this mksysb image created from the latest VIOS 3.1 image?
    • Yes, it is extracted from the VIOS 3.1 installation .iso image
  31. Does it have any HMC minimal version?
    • Nope.
    • The HMC is dependent much more on the Server system firmware.
  32. Is there a system firmware minimum level requirements before upgrading to VIOS 3.1?
    • Nope, but always use the latest firmware, or the n-1 version if you are conservative.

Additional Information

Other places to find content from Nigel Griffiths IBM (retired)


Document Information

Modified date:
07 July 2023