IBM Support

Upgrading to VIOS 3.1

How To


Lots of information to help you make the move to VIOS 3.1 quickly and accurately, including hints and tips.



Helping everyone to upgrade their VIOS in function, performance and reliability.


All Power servers running PowerVM and Virtual I/O Servers.


If you don't know VIOS is short for Virtual I/O Server then you are probably on the wrong page :-)


  • Latest versions are
    • VIOS - The latest VIOS 2.2.6 version is ALWAYS the RECOMMENDED version to upgrade to before going to 3.1. 
      This is referred to below as 2.2.6.LATEST.
  • The VIOS supported software that you are allowed to install without invalidating your IBM VIOS support is listed here
    • Web page: VIOS Supported Applications (link fixed)
    • Now copied to the AIXpert Blog, as it was nearly impossible to find and was still on the developerWorks pages, which are overdue for removal.
NEW NEWS - Sept 2019

Damien Ferrand (IT Consultant and VIOS administrator) points out

  • When using the VIOS viosupgrade command, upgrading to an intermediate VIOS 3 release does not work.  You have to upgrade to the current latest VIOS 3 release.  The problem is that the newer release has some package versions higher than the older one.
  • Rule of Thumb: Always upgrade straight to the very latest VIOS 3.1
  • Thanks to Damien for the excellent feedback

SDDPCM removal - moving to VIOS 3.1 is a good opportunity to remove this old driver

  • Prerequisite OS Levels
    • AIX 7.1 TL3
    • AIX 7.2 any TL
    • VIOS
  • Comment
    • The AIXPCM has evolved to include all significant functionality provided by SDDPCM.
    • The AIXPCM has the advantage over SDDPCM of not requiring specific maintenance.
    • All updates are applied as part of the Operating System Maintenance.
  • More info:
  • Latest Multipath Subsystem Device Driver User's Guide: 

VIOS 3.1 - New Functions

  • See Bob G Kovacs' session on VIOS 3.1
  • Slide deck of 35 slides and Replay of 90 minutes
  • POWER/AIX Virtual User Group on October 25th 2018
  • The Big Item is that the VIOS moves to AIX 7.2 TL3; smaller features include:
    • The AIX update means the latest kernel and device drivers, and so better performance
    • iSCSI support via vSCSI
    • General clean up by removing unused packages for a smaller foot print and faster upgrades
    • Shared Storage Pool further RAS improvements like repository disk auto replace

Upgrading to VIOS 3.1

  • See Nigel Griffiths' session on Upgrading to VIOS 3.1
  • Slide deck of 100 slides and Replay of 90+ minutes from YouTube
  • Power Virtual User Group on November 21st 2018

Below is a summary of the key charts from the Upgrading to VIOS 3.1 webinar and, at the bottom, the Questions and Answers from that session

Upgrading strategies 1 - Just don't Upgrade at All

  1. New POWER9 server?
    • Start on VIOS 3.1 & set up as normal
  2. Zero risk upgrade plan
    1. Evacuate the Server with Live Partition Mobility (LPM) -> no LPARs on the Server = no risk
    2. Install a fresh VIOS 3.1 then set up as normal
    3. LPM all the LPARs back
  3. Shared Storage Pool (SSP)
    1. LPM SSP LPARs from the server
    2. Remove this server's VIOS from the SSP (cluster -rmnode)
    3. Then fresh install VIOS 3.1 then set up as normal
    4. Add server back into the SSP (cluster -addnode)
    5. Then LPM LPARs back 
    6. TESTED AND GOOD TO KNOW:  A newly installed VIOS 3.1 can join a Shared Storage Pool of VIOS running 2.2.6.LATEST
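The SSP steps above boil down to remove node, reinstall, add node. A minimal sketch that composes the padmin commands (the cluster name mycluster is an example; redvios2 matches the example VIOS name used later in this article):

```shell
# Compose the padmin SSP cluster commands used around the reinstall.
# Cluster name and VIOS hostname below are examples.
ssp_cmd() {   # usage: ssp_cmd rmnode|addnode <clustername> <vios-hostname>
    echo "cluster -$1 -clustername $2 -hostname $3"
}

ssp_cmd rmnode  mycluster redvios2    # run BEFORE the fresh VIOS 3.1 install
ssp_cmd addnode mycluster redvios2    # run AFTER VIOS 3.1 is up again
```

On the real VIOS you would run the printed `cluster` commands as padmin, not the helper function.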

VIOS 3.1 Pre-req

  • POWER9, POWER8 and POWER7+ (known as the D models) all fully supported
  • POWER7, POWER5 and older servers are not supported - they probably work fine (there is no code to block them working) but are not supported nor tested by IBM

  • VIOS 2.2.5 - one more year of support
  • VIOS 2.2.4 - support ends Q4 2018 - please update/upgrade "As soon as possible" and you might as well go to VIOS 3.1 while you have one of your dual VIOS down
  • VIOS 2.2.3 or earlier - a very dumb idea


  • “Update”  is a minor  improvement 2.2.5 to 2.2.6
  • “Upgrade”  is a major improvement 2.2.6 to 3.1
  • VIOS 2.x.x.x based on AIX 6.1.  For example: VIOS 2.2.6.LATEST = AIX 6.1 TL5 sp9 + VIOS packages
  • VIOS 3.1 based on AIX 7.2 TL3 sp2 
  • We all know: AIX 6.1 to AIX 7.2 upgrade means a complete disk reformat and overwrite install, it’s the same for VIOS 2 to 3

“Traditional” Upgrade - the high-level process

All VIOS upgrades (VIOS 2 to VIOS 3) involve 

  1. Backup the VIOS metadata and configuration 
    • Save the backup remotely
  2. Backup any non-VIOS application data - save to a remote place
    • That are in the rootvg
    • Warning: it is easy to forget the vDVD Optical Library and any LPAR virtual disks based on rootvg LVs or file-backed devices
  3. Install from the VIOS 3.1 mksysb - this rebuilds the rootvg + file systems
    • Possibly on an alternative disk
  4. Get the VIOS backup & restore the metadata / configuration
  5. Reinstall any non-VIOS application code and data 
  6. If the VIOS is using the SSP additional steps are required 

Note: various VIOS or NIM commands will automate some parts
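Steps 1 and 2 can be sketched as a short padmin session. The backup naming convention, the remote host name nimserver and the tar paths are illustrative assumptions, not from the article:

```shell
# Steps 1-2 sketch: name the VIOS metadata backup, then save it remotely.
# The naming convention and remote host "nimserver" are examples only.
backup_name() {
    # A dated, per-host viosbr backup name, e.g. viosbr_redvios2_20181121
    echo "viosbr_$(hostname)_$(date +%Y%m%d)"
}

backup_name
# viosbr -backup -file "$(backup_name)"                        # step 1: metadata
# scp /home/padmin/cfgbackups/"$(backup_name)".tar.gz nimserver:/backups/
# tar cvf /tmp/extra_apps.tar /home/padmin /usr/lbin           # step 2: non-VIOS files
```

The commented lines show where the real `viosbr` and copy-off commands would go on the VIOS.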

NIM and new upgrade commands

  • Preparation: NIM server must be at  AIX 7.2 TL3 sp2+
  • Two New commands to help us upgrade
  • viosupgrade command – there’s two of them!
    • On the NIM server (arrives when you update to AIX 7.2 TL3 sp1+)
    • On the VIOS (arrives when you update to VIOS 2.2.6.LATEST)
    • Each has a different syntax – doh!
Not the same as the updateios command – that’s for minor updates!

Manual pages

  1. Top Level IBM KnowledgeCenter page
  2. Vague Introduction to methods
  3. Non-SSP (Shared Storage Pool)
  4. SSP (Shared Storage Pool)
  5. Misleading levels
  6. Miscellaneous
  7. Unsupported
  8. What's new in Virtual I/O Server commands

  9. Virtual I/O Server release notes – including USB Memory or Flash key install
    • USB Memory key or Flash key install
    • Duff minimum size for a VIOS
  10. VIOS viosupgrade command in VIOS 2.2.6.LATEST
  11. NIM viosupgrade command on the NIM AIX 7.2 TL3 + sp
    • That one is hard to find – it is in the AIX Commands Reference for AIX 7.2

iSCSI starter pack

Download the VIOS 3.1 installation software

Preparing for NIM

  • If using NIM then you know getting hold of the mksysb file is a pain!
    • You need to extract the mksysb image from the .iso
    • Just to make life interesting, it is in 2 parts, one on each of the DVDs
    • Then “cat” them together
    • Then you have to manually add the service pack
  • Flash image is already at level (includes .10 = first sp)
    • I could not extract the mksysb by loopmounting the .iso on AIX
    • Errors:  “can’t open the readable file!”
    • But I could extract on a Linux loop-mount (Ubuntu 18.04 on Power)
    • UPDATE: The flash .iso is udfs format (for USB memory key/thumb drive) and not the older CD/DVD cdrfs format.
      Thanks to Marc-Eric Kahle/Germany/IBM and FrankKruse
      New note: this udfs option is a new-ish AIX feature; my AIX 7.1 TL2 can't mount udfs format but my AIX 7.2 TL3 can
      •   loopmount -i VIOS_31010_Flash.iso -m /mnt -o "-V udfs -o ro"

Script to extract the one Flash DVD .iso mksysb's (at your own risk = don't blindly run this command)

  # On AIX 7.2 TL3 or above (definitely fails on AIX 7.1 TL2)
  # Note: udfs for a Flash image
  loopmount -i flash.iso -o "-V udfs -o ro" -m /mnt
  DIR=/usr/sys/inst.images
  ls -l /mnt$DIR/mksysb_image                   # display the file
  cp    /mnt$DIR/mksysb_image VIOS31_mksysb_image

Script to extract the two DVD .iso to make a single mksysb (at your own risk = don't blindly run this command)

  # Assuming the DVDs are in this directory and with these names
  DVD1=/tmp/dvdimage.v1.iso
  DVD2=/tmp/dvdimage.v2.iso

  mkdir /mnt1
  mkdir /mnt2

  # Watch out for those double quotes and "-" characters. Note: cdrfs for a DVD
  loopmount -i $DVD1 -o "-V cdrfs -o ro" -m /mnt1
  loopmount -i $DVD2 -o "-V cdrfs -o ro" -m /mnt2

  df        # to show they got mounted OK

  DIR=/usr/sys/inst.images
  ls -l /mnt1$DIR/mksysb*  /mnt2$DIR/mksysb*    # display the files
  cat   /mnt1$DIR/mksysb*  /mnt2$DIR/mksysb*  > VIOS31_mksysb_image

  umount /mnt1
  umount /mnt2

Pathways to VIOS 3.1

[Diagram: pathway - non-SSP]

[Diagram: pathway - SSP]

[Diagram: pathway - Nigel's choice]

VIOS viosupgrade command

  • Read the manual page on IBM KnowledgeCenter
  • Mandatory: an unused disk and VIOS 2.2.6.LATEST
  $ viosupgrade -l -i image_file -a mksysb_install_disk [ -c ] [-g filename ]

First parameter is a lowercase L for “local”
For humans:

  • Non-SSP:
      viosupgrade -l -i /tmp/vios31.mksysb -a hdisk33 -g /tmp/filelist
  • With SSP:
  viosupgrade -l -i /tmp/vios31.mksysb -a hdisk33 -g /tmp/filelist -c
  • -g file of file names that get copied for you to the new disk = neat!
  • -c = cluster = Shared Storage Pool
  • List the state of the upgrade with:
      viosupgrade -l -q
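The mandatory unused disk can be picked straight out of `lspv -free` output. A sketch using sample output (the column layout mirrors the VIOS command; the disk names are examples):

```shell
# Select the first free hdisk from `lspv -free`-style output.
# On a real VIOS (as padmin):  lspv -free | first_free_disk
first_free_disk() {
    awk 'NR > 1 && $1 ~ /^hdisk/ { print $1; exit }'   # skip the header line
}

# Sample output for illustration only
sample="NAME            PVID                                SIZE(megabytes)
hdisk33         none                                51200
hdisk34         none                                51200"

printf '%s\n' "$sample" | first_free_disk    # prints: hdisk33
```

The selected name can then be handed to `viosupgrade -l ... -a`.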

For altinst_rootvg "already exists" errors

These will be useful

  •   # alt_rootvg_op -X altinst_rootvg
      Bootlist is set to the boot disk: hdisk0 blv=hd5


  •   $ exportvg altinst_rootvg
      $ importvg -vg rootvgcopy hdisk1

For disk busy or in-use errors


These commands might be useful.

List all the disks and repository in use by a Shared Storage Pool - DO NOT FIDDLE OR USE THESE DISKS

  $ lscluster -d

List the disks that are definitely not in use.

  $ lspv -free

ODM might be tracking their previous use or the disk header is indicating its use
If you have POWER9 NVME,  try

  $ lspv -unused

One option is to (as root) zap the front of the disk (safer than the dd command).

  # cleandisk -?

Read and you decide how to screw up your disks!

Also, worth knowing about 

  # chpv -C hdisk3


  • -C option:  Clears the owning volume manager from a disk. This flag is only valid when running as the root user. This command might fail to clear LVM as the owning volume manager if the disk is part of an imported LVM volume group.

Regression Test – for the virtual networks & virtual disks

This section should generate the details that you need to check.

The initial list is:

  1. Virtual disks:
       lsmap -all
  2. Virtual networks:
       lsdev | grep SEA ; lsdev -dev <your-SEA> -attr
  3. The VIOS is on the network, and the IP settings: as root
      ifconfig -a
  4. Useful scripts that the system admin team use (like my 'n tools' for SSP),
  5. Set up the automatic viosbr backup, sending the backups to a remote server,
  6. Note the padmin users and password.
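A small sketch of capturing that reference output before the upgrade so it can be diffed afterwards. The snapshot directory and command list are examples (run the real VIOS commands as padmin):

```shell
# Save reference output of a few commands into one snapshot directory.
# Directory and command list are examples; extend with your own checks.
SNAP=/tmp/pre31_snapshot
mkdir -p "$SNAP"
for cmd in "lsmap -all" "lsdev" "ioslevel"
do
    out="$SNAP/$(echo "$cmd" | tr ' ' '_').txt"   # e.g. lsmap_-all.txt
    : > "$out"                                    # make sure the file exists
    $cmd >> "$out" 2>&1 || true                   # keep going on failure
done
ls "$SNAP"
```

After the upgrade, run the same loop into a second directory and `diff -r` the two.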

Get the important files moved over for you

VIOS viosupgrade command has a nice -g filename option.  This file contains full file names of files that you would like it to copy to the alternative disk once installed with VIOS 3.1.  This can save you a lot of time.

The files get copied to /home/padmin/backup_files/<then the full filename you requested>

Here are my suggestions

  1. /etc/motd
  2. /etc/environment
  3. /etc/netsvc.conf
  4. /etc/resolv.conf
  5. /etc/hosts
  6. /etc/inittab
  7. /etc/ntp.conf  
Some "daft people" add empty lines or comments - don't go there!
If using LDAP, save the configuration files
If using local user and password controls:
  1. /etc/group
  2. /etc/passwd
  3. /etc/security/passwd
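The list above can be turned into the -g file mechanically, which also guarantees no blank lines or comments creep in. The /tmp/filelist path is an example:

```shell
# Build the file handed to `viosupgrade -g`: one full path per line,
# no comments, no blank lines.  /tmp/filelist is an example path.
FILELIST=/tmp/filelist
: > "$FILELIST"
for f in /etc/motd /etc/environment /etc/netsvc.conf /etc/resolv.conf \
         /etc/hosts /etc/inittab /etc/ntp.conf
do
    # Only list files that actually exist on this VIOS
    if [ -f "$f" ]; then
        echo "$f" >> "$FILELIST"
    fi
done
cat "$FILELIST"
```

Add your LDAP or local user/password files to the loop as needed.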

Output of the following commands:

For padmin user and for the root user:

   crontab -l 
  lsmap -all
  lsdev
  lsdev -dev
  for i in $(lsdev -dev hdisk* -field name)     # disks and attributes
  do
      echo ==========  $i
      lsdev -dev $i -attr
  done

  for i in $(lsdev -dev en* -field name)        # networks and attributes
  do
      echo ==========  $i
      lsdev -dev $i -attr
  done

If you have Nigel's SSP tools:

  1. /home/padmin/ncluster
  2. /home/padmin/nlu
  3. /home/padmin/nmap
  4. /home/padmin/npool
  5. /home/padmin/nslim

Save any nmon or topas output file you might want to compare later for a performance check

  tar cvf /tmp/var_perf_daily.tar  /var/perf/daily/*

Double check for any local useful script in:

  • /home/padmin
  • /usr/lbin

Not for use but for reference:

  • /etc/tunables/nextboot
  • /etc/tunables/lastboot

With an SSP

Make sure you have a running SSP cluster to join back to after you viosupgrade each VIOS.

NIM viosupgrade 

Check the manual page in IBM KnowledgeCenter or on the NIM server at AIX 7.2 TL3 +
  To perform the bosinst type of upgrade operation, use the following syntax:

      viosupgrade -t bosinst -n hostname -m ios_mksysbname
          -p spotname {-a RootVGCloneddisk: ... | -r RootVGInstallDisk: ... | -s}
          [-b BackupFileResource] [-c] [-e resources: ...] [-v]

  To perform the altdisk type of upgrade operation, type the following command:

      viosupgrade -t altdisk -n hostname -m ios_mksysbname
          -a rootvgcloneddisk [-b BackupFileResource] [-c] [-e resources: ...] [-v]

  KEY
  -t bosinst   (overwrites the rootvg)
  -t altdisk   (makes rootvg on a new alternative disk and
                leaves the original rootvg untouched, renamed old_rootvg)
  -n redvios2
  -m VIOS31_mksysb
  -p VIOS31_spot
  {-a RootVGCloneddisk: ... | -r RootVGInstallDisk: ... | -s}
  [-b BackupFileResource] [-c] [-e resources: ...] [-v]
  -c   This is an SSP VIOS
  -v   Validate the install parameters and VIOS state

On the NIM server

   viosupgrade -v  

Key: v = validate

Nice feature to confirm you have everything set up right before the upgrade.

It makes a dozen checks or so and the details are saved in a log file.

  # viosupgrade -v -t altdisk -n redvios2 -m vios31010_mksysb -a hdisk3 -c
  Welcome to viosupgrade tool.
  Triggered validation..
  Check log files for more information,
  Log file for 'redvios2' is: '/var/adm/ras/ioslogs/redvios2_10289506_Fri_Nov_16_10:29:13_2018.log'.
  Please wait for completion..
  -----------------------------------
  Validation successful for VIO Servers:
  redvios2
  -----------------------------------

  # viosupgrade -v -t bosinst -n redvios2 -m vios31010_mksysb -p vios31010_spot -a hdisk3 -c
  Welcome to viosupgrade tool.
  Triggered validation..
  Check log files for more information,
  Log file for 'redvios2' is: '/var/adm/ras/ioslogs/redvios2_10289514_Fri_Nov_16_10:41:06_2018.log'.
  Please wait for completion..
  -----------------------------------
  Validation successful for VIO Servers:
  redvios2
  -----------------------------------
  #

NIM Resources Required for the VIOS

On a regular NIM install of a new VIOS, you create a NIM machine for the VIOS. That does not work for a VIOS upgrade.  

[Diagram: the wrong, old NIM way]

You need to define a Managed Resources:

  • HMC,
  • CEC (server) and
  • the VIOS as a VIOS (not a NIM machine resource)

UPDATE: Comment on the following statement: "No idea what a Password file is?"

  • Well I have a friend that does :-)  Marc-Eric Kahle in Germany
  • This is a NIM tool to allow remotely getting a console from the NIM server.  Not tried this method myself yet.

  • You need to install DSM.core on the NIM Master and run:

      # dpasswd -f /export/nim/passwd/7063CR1hmc_passwd -U hscroot -P abc1234
      Password file is /export/nim/passwd/7063CR1hmc_passwd
      Password file created.
  • Download dsm.core?
    See this link for details: 



To get a VIOS adopted by a NIM server, on the VIOS as padmin run

  •   remote_management [ -interface Interface ] Master
      remote_management -disable
  • To enable remote_management by the NIM master (here called nim32), type:
  •   $ remote_management -interface en0 nim32
      nimsh:2:wait:/usr/bin/startsrc -g nimclient >/dev/console 2>&1
      0513-059 The nimsh Subsystem has been started. Subsystem PID is 11337892.
  • To disable remote_management, type:
  •   remote_management -disable

The NIM viosupgrade command can then find the named VIOS as a NIM resource and not a NIM machine.

Log files - just in case

  • NIM viosupgrade tells you about the log file it is using
  •   viosupgrade -l -q 
  • This command outputs the current or final state
  • Log files for debug purpose:
    • On NIM server:  viosupgrade command from NIM
      • viosupgrade command logs: /var/adm/ras/ioslogs/*
      • NIM Command Logs:  /var/adm/ras/nim*
    • On VIOS: viosupgrade command from VIOS 
      • viosupgrade command logs: /var/adm/ras/ioslogs/*
      • viosupgrade restore logs:  /home/ios/logs/viosup_restore.log
      • viosupgrade restore logs:  /home/ios/logs/viosupg_status.log
      • viosbr backup logs: /home/ios/logs/backup_trace*
      • viosbr restore logs:  /home/ios/logs/restore_trace*

Fifth method: Disruptive VIOS 3.1 upgrade with SSP 

Alternative whole-SSP-down method – highly disruptive – not for Production

  • Upgrade to VIOS 2.2.6.LATEST
  • On each VIOS: viosbr -backup  -> Save off the VIOS
  • Stop ALL LPARs
  • Stop ALL VIOS in the entire pool
  • Then, for each SSP VIOS in turn
    • Upgrade VIOS with complete overwrite
    • viosbr -recover  -> including SSP backup

Nigel’s Ultra Blunt Opinion about Production Upgrades

  • In my humble opinion:
    • Fresh install VIOS 3.1 for new servers – a no brainer.
    • Do not upgrade your Production to VIOS 3.1 until mid Q1 2019 to allow any bug fixes to arrive.
  • Why?
    • Messing up your Production VIOS & its LPARs is extremely painful
  • In the meantime run tests on:
    • The upgrade process
    • Prepare to backup / reinstall your non-VIOS applications & data
    • New features of VIOS 3.1 – in particular the new iSCSI features (if you like iSCSI)
  • I am sure once at VIOS 3.1 that it all works fine

VIOS Tuning Options

  • Any AIX 6.1 tuning option disappears during the install and upgrade
    • So all the old mistakes are wiped out = a good thing!
  • AIX development reset the tuning defaults in AIX 7.1 & 7.2 for best performance
    • AIX 7.2 tuning options are different and more of them 
    • AIX6.1 tuning does NOT apply to AIX 7.2
    • Do not just apply your old VIOS 2 tune-up script on VIOS 3
  • You may:
    • Monitor performance for a week to double check
    • Run the VIOS advisor – the “part” command – to see what it suggests
    • If you have a problem:   Raise a PMR - have a VIOS snap and perfPMR ready
    • Don’t start randomly adding AIX 6 tuning 

Summary: VIOS 3.1 Upgrade in practice – in general

  • New Features: AIX 7.2, RAS, performance + iSCSI LUN to vSCSI 
    • Nice to have but not massive functional differences 2.2.6.LATEST  to 3.1
    • Everything works the same as before
  • Clean slate: fresh overwrite install VIOS flushes out the “crufty”
  • It is your job to handle the extra applications (code & data)
  • You run Dual VIOS for RAS & to make upgrades simple
  • Don’t upgrade at peak loads
  • Think about a simple regression test before you start. 
  •  "Backup, backup, and backup"
    • At least two of the following methods: disk clone, mksysb of the whole VIOS, viosbr, alternative disk install
  • Practice before upgrading Production

Summary: VIOS 3.1 Upgrade in practice – on the day

  1. Don’t do it – fresh install a new server or evacuate old servers first
  2. Regardless of the method: viosbr -backup & save off the VIOS
  3. Always install to an alternate disk
  4. Don’t forget the non-VIOS apps (code and data) and rootvg virtual disk or virtual DVD
  5. Preferred methods [[in Nigel’s opinion ]]
    1. For SSP users, always use method 2
    2. updateios to 2.2.6.LATEST, then NFS the VIOS 3.1 mksysb to the target VIOS & run the VIOS viosupgrade
    3. Traditional Manual is still a good method: backup, then scratch install the VIOS via DVD, USB flash drive, HMC or NIM, and recover the metadata 
    4. NIM viosupgrade -t bosinst (for SSP, get to 2.2.6.LATEST first)
    5. NIM viosupgrade -t altdisk – not tested successfully by Nigel yet (out of time)
  6. Run the regression test commands & compare with the saved output

Post upgrade Checks - NEW SECTION

On the VIOS - Blue parts are new and particularly for SSP users.

  1. Check the VIOS timezone with cfgassist; after setting it you have to reboot.
  2. Check the VIOS date - set it with
  3. Set up any network time protocol (NTP) server service.
  4. Set up /etc/netsvc.conf especially.
  5. If using SSP, you don't want a missing DNS to stop SSP communication; search locally for the hostname before DNS, for example:
    •   hosts = local, bind
  6. Setup /etc/hosts, if using SSP include all the VIOS in the SSP.
  7. Check the new VIOS level: 
  8. Check the paging space size and layout: (as root)
      lsps -a
  9. Check file system sizes - some like /tmp and /home might be back to the default size:
       df -g
  10. Check  users: (as root)
      lsuser ALL
  11. Your original Virtual Optical Library is gone - you need to re-create it and get the .iso images restored
  12. Check the tuning options: (as root):
    •   ioo -L
        lvmo -L
        nfso -L
        no -L
        raso -L
        schedo -L
        vmo -L

On your VIOS client LPARs/VMs:

  1. Check your disk paths:
    • Important to do this BEFORE upgrading your second VIOS of a pair
  2. Enable your paths, if not Enabled: (as root)
      lspath | awk '{print "chpath -s enable -l " $2 " -p " $3 }' | ksh
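A quick gate before touching the second VIOS of the pair: count the paths that are not Enabled and only proceed at zero. The sample `lspath` output below is illustrative:

```shell
# Count lspath lines whose STATUS column is not "Enabled".
# On a real client LPAR (as root):  lspath | failed_paths
failed_paths() {
    awk '$1 != "Enabled"' | wc -l
}

# Sample output for illustration only
sample="Enabled hdisk0 vscsi0
Failed  hdisk0 vscsi1"

printf '%s\n' "$sample" | failed_paths    # a non-zero count means: stop
```

Any non-zero count means fix the paths before upgrading the second VIOS.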

New section:  Hard-won experience

Disk Renumbering

  • The hdisk order in AIX and VIOS is determined by the order of discovery at initial installation time; later-added disks get higher hdisk numbers
  • VIOS 3.1 is a fresh installation
  • So if you added disks after the initial VIOS 2 installation time, expect the hdisks to be reordered during the upgrade
  • The viosupgrade command knows this and deals with the issue, but you might get a shock: you asked the command to use alternative disk hdisk38 but it becomes hdisk6
  • If you (for example) run viosupgrade -l -a hdisk2 . . . 
  • Once you boot VIOS 3.1, the old hdisk2 might still be hdisk2 or maybe a different hdisk number - this behaviour is expected.
  • Unless you have a small number of internal disks from the original install and no update history, expect a different hdisk number.
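Because hdisk numbers move, it is safer to record the disk's PVID before the upgrade and look the name up again afterwards. A sketch with sample `lspv`-style output (disk names and PVIDs are made up):

```shell
# Map a PVID back to its current hdisk name from `lspv`-style output.
# On a real VIOS (as root):  lspv | disk_by_pvid <pvid>
disk_by_pvid() {
    awk -v pvid="$1" '$2 == pvid { print $1 }'
}

# Sample output for illustration only
sample="hdisk0          00f6db0af78e4c22    rootvg          active
hdisk6          00f6db0a12345678    None"

printf '%s\n' "$sample" | disk_by_pvid 00f6db0a12345678    # prints: hdisk6
```

The PVID survives the reinstall; the hdisk number may not.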

VIOS mirror break, upgrade and remirror

  • Many of my VIOS 2 had rootvg mirrored and I broke the mirror
    • VIOS unmirrorios hdiskX
    • Note the named hdiskX is the disk which remains in use
    • Removed the empty disk from rootvg (cfgassist) and then upgraded to VIOS 3.1 naming the unused disk as the alternative disk for the VIOS 3.1 installation
  • Then when happy VIOS 3.1 is working OK, remirror by using the mirrorios command
  • I had many problems, as I got "that disk is in use"
  • I tried cleandisk, chpv -C hdiskX, creating a VG on it and removing the VG, etc., but it did not work - I created a PMR with IBM Support

VIOS with SSP pairs

  • If you have one VIOS of an SSP pair currently being upgraded and then run the VIOS viosupgrade -l . . . on the second VIOS, you get 
    "Cluster state in not correct"   (including the typographical error).
  • The translation is: You can't seriously want both VIO Servers of this pair to be down at the same time!
  • It is a good check but if you evacuated the server to upgrade the VIOSs, it means you can't do both at the same time.
  • As the viosupgrade take about 10 to 15 minutes on fast disks (SSD or NVMe) - it is not too bad.  If the first upgrade fails for some reason, at least you get to fix it and learn before you fail the 2nd VIOS.
  • You will have to wait until the first VIOS is upgraded, rebooted and operating in the SSP cluster again.

When you get to VIOS 3.1, I fully expect it to
be Rock Solid as normal for new VIOS releases

Questions and Answers - During the Upgrading to VIOS 3.1 Webinar

  • This section has not been officially reviewed.
  1. What about application certifications? Is VIOS 3.1 certified by SAP HANA?
    • Ask SAP or the application vendor about their certification.
    • It is not IBM's responsibility and IBM does not make statements on their behalf.
  2. Is it still possible to backup VIOS 3.1 via backupios and mksysb and then restore via NIM (I assume NIM on AIX 7.2 is required)?
    • Yes.
  3. On VIOS 2.2, because it is based on AIX 6.1 it uses SMT-4 (if available on your hardware). Is it beneficial for VIOS 3.1 to enable SMT 4 or even 8 on POWER9?
    • Yes.
    • The SMT number is set automatically.  There is no need to fiddle.
    • With VIOS 3.1+ on POWER9, it uses SMT=8.
  4. Why has IVM been removed?
    • There is little remaining use of IVM with clients. 
    • Sure, there are some users, and we liked using IVM very much too, back in POWER5 days with servers like the p505 1U server.
    • IBM can't justify the development and testing costs. 
    • Some people think IBM is a bottomless pit of money and people time - it is not true.
    • We understand IVM users, love it and it is fast.
    • Upgrade to the latest VIOS 2.2.6.LATEST. Wait 3 years (with support) and then power off the server.  It is probably 8 years old by then and overdue for retirement and running unsupported everything.
  5. Do I need any special license to use VIOS 3.1 on servers where I am currently using VIOS 2.x?
    • Your PowerVM license covers the Hypervisor and VIOS and is not version specific, so you are entitled.
  6. Is VIOS IFL compatible for POWER 980?
    • VIOS is AIX so it does not use IFLs.
    • As far as I know, you can have a 100% IFL enabled server and run Linux LPARs as well as VIOS LPARs on it.
    • The VIOS, though based on AIX, is not considered an AIX LPAR for IFL usage; it is in its own separate category.
  7. I have dual VIOS servers (let's say A and B). I want to avoid upgrading (as it is problematic) but I have no LPM nor the possibility to create/move a VIOS partition (due to hardware). I would like to do a fresh installation of VIOS 3.1 on VIOS B. Is it possible to set up SEA failover/NPIV/maybe SSP as on the previous installation, so there would be mixed VIOS versions (A - 2.X and B - 3.1) for some time for testing purposes? Is it possible from a support and technical point of view?
    • Yes, that all works fine.
  8. Does VIOS 3.1 support vNIC?
    • Yes.
  9. Is LDAP supported on VIOS?
    • Yes and not changed with VIOS 3.1
    • You have to save the LDAP configuration and recover that yourself but that is very like what you do for each new server and new pair of VIOSs.
  10. The "flash" version has the full mksysb?
    • Correct
  11. Can you use the "flash" .iso if installing from an HMC?
    • Yes, you can use the flash .iso via HMC
    • I can't think of a reason why that would not work so expect that you can. BTW - if you plug a USB device into an HMC, you can use the lsmediadev CLI command to see where it is mounted.
  12. Is it possible to do "alternative disk" installation on additional disk so I can quickly boot back to 2.2.x?
    • Correct. Done by hand or by using any of the three viosupgrade options.
    • Don't get confused by the NIM viosupgrade -t altdisk  = that -t option name is confusing.
  13. Before upgrading the VIOS to 3.1 is there any compatibility check we need to do with respect for the SAN?
    • VIOS 2.2 and VIOS 3.1 use the SAN in the same way.
  14. Does this alternative disk install use a caching disk on the NIM server or NFS ? or do we have the choice
    • We are not familiar with the "caching disk" method. 
    • We use the standard NIM method of pushing a mksysb and spot to the client LPAR.
  15. Is "Rollback" possible if I choose the NIM viosupgrade bosinst mode?
    • Yes, you have an alternative disk fallback.  Using viosupgrade -t bosinst with the -a hdiskX option saves the original VIOS disk as a fallback option.
  16. Apparently, viosupgrade command is in 2.2.6.LATEST and running it upgrades the VIOS to 3.1 and AIX to 7.2. So somehow those ~5G of mksysb for fresh installation is in fact inside 2.2.6.LATEST itself?  Is my question clear.
    • Your question is as clear as mud.  VIOS 3.1 installation image downloaded from ESS includes AIX 7.2. Just like VIOS 2.2 installation image includes AIX 6.1.
    • In VIOS 2.2.6.LATEST, you have the viosupgrade command binary but no VIOS 3.1 installation image.
    • You have to download the VIOS 3.1 .iso image, extract the mksysb and hand the mksysb file to the viosupgrade command.
  17. Based on the list of files Nigel's example copied with the -g, I'm expecting that NTP configuration wouldn't be preserved by default either?
    • Correct.  It is exactly like overwriting the rootvg and recovering the VIOS metadata - that is exactly what it did.  Everyone needs to have clear documentation on the additional set up they use . . . because you have to do that for every new POWER server.
  18. I read in a doc that after upgrade to VIOS 3.1 through "viosupgrade" tool, disk attributes like "reserve_policy" and "queue_depth" will be set to defaults, i.e "no_reserve" and "1" respectively. If "viosupgrade" tool reboots VIOS at this situation, disk SCSI reservations affects client LPARs and client LPAR ends up in a hung state. How to overcome this situation?
    • UPDATE: We checked - the viosbr backup of the metadata seems to include these settings.
    • The first boot will not have the settings. The backup is restored - including the settings
    • The Second boot means the settings are active.
  19. So VIOS IP on VLAN is OK, even if VLAN is on the SEA?
    • We are not encouraging IP addresses on the SEA.
    • VIOS Best Practise is to have 2 virtual Ethernet adapters per Virtual network in the VIOS.
    • One is used to create the SEA (bridge)
    • The other one has the IP address on it (for access).
    • This configuration increases performance.
  20. As for the iSCSI on VIOS - wouldn't it be better to use iSCSI LUNs on LPARs directly (AIX/Linux) rather than iSCSI-VIOS-vscsi-Client?
    vSCSI would introduce additional layers and might impact performance IMHO.  It is like vSCSI vs NPIV.
    • iSCSI on the VIOS make the VIOS client a regular generic simpler vSCSI client with no iSCSI hardware requirements.
    • IMHO vSCSI has minimal impact.
    • You sound like an iSCSI guru - we need to learn more about iSCSI ourselves.
  21. Is there any way to install automatically any 3rd party MPIO during viosupgrade?
    • Nope.  I guess NIM gurus with a NIM viosupgrade could supply a script to do that.
    • The trend for years has been moving to AIX MPIO.
  22. Did you mention what needs to be downloaded/extracted (besides 2.2.6.LATEST) to run the viosupgrade?
    • The VIOS viosupgrade command arrives in the VIOS 2.2.6.LATEST update.
    • There is nothing else to download.
  23. How do you get from the VIOS installation ISO to the 3.1 mksysb?
    • See the section "Script to extract the two DVD .iso to make a single mksysb" - which includes extracting it from the Flash VIOS installation images
  24. Password for padmin user is reset when upgrade? And reset when restored?
    • I am sure the padmin password is not in the VIOS metadata backup.
    • I just set it on the first login prompt to the previously used password.
  25. Is the VIOS 3.1 default SMT8?
    • Can't imagine why you need to know.
    • It is set by the rules with VIOS 3.1 is SMT=8 on my POWER9 server.
    • Older POWER hardware that does not support SMT=8 is set appropriately.
  26. What are the IVM replacement options?
    • HMC or a virtual HMC.
  27. Is there is any compatibility with PowerVC?
    • PowerVC "talks" to the VIOS via the HMC - all current HMC versions are supported with PowerVC and VIOS.
  28. Is the IBM Tivoli Monitoring config preserved?
    • Have not looked into this issue but we are sure the viosbr backup command does not include IBM Tivoli Monitoring set up or data
    • So you will have to save that and restore it yourself - much like a new server and new pair of VIOS.
    • Or if you are not worried about missing data, follow your fresh installation of VIOS procedure.
  29. Maybe the command rulescfgset set rules after installation to 3.1
    • Yes it sure does - just like on VIOS 2.2
  30. Is this mksysb image created from latest VIOS3.1 image
    • Yes it is extracted from the VIOS 3.1 install .iso
  31. Does it have any HMC minimal version?
    • Nope.
    • The HMC is dependent much more on the Server system firmware.
  32. Is there any system firmware minimum requirements before upgrading to VIOS 3.1?
    • Nope, but always use the latest firmware, or the n-1 version if you are conservative.

- - - The End - - -


    Additional Information

    If you find errors or have a question, email me: 

    • Subject: Upgrading to VIOS 3.1
    • E-mail: n a g @ u k . i b m . c o m  



    Document Information

    Modified date:
    28 July 2020