Updating the IBM VAAI driver in ESXi
anthonyv
VMware vSphere 4.1 brings in a brilliant new function to offload storage-related workloads. This function is called VAAI (vStorage APIs for Array Integration) and requires that your SAN storage supports VAAI and that your ESX or ESXi server has a driver installed to utilize it.
IBM first supported VAAI on the IBM XIV using an IBM-supplied VAAI driver. IBM then added support for the Storwize V7000 and SVC, and has now released a new VAAI driver that supports all three products at once. You can find the driver on IBM's support site.
I discovered some quirks in the process of updating the IBM VAAI driver from version 184.108.40.206 to version 220.127.116.11 on VMware ESXi. The benefit of moving to version 18.104.22.168 is that the updated driver supports the IBM XIV as well as the Storwize V7000 and IBM SVC.
I downloaded the new driver, which uses the following naming convention:
Version 22.214.171.124 is named IBM-
The last six digits in the file name are what differentiate the two versions. However, when I ran the --query command against an ESXi box, I got confused:
vihostupdate.pl --server 10.1.60.10 --username root --password passw0rd --query
---------Bulletin ID--------- -----Installed----- ----
Both the uplevel and downlevel VAAI driver files start with
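Since the bundles share the same long prefix and differ only in those trailing digits, one way to tell two downloads apart at a glance is to strip everything but the digit suffix. This is just a convenience sketch; the file name below is a placeholder, not the real IBM bundle name.

```shell
#!/bin/sh
# Print the run of digits just before the .zip extension, which is the
# only part of the bundle file name that differs between driver versions.
# The file name used below is a placeholder.
suffix() {
  echo "$1" | sed 's/.*[^0-9]\([0-9][0-9]*\)\.zip$/\1/'
}

suffix "IBM-vaai-driver-155824.zip"
```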
vihostupdate.pl --server 10.1.60.11 --username root --password passw0rd --scan --bundle IBM-
To perform the upgrade I first used vMotion to move all guests off the server I was upgrading. I then placed the server in maintenance mode and installed the new driver:
vicfg-hostops.pl --server 10.1.60.11 --username root --password passw0rd --operation enter
vihostupdate.pl --server 10.1.60.11 --username root --password passw0rd --install --bundle IBM-
I got the following messages:
Please wait patch installation is in progress ... The update completed successfully, but the system needs to be rebooted for the changes to be effective.
I then rebooted the server and finally took it out of maintenance mode:
vicfg-hostops.pl --server 10.1.60.11 --username root --password passw0rd --operation reboot
vicfg-hostops.pl --server 10.1.60.11 --username root --password passw0rd --operation exit
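Collected together, the whole upgrade sequence looks like the sketch below. The host, credentials, and bundle file name are lab placeholders (the real bundle name comes from the IBM download), and the script only prints each command so you can review the sequence before running anything against a host.

```shell
#!/bin/sh
# Sketch of the full upgrade sequence from this post: enter maintenance
# mode, install the driver bundle, reboot, exit maintenance mode. Host,
# credentials, and bundle name are placeholders; commands are printed,
# not executed.
HOST=10.1.60.11
USER=root
PASS=passw0rd
BUNDLE=IBM-vaai-driver.zip   # placeholder - use the real IBM bundle name

ENTER="vicfg-hostops.pl --server $HOST --username $USER --password $PASS --operation enter"
INSTALL="vihostupdate.pl --server $HOST --username $USER --password $PASS --install --bundle $BUNDLE"
REBOOT="vicfg-hostops.pl --server $HOST --username $USER --password $PASS --operation reboot"
EXIT_MM="vicfg-hostops.pl --server $HOST --username $USER --password $PASS --operation exit"

for CMD in "$ENTER" "$INSTALL" "$REBOOT" "$EXIT_MM"; do
  echo "$CMD"
done
```

Remember the reboot: the installer's own message says the update takes effect only after a restart, so the reboot step is not optional.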
There are no commands needed to activate VAAI or claim VAAI-capable devices in ESXi. You simply need to confirm that both boxes shown in the example below contain the number 1 (one for hardware-accelerated move and one for fast init):
To test VAAI I normally do a storage migration (storage vMotion), moving a VMDK between datastores on the same storage device. What you should see is very little VMware-to-storage I/O, as I depicted in two earlier blog posts.
My colleague Alexandre Chabrol from Montpellier Benchmarking Center also helped me out with the ESXCLI commands to control VAAI. We can confirm the state of each of the three VAAI functions and switch them off and on.
esxcfg-advcfg.pl --server 10.1.60.11 --username root --password password -g /Dat
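For reference, the three advanced settings involved are the standard ESXi 4.1 ones: /DataMover/HardwareAcceleratedMove (full copy), /DataMover/HardwareAcceleratedInit (block zeroing), and /VMFS3/HardwareAcceleratedLocking (hardware-assisted locking). The sketch below prints the query (-g) and set (-s) commands for each, using the lab host details from this post; it does not touch a host itself.

```shell
#!/bin/sh
# The three ESXi 4.1 advanced settings that govern VAAI. This prints the
# vSphere CLI commands to query each setting (-g) and to turn one off
# (-s 0) or back on (-s 1). Host and credentials are the lab values
# used throughout this post.
HOST=10.1.60.11
USER=root
PASS=passw0rd
BASE="esxcfg-advcfg.pl --server $HOST --username $USER --password $PASS"

OPTS="/DataMover/HardwareAcceleratedMove /DataMover/HardwareAcceleratedInit /VMFS3/HardwareAcceleratedLocking"

for OPT in $OPTS; do
  echo "$BASE -g $OPT"        # query the current state (1 = enabled)
done

# Example: disable, then re-enable, hardware-accelerated move
echo "$BASE -s 0 /DataMover/HardwareAcceleratedMove"
echo "$BASE -s 1 /DataMover/HardwareAcceleratedMove"
```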
Final thought: Most, if not all, of these commands can be done via the vSphere Client GUI; you do not need to use the CLI. But I am surprised by how many people like to use the CLI and want to see example syntax. Got a preference yourself? I would love to hear about your experiences.
*** Update February 20, 2012 ***
The IBM Storage Device Driver for VMware VAAI was updated to version 126.96.36.199 in February 2012. This new version fixes a rare case where XIV, Storwize V7000, or SVC LUNs are not claimed by the IBM Storage device driver. If you are using version 188.8.131.52 without issue, there is no need to upgrade. I have updated this post to reflect the new version.