VMware vSphere 4.1 introduces a brilliant new function to offload storage-related workload to the array. This function is called VAAI (vStorage APIs for Array Integration), and it requires that your SAN storage supports VAAI and that your ESX or ESXi server has a driver installed to utilize it.
IBM first supported VAAI on the IBM XIV using an IBM-supplied VAAI driver. IBM then added support for the Storwize V7000 and SVC, and has now released a new VAAI driver that supports all three products at once. You can find the driver, installation guide and release notes at this URL.
I discovered some quirks in the process of updating the IBM VAAI driver from the earlier XIV-only level to the new 1.2 level on VMware ESXi. The benefit of moving to version 1.2 is that the updated driver supports the IBM XIV as well as the Storwize V7000 and IBM SVC.
I downloaded the new driver from here; the driver files use the following naming convention:
From oldest to newest, the driver bundles are named:
IBM-ibm_vaaip_module-268846-offline_bundle-395553.zip
IBM-ibm_vaaip_module-268846-offline_bundle-406056.zip
IBM-ibm_vaaip_module-268846-offline_bundle-613937.zip
The last six digits of the file name are what differentiates them. However, when I ran the --query command against an ESXi box, I got confused:
Both the uplevel and downlevel VAAI driver files start with: IBM-ibm_vaaip_module-268846. So which one is installed, the old version or the new one? I ran the following command to confirm whether the updated bulletin applies (the one ending in 613937). This confirmed my ESXi server was running the older driver and needed an upgrade to the new one.
vihostupdate.pl --server 10.1.60.11 --username root --password passw0rd --scan --bundle IBM-ibm_vaaip_module-268846-offline_bundle-613937.zip
The bulletins which apply to but are not yet installed on this ESX host are listed.
---------Bulletin ID--------- ----------------Summary-----------------
IBM-ibm_vaaip_module-268846 vmware-esx-ibm-vaaip-module: ESX release
To perform the upgrade I first used vMotion to move all guests off the server I was upgrading. I then placed the server in maintenance mode and installed the new driver:
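As a minimal sketch (assuming the vSphere CLI vihostupdate.pl utility, the same server and credentials as the scan example above, and the bundle ending in 613937), the install step looks something like this:

vihostupdate.pl --server 10.1.60.11 --username root --password passw0rd --install --bundle IBM-ibm_vaaip_module-268846-offline_bundle-613937.zip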
There are no commands needed to activate VAAI or claim VAAI-capable devices in ESXi. You simply need to confirm that both boxes shown in the example below have the number 1 in them (for hardware accelerated move and for fast init):
To test VAAI I normally do a storage migration (storage vMotion) moving a VMDK between datastores on the same storage device. What you should see is very little VMware to Storage I/O, as I depicted in this blog post and this blog post.
My colleague Alexandre Chabrol from the Montpellier Benchmarking Center also helped me out with the ESXCLI commands to control VAAI. We can confirm the state of each of the three VAAI functions and switch them off and on. We use the -g switch to display them, the -s 0 switch to turn them off and the -s 1 switch to turn them on. In this example I first confirm that VAAI is active for hardware accelerated moves, hardware accelerated initialization (write zeros) and then hardware assisted locking. I then disable and re-enable hardware accelerated moves.
esxcfg-advcfg.pl --server 10.1.60.11 --username root --password password -g /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 1
esxcfg-advcfg.pl --server 10.1.60.11 --username root --password password -g /DataMover/HardwareAcceleratedInit
Value of HardwareAcceleratedInit is 1
esxcfg-advcfg.pl --server 10.1.60.11 --username root --password password -g /VMFS3/HardwareAcceleratedLocking
Value of HardwareAcceleratedLocking is 1
esxcfg-advcfg.pl --server 10.1.60.11 --username root --password password -s 0 /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 0
esxcfg-advcfg.pl --server 10.1.60.11 --username root --password password -s 1 /DataMover/HardwareAcceleratedMove
Value of HardwareAcceleratedMove is 1
Final thought: Most, if not all, of these commands can be done via the vSphere Client GUI; you do not need to use the CLI. But I am surprised by how many people like to use the CLI and want to see example syntax. Got a preference yourself? I would love to hear about your experiences.
*** Update February 20, 2012 ***
The IBM Storage Device Driver for VMware VAAI was updated again in February 2012. This new version fixes a rare case where XIV, Storwize V7000, or SVC LUNs are not claimed by the IBM Storage device driver. If you are running the previous version without issue, there is no need to upgrade. I have updated this post to reflect the new version.
Here is a little test. Check your documentation: Do you know how to power down and power up the equipment in your computer room? If you had to power off your site in a hurry would you know how? If you wanted to script a shutdown, could you do it?
Here are some hints and tips that might help you with some of my favourite products:
The process to power up or down your DS8000 is documented in the Information Center here.
If you want to script powering off a DS8000 storage unit, you can use the chsu -pwroff DSCLI command. This command will shut down and power off the specified DS8000 unit. Before powering off the unit, be careful to ensure that all I/O activity has been stopped. An example of the command is shown below. Your machine will have a different IP address, password and serial number to mine. Note that the serial number always ends in zero because we send the command to the storage unit.
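As a sketch (the HMC address, user ID, password and serial number here are placeholders; substitute your own values):

dscli -hmc1 10.0.0.1 -user admin -passwd passw0rd chsu -pwroff IBM.2107-75ABCD0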
The IBM Storage Management Console for VMware vCenter version 2.5.1 is now available for download and installation. This version supports XIV, SVC and Storwize V7000 as per the versions in the following table (the big change being support for version 6.2):
If you want to see a video showing the capabilities of the new console, check out this link.
After installing the console, you will get this lovely new icon:
Start it up and select the option to add new storage; you now get three choices:
If you're using SVC or Storwize V7000, you need to specify an SSH private key. This key MUST be in OpenSSH format. This caused me a problem, as I kept getting this message when trying to add my Storwize V7000 to the plug-in:
Unable to connect to 10.1.60.107. Please check your network connection, user name, and other credentials.
I could use the same IP address, userid and SSH private key to logon to the Storwize V7000 using putty, so I knew none of these things were wrong.
I reread the Installation Instructions closely and realized my mistake. It clearly states:
Important: The private SSH key must be in the OpenSSH format.
If your key is not in the OpenSSH format, you can use a certified
OpenSSH conversion utility.
I pondered what conversion utility I could use, then realized I had had the utility all along: PuTTYgen. I opened PuTTYgen, imported my private key (the .ppk file) and exported it in OpenSSH format. You don't need to do anything with the public key.
I was then able to add the Storwize V7000 by specifying the private SSH key exported using OpenSSH format.
Now I have both the IBM XIV and the Storwize V7000 in the vCenter plug-in and can get detailed information about, and manipulate, both. In this example I have highlighted the Storwize V7000, revealing the firmware level it is running.
I was tempted to detail all the many things you can do with the plug-in, but you're better off watching the video via this link.
So are you using the plug-in? Have you upgraded to version 2.5.1 yet? Comments very welcome!
I have received this question several times, so it's clearly something people are interested in.
The Storwize V7000 has two controllers known as node canisters. It's an active/active storage controller, in that both node canisters are processing I/O at any time and any volume can be happily accessed via either node canister.
The question then gets asked: what happens if a node canister fails, and can I test this? The answer to the failure question is that the second node canister will handle all the I/O on its own. Your host multipathing driver will switch to the remaining paths and life will go on. We know this works because a firmware upgrade takes one node canister offline at a time, so if you have already done a firmware update, then you have already tested node canister failover. But what if you want to test this discretely? There are four ways:
Walk up to the machine and physically pull out a node canister. This is a bit extreme and is NOT recommended.
Power off a node canister using the CLI (using the satask stopnode command). This will work for the purposes of testing node failure, but the only way to power the node canister back on is to pull it out and reinsert it. This is again a bit extreme and is not recommended. This is also different to an SVC, since each SVC node has its own power on/off button.
Use the CLI to remove one node from the I/O group (using the svctask rmnode command). This works on an SVC because the nodes are physically separate. On a Storwize V7000 the nodes live in the same enclosure and a candidate node will immediately be added back to the cluster, so as a test this is not that helpful.
Place one node into service state and leave it there while you check all your hosts. This is my recommended method.
First up, this test assumes there is NOTHING else wrong with your Storwize V7000. We are not testing multiple failures here. You need to confirm that the Recommended Actions panel, as shown below, contains no items. If there are errors listed, fix them first.
Once we are certain our Storwize V7000 is clean and ready for test, we need to connect via the Service Assistant Web GUI. If you have not set up access to the service assistant, please read this blog post first.
So what's the process?
Firstly, log on to the service assistant on node 1 and place node 2 into service state. I chose node 2 because normally node 1 is the configuration node (the node that owns the cluster IP address). You need to confirm you're connected to node 1 (check at top right), select node 2 (from the Change Node menu), then choose Enter Service State from the drop-down and hit GO.
You will get this message confirming you're placing node 2 into service state. If it looks correct, select OK.
The GUI will pause on this screen for a short period. Wait for the OK button to un-grey.
You will eventually get to this with Node 1 Active and Node 2 in Service.
Node 2 is now offline. Go and confirm that everything is working as desired on your hosts (half your paths will be offline but your hosts should still be able to access the Storwize V7000 via the other node canister).
When your host checking is complete, you can use the same drop-down to Exit Service State on node 2 and select GO.
You will get a pop up window to confirm your selection. If the window looks correct, select OK.
You will get the following panel. You will need to wait for the OK button to become available (to un-grey).
Provided both nodes now show as Active, your test is complete.
Over at SearchStorage.com.AU they recently published an article entitled Six reasons to adopt storage virtualisation. You can find the article here. The six given reasons are:
Storage virtualisation reduces complexity
Storage virtualisation makes it easier to allocate storage
Better disaster recovery
Better tiered storage
Virtual storage improves server virtualisation
Virtual storage lets you take advantage of advanced virtualisation features
It's a well-written article and I agree with every point. But one could be forgiven for reading the article and thinking either that storage virtualisation is new, or that it is something you might consider only AFTER doing server virtualisation. Neither is true.
IBM embraced storage virtualisation in June 2003 when we announced our SAN Volume Controller (the IBM SVC). I even found a CNET.com article from way back then. You can find it here (the image below is a screen capture of that CNET website).
IBM's SVC product has been enhanced repeatedly since 2003 with an enormous list of supported host servers and backend storage controllers. We have added new functions every year including Easy Tier, split cluster, VAAI, an enhanced GUI and a new form factor for the SVC code in the form of the Storwize V7000.
So let me give you a seventh reason for adopting storage virtualisation: a vendor who has shown genuine support for this technology. No vendor has embraced storage virtualisation with more enthusiasm than IBM. We have an industry-leading solution with phenomenal SPC benchmarks, an enormous number of case studies and an architecture that does not lock you in. Indeed it is an architecture that can grow as you grow and that can be upgraded without disruption.
So please consider storage virtualisation from IBM, using either the SVC or the Storwize V7000. If you're in Australia, we have demo centers dotted around the country. Many of our Business Partners can also demonstrate IBM storage virtualisation using their own Storwize V7000s. If you're in Melbourne, feel free to give me a call and schedule a time to drop into Southgate.
The Storwize V7000 and SVC release 6.1 introduced a new web GUI to assist with service issues, known as the Service Assistant. The Service Assistant is a browser-based GUI that is used to service your nodes. Much of what you traditionally did with the SVC front panel can be done using the Service Assistant GUI. You can see a screen capture of the Service Assistant below:
While I would like to be optimistic and hope that you will never have to use the Service Assistant, you should always ensure your toolkit is equipped with every possible tool. I say this because one thing I have noted is that the majority of installs are not configuring the Service Assistant IP addresses. This is particularly apparent as clients upgrade their SVC clusters to release 6.1.
By default on Storwize V7000, the Service Assistant is accessible on IP addresses https://192.168.70.121 for node 1 and https://192.168.70.122 for node 2 (don't try to point your browser at them right now, as your network routing won't work - you would need to set your laptop IP address to the same subnet and be on the same switch. Details to do that are here). For SVC there are no default IP addresses, although we traditionally asked the client to configure one service address per cluster. The best thing for you to do is approach your network admin and ask for two more IP addresses for each Storwize V7000 and/or SVC I/O group. Once you have these two extra IP addresses, record them somewhere and then set them using the normal GUI.
It's an easy five-step process, as shown in the screen capture below. Go to the Configuration group and then choose Network (step 1). From there select Service IP Addresses (step 2) and the relevant node canister (step 3). Choose port one or port two (step 4) and then set the IP address, mask and gateway (step 5).
You can also set them using CLI (replace the word panelname with the panel name of each node, which you can get using the svcinfo lsnode command).
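A minimal sketch, assuming the satask chserviceip command and placeholder addresses (run it once per node, substituting the real panel name and your allocated service IP, gateway and mask):

satask chserviceip -serviceip 10.1.60.121 -gw 10.1.60.1 -mask 255.255.255.0 panelname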
If you forget these IP addresses, you can reset them using the same CLI commands or using the Initialization tool as documented here.
Finally, having set the IP addresses, visit the Service Assistant by pointing your browser at each address, just to confirm you can access it. You log on with your superuser password. With the process complete, ensure the IP addresses are clearly documented and filed away. So now, if requested, you will be able to perform recovery tasks (in the unlikely event they are needed). If for some reason your browser keeps bringing you to the normal GUI rather than the Service Assistant GUI, just add /service to the URL, e.g. browse to https://10.10.10.10/service rather than https://10.10.10.10.
So what should you do now?
If you're an SVC customer on SVC code version 5 or below, please get two IP addresses allocated for each SVC I/O group, so you can set them the moment you upgrade to V6. Do this once the upgrade is complete.
If you're an existing Storwize V7000 client or an SVC client already on V6.1 or V6.2 code, then hopefully you have already set the service IP addresses. If not, please do so and test them.
I recently got a great email from an IBMer in the Netherlands by the name of Jack Tedjai. He sent me two screen shots, taken with the new performance monitor panel (that comes with the SVC and Storwize V7000 6.2 code). He wrote:
I am working on a project to migrate VMware/SRM/DS5100 to SVC Stretch Cluster and one of the goals is to prevent using ISL (4Gbps) and VMware Hypervisor/HBA load during the migration. For the migration we are using VMware Storage vMotion. To minimize the impact of the migration on production, we tested VAAI for Storage vMotion and template deployment and it worked perfectly.
So what's this all about? Well, one of the improvements provided with VAAI support is the ability to dramatically offload the I/O processing generated by performing a storage vMotion. Normally a storage vMotion requires an ESX server to issue lots of reads from the source datastore and lots of writes to the target datastore. So there is a lot of I/O flowing from ESX to the SVC, and then from the SVC to its backend disk. What you get is something that looks like the image below. In the top right graph we have traffic from SVC to ESX (host to volume traffic). In the bottom right graph we have traffic from the SVC to its backend disk controllers (DS5100 in this case). This is SVC to MDisk traffic.
When we add VAAI support to the SVC, the picture suddenly changes. VMware no longer needs to do any of the heavy lifting. There is almost no I/O between VMware and the SVC (no host to SVC volume traffic) related to the vMotion. The SVC is still doing the work, but it is happening in the background without burning VMware CPU cycles or HBA ports (in that there is still SVC to MDisk traffic).
This difference translates to: Faster vMotion times, far less SAN I/O and far less VMware CPU being used on this process.
So do VMware support this? They sure do! Check this link here. It currently shows something like this (taken on June 23, 2011):
So what are your next steps?
Upgrade your Storwize V7000 or SVC to version 6.2 code. Download details are here.
Download and install the VAAI driver onto your ESX servers. You can get it from here. If you're already using the XIV VAAI driver, you need to upgrade it to version 1.2. There is an installation guide at the same link.
And the blog title? It means friendly greetings in Dutch. So to Jack (and to all of you), vriendelijke groeten and please keep sending me those screen captures.
I thought I would quickly check out two of the announced features of the 6.2 release: the new Performance Monitor panel and support for greater than 2 TiB MDisks. So on Sunday I got busy and upgraded my lab Storwize V7000 to version 6.2.
Remember that in nearly every aspect the firmware for the SVC and Storwize V7000 are functionally identical, so while I am showing you a Storwize V7000, it equally applies to an SVC.
Firstly I tried the performance monitor panel, and what better way to show you what I saw than on YouTube? This is my first YouTube video, so please forgive me if it's not slick. I started the performance monitor and captured two minutes of performance data using Camtasia Recorder. Because it is fairly boring to stare at graphs slowly moving right to left, I then sped it up eight times, and this is the result:
The video is shot in HD, so if what you're seeing is grainy or hard to read, change the display to 720p or 1080p. Now if you want to see the performance monitor at its actual speed, here is the original normal-speed video. Remember this is the same video as above, just slower. It can also be viewed in 720p.
The top right hand quadrant is volume throughput in MBps as well as current volume latency and current IOPS.
The bottom left hand quadrant is Interface throughput (FC, SAS and iSCSI).
The bottom right hand quadrant is MDisk throughput in MBps as well as current MDisk latency and current IOPS.
You will note that each metric has a large number (which is the current metric in real time) and a historical graph showing the previous five minutes. You can also change the display to show either node in the I/O group.
I found the monitor to be genuinely real time: the moment I changed something in the SAN (such as starting or stopping IOMeter or starting or stopping a Volume Mirror), I immediately saw a change.
Greater than 2 TB MDisk support
Next I logged onto my lab DS4800 and created two 3.3 TiB volumes to present to the Storwize V7000. I chose this size because I had exactly 6.6 TiB of available free space on the DS4800 and I wanted to demonstrate multiple large MDisks. On versions 6.1 and below, the reported size of the MDisks would have been 2 TiB (as I discussed here). Now that I am on release 6.2 with a supported backend controller, I can present larger MDisks. In the example below you can clearly see that the detected (and usable) size is 3.3 TiB per MDisk.
What controllers are supported for huge MDisks?
The supported controller list for large MDisks has been updated. The links for Storwize V7000 6.2 are here and for SVC here. If your backend controller is not on the list, then talk to your IBM Sales Representative about submitting a support request (known as an RPQ).
There was a time when 32 bits was considered a lot. A hell of a lot.
With 32 bits, you can create a hexadecimal number as big as 0xFFFFFFFF. In decimal that's 4,294,967,295. Hey... imagine a bank account balance that big? If you use 32 bits to count out 512-byte sectors on a disk, you could have a disk that's 4,294,967,295 times 512... or 2,199,023,255,040 bytes! That sounds huge, right?
Well... actually...no... that's 2 TiB, which most people would refer to as 2 Terabytes. Mmm.. Suddenly I am less impressed (still wouldn't mind that as a bank account though).
Now there are plenty of running systems that still cannot work with a disk that is larger than 2 TiB. One of the more common examples is ESX. I am presuming this limitation is going to disappear, so storage subsystems need to be ready to create volumes that are larger than 2 TiB.
The good news is that with the May 2011 announcements, IBM is removing the last 2 TiB sizing limitations from its current storage products. There appears to have been some confusion in the past, so I thought I would go through and be clear where each product is at:
DS3000
Firmware version 07.35.41.00 added support to create volumes larger than 2 TB. The maximum volume size is limited only by the size of the largest array you can create. This capability has been available for some time and hopefully you are already on a much higher release.
DS4000 and DS5000
Firmware version 07.10.22.00 added support to create volumes larger than 2 TB. The maximum volume size is limited only by the size of the largest array you can create. This capability has been available for some time and hopefully you are already on a much higher release.
DS8700 and DS8800
The DS8700 and DS8800 will support the creation of volumes larger than 2 TB once a code release in the 6.1 family has been installed. With this release you will be able to create a volume up to 16 TiB in size. The announcement letter for this capability is here.
XIV
The volume size on an XIV is limited only by the soft limit of the pool you are creating the volume in. This allows the possibility of a 161 TB volume.
SVC and Storwize V7000
These two products have two separate concepts:
Volumes (or VDisks) that hosts can see.
Managed disks (or MDisks), which provide the capacity being virtualized. Within this there are two further categories:
- Internal MDisks created using the Storwize V7000 SAS disks.
- External MDisks created by mapping volumes from external storage (such as from a DS4800).
SVC and Storwize V7000 Volumes (VDisks).
Prior to release 5.1 of the SVC firmware, the largest volume or VDisk that you could create using an SVC was 2 TiB in size. With the 5.1 release this was raised to 256 TiB, as announced here. When the Storwize V7000 was announced (with the 6.1 release) it also inherited the ability to create 256 TiB volumes.
Because the Storwize V7000 has its own internal disks, it can create RAID arrays. Each RAID array becomes one MDisk. This means the largest MDisk we can create is limited only by the size of the largest disk (currently 2 TB) times the width of the largest array (16 disks). This means we can make arrays of over 18 TiB in size (using a 12-disk RAID 6 array with 2 TB disks). Thus, internally, the Storwize V7000 supports giant MDisks. We can also present these giant MDisks to an SVC running 6.1 code and the SVC will be able to work with them.
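As a sketch of that array creation (assuming the svctask mkarray command; the drive IDs and pool name are placeholders), building a RAID 6 array from twelve internal drives and adding it to a pool as a single MDisk would look something like this:

svctask mkarray -level raid6 -drive 0:1:2:3:4:5:6:7:8:9:10:11 Pool0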
SVC and Storwize V7000 External Managed Disks.
When presenting a volume to the SVC or Storwize V7000 to be virtualized into a pool (a managed disk group), we need to confirm two things. Firstly, you need to be on firmware version 6.2, as confirmed here for SVC and here for Storwize V7000. Secondly, the controller presenting the volume has to be approved to present a volume greater than 2 TiB. From an architectural point of view, MDisks can be up to 1 PB in size, as confirmed here, where it says:
Capacity for an individual external managed disk
Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details.
I recommend you go to the supported hardware matrix and confirm if your controller is approved. The links for Storwize V7000 6.2 are here and for SVC here. As of this writing, the list has still not been updated, but I am reliably informed it will include the DS3000, DS4000, DS5000, DS8700 and DS8800. It will not initially include XIV, which will come later. Please also note the following:
Support for giant MDisks (greater than 2 TiB) is firmware controlled. If the controller (e.g. a DS5300) presenting a giant MDisk is not on the supported list for your SVC/Storwize V7000 firmware version, then only the first 2 TiB of that MDisk will be used.
If you're already presenting a giant MDisk (and using just the first 2 TiB), then just upgrading your SVC/Storwize V7000 firmware won't make the extra space usable. You will need to remove the MDisk from the pool, do an MDisk discovery and then add the MDisk back to the pool, as sketched below. All of this can of course be done without disruption, using the basic data migration features we have supported since 2003.
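A minimal sketch of that sequence (mdisk5 and Pool1 are placeholder names; the -force flag migrates any in-use extents off the MDisk first, so wait for that migration to finish before rediscovering):

svctask rmmdisk -mdisk mdisk5 -force Pool1
svctask detectmdisk
svctask addmdisk -mdisk mdisk5 Pool1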
What to do in the meantime?
If you're currently using an SVC or external MDisks with a Storwize V7000, then you need to work within the 2 TiB MDisk limit (except for Storwize V7000 behind SVC). The recommendation is a single volume per array for performance reasons (so the disk heads don't have to keep jumping all over the disk to support consecutive extents on different parts of the disk). This can require careful planning. For instance, using 7+P RAID5 arrays of 450 GB drives makes an array that is over 3 TB. What do you do in this example?
Divide it in half? (by creating two 1.5TB volumes)
Waste space? (a whole 1 TB)
Use smaller arrays? (a 4+P array of 450GB disks is 1.8 TB)
The answer is that where possible, create single volume arrays using 4+P or larger. If the disk size precludes that, then create multiple volumes per array and preferably split these volumes across different pools (MDisk groups).
Anything else to consider?
Well first up, will your Operating System support giant volumes? Googling produces so much old material that it becomes hard to nail down exact limits. For Microsoft, read this article here. For AIX check out this link. For ESX, check out this link.
Second of course is the consideration of size. File systems that utilize the space of giant volumes could potentially lead to giant timing issues. How long will it take to backup, defragment, index or restore a giant file system based on a giant volume (the restore part in particular)? Outside the scientific, video or geo-physics departments, are giant volumes becoming popular? Are they being held back by practical realities or plain fear? Would love to hear your experiences in the real world.
And a big thank you to Dennis Skinner, Chris Canto and Alexis Giral for their help with this post.
Hi Team! Just wanted to let everyone know that VisioCafe has been updated with IBM's latest official stencils for use with Microsoft Visio. These include all models of the Storwize V7000, including the newest models: The 2076-312 and 2076-324 (which have the dual port 10 Gbps iSCSI card).
It's a story told many times..... You order a new storage solution and the world is good. It's lovely, it's new and it offers mountains of new disk space.... but then... you.... fill it up! So it's off to order some new disks. The order is in, the order is filled, the disks arrive. What next? How about we just stick them in? By just inserting the new disks, they will be made available to configure into RAID arrays from the Internal tab of the Physical Storage group.
If the drives are showing as Unused, mark them as Candidate. If they are already showing as Candidate (like most of the disks in my example below), then you are ready to hit the Configure Storage button and follow the guidance of the wizard.
Of course maybe your enclosures are all full. In this case it's time to order another enclosure (remember we can have up to 10). Once you have racked the enclosure up and cabled the new enclosure to the correct SAS Chain, then use the Add Enclosure menu item shown below to kick off the configuration:
Storage IT offers up many choices, some of which provoke argument so heated, you could almost describe the adherents as religious. I think you might know the sort of arguments I am talking about:
File vs Block I/O
iSCSI vs Fibre Channel
CLI vs GUI
OK.... so maybe that last one isn't quite in the same league. But it is still fascinating to see the variation in usage patterns from sites where every command (of any description) is run via a command line interface (a CLI), to sites where the CLI is viewed with either great fear... or even greater distaste. There are those who view the CLI as... well... so 1970s.....
But the reality is that the CLI will always be with us for one principal reason: scripting. If you cannot script it, you cannot automate it (well, actually that's not true, but stick with me here, I am on a roll). Every single major implementation I have ever done (whether it be SVC, XIV or DS8000), I have automated with scripting. I regularly use the CONCATENATE function in Excel to build large numbers of commands that I can then run as a script.
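As a hypothetical sketch (the column layout, pool name and I/O group are just examples), a formula like this, dragged down a spreadsheet of volume names (column A) and sizes in GB (column B), produces one mkvdisk command per row, ready to paste into an SSH session:

=CONCATENATE("svctask mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size ", B2, " -unit gb -name ", A2)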
So it's pleasing to see that all of our products are working towards making the scripter's life even easier. For example, the XIV has offered a command log in the GUI for some time. I blogged about it here. You simply do a command once in the GUI and then consult the log to find the syntax, making scripting very easy:
With last year's release of SVC 6.1 and Storwize V7000, we added this level of smarts to those two products as well. Now every command you run in the GUI will offer you the exact CLI command that was used under the covers to do the work. Simply toggle the details tab on the completion panel to see the command (or toggle it back to hide it!).
This week's announcement of release 6.2 of the SVC and Storwize V7000 firmware brings in two more important usability improvements:
Now when logging onto the CLI using individual user-ids, you can log on using the actual user-id itself, rather than admin. This change has been a long time coming and removes the confusion generated by logging onto the GUI as, say, anthony, but then logging into a matching CLI session as admin. Now you log on to either interface as anthony.
Now when issuing CLI commands, you have the choice to drop the svctask and svcinfo prefixes. So instead of issuing the command svcinfo lsnode, you can issue the command lsnode. Both choices remain valid (so we don't break your existing scripts). Making this change is part of a bigger plan to move to a more common CLI. A quick sketch of both changes together is shown below.
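Something like this (the user name and cluster IP address are placeholders): log on over SSH with your own user-id, then issue the short form of the command.

ssh anthony@10.1.60.100
lsnode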
And there are more improvements coming, so as always, watch this space....
.... and please... share with me... are you a GUI... or a CLI person? What's your reasoning behind your choice?
The May 9 announcement that SVC and Storwize V7000 will support VAAI is very welcome news. The fundamental point is that the SVC and Storwize V7000 virtualise external storage. This means that the mountains of DS3000, DS4000, DS5000, AMS1000s, CX3s, etc, that are currently being virtualized behind these products, will inherit VAAI as soon as the virtualization layer supports it. This is yet another feature to add to the list of functions that IBM Storage virtualization can provide, such as: EasyTier; Thin Provisioning; multiple consistency groups; snapshots; remote mirroring; dynamic data relocation... the list goes on.
In addition, we are releasing a plug-in for vCenter that enables VMware administrators to manage their SVC or Storwize V7000 from within the VMware management environment.
Functions will include:
Volume provisioning and resizing
Displaying information about volumes
Viewing general information about Storwize V7000 and SVC systems
Receiving events and alerts for Storwize V7000 systems and SVC attached to vSphere
The Storwize V7000 and SVC plug-in for vCenter will also support virtualized external disk systems.
The plug-in will be available at no charge on June 30 (for Version 6.1 software) and July 31 (Version 6.2). Here is a sneak peek of what it will look like:
And to get an independent viewpoint, have a read of Stephen Foskett's blog entry here:
With the announced release of DS8000 6.1 code, IBM has moved its three major storage systems to a common GUI platform. This makes me think of aircraft manufacturers who utilize a common cockpit design. For airlines, this is a major drawcard when choosing aircraft models, as it cuts down on training costs for your pilots. Except in storage IT, there is a major difference in motivation....
First and foremost, the design of the XIV GUI (which has inspired such dramatic change in IBM's other GUIs) was made possible not by clever XIV GUI developers (don't get me wrong - they ARE clever), but by a remarkably user-friendly architecture. The XIV GUI is a miracle of ease of use for end users, made possible because, by design, the XIV made it almost impossible to make things hard.
The good news for storage administrators is that, unlike a jet aircraft, where a pilot needs to spend hundreds of hours in the cockpit before they are considered potentially competent, the XIV GUI can be picked up in minutes and lends itself very well to casual contact. You don't need to keep using it to stay competent.
The challenge for IBM was to take more complex products, which require more user decisions, and make the usage experience just as easy. To add to this, the SVC and DS8000 GUIs were driven by WebSphere. Changing these GUIs would require a complete rewrite to employ JavaScript.
First off the rank were the SVC and Storwize V7000. With the release last year of the SVC 6.1 update, the transformation was nothing less than remarkable. End-user experience ruled every decision. The key again is that the user does not need to spend hundreds of hours learning this GUI, or re-learning it every time they go to perform a configuration task. Everything is in its right place. It's much more than an XIV-like GUI. It's a GUI that took the ease-of-use experience of the XIV and used it to inspire something just as remarkable.
With the release of the 6.1 update for the DS8000, we complete another fundamental step towards a truly common GUI. The DS8000 GUI has undergone a complete re-write. Essentially it has been rebuilt from the ground up. This highlights something fundamental: It confirms the DS8000 has a very strong roadmap.
As you can see from the image below, the transformation from the old design (to the left) to an ease of use model is complete:
In short, it's a common flight deck that almost anyone can fly.