As a child I used to love the spot the difference cartoon in the Sunday paper.
You usually had 10 differences to circle... and I could never find the last one.
Look carefully at the two machines below. Can you spot the differences?
It's an XIV Generation 2 on the left and an XIV Gen3 on the right. For me it's the side panels that give it away (of course the Gen3 printed on the front panel helps).
The big change is the rack that the product uses:
The Generation 2 IBM XIV uses an APC AR3100 (also called a NetShelter).
The Gen3 IBM XIV uses an IBM T42 rack.
So why the change?
Three good reasons:
- Using the T42 lets us offer an optional Ruggedized Rack Feature, providing additional hardware that reinforces the rack and anchors it to the floor. This hardware is designed primarily for use in locations where earthquakes are a concern. As you may be aware, there have been some major earthquakes around the world recently (with tragic results), so our clients in earthquake-prone areas clearly need a model that can be hardened for use in those zones.
- Using the IBM T42 rack lets us offer an optional IBM Rear Door Heat Exchanger, which is an effective way to assist your air conditioning system in keeping your datacenter cool. It removes heat generated by the modules in the XIV before that heat enters the room. Inside the door of the heat exchanger are sealed tubes filled with circulating chilled water. Its design uses standard fittings and couplings, and because there are no moving or electrical parts, it helps increase reliability. The door can be opened like any rear cover, so serviceability of an XIV Gen3 fitted with a heat exchanger is as easy as the standard air-cooled version.
- Using the T42 lets us offer a rack that matches our standard rack offering. It's a sturdier rack and travels far better over both short and long distances. To put it simply: it's a more substantial rack.
One nice feature that both products offer is feature code 0200 (weight reduction for shipping). When ordered, it tells the plant to ship the XIV in a weight-reduced format. For Generation 2 this means the rack that IBM ships will weigh around 300 kg (unpacked from the shipping crate). The rest of the machine (the modules and the UPSs) is shipped in separate boxes. The XIV Gen3 will weigh more as less hardware is removed, although I am still confirming what that weight will be. The advantage is that you can use lower-rated goods lifts and move the XIV across floors that are not rated for the maximum weight. You just have to ensure that the planned location of the XIV can support the final weight. And the really nice thing? This feature is available at no extra cost.
(edited 27/7/11 to clarify feature code 0200 will be different for XIV Gen3).
Tiny toast or giant hand? It's not an optical illusion.
When IBM offered 2 TB drives on the XIV, I thought I was seeing an illusion: the overall power consumption had dropped (not risen). Guess what? With XIV Gen3, power consumption drops yet again.
Using worst-case power consumption numbers I can see the following maximums:
180 drive XIV Generation 2 with 1 TB drives (79 TB usable): 8.4 kVA
180 drive XIV Generation 2 with 2 TB drives (161 TB usable): 7.1 kVA
180 drive XIV Gen3 with 2 TB drives (161 TB usable): 6.7 kVA
So with every update to the product, power consumption has kept dropping. Let's compare the original Generation 2 machine with the XIV Gen3:
- Usable capacity up by 103%
- Power consumption down by 20%
- Heat output down by 20%
- Noise level down by 33%
- Performance up by up to 400%
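(If you are wondering where the first two figures come from, here is my own back-of-the-envelope check against the numbers above: 161 TB versus 79 TB usable is an increase of (161 - 79) / 79 ≈ 103%, and 6.7 kVA versus 8.4 kVA is a drop of (8.4 - 6.7) / 8.4 ≈ 20%.)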
How about this as a measure: let's compare the Microsoft Exchange ESRP report for XIV Generation 2 found here with the newly released report for XIV Gen3 found here. While the ESRP program is not a benchmarking program, the results are truly impressive.
For more information on Power Consumption and the cost of running XIV, check out these white papers:
Driving down power consumption with the IBM XIV Storage System
Comparing Cost Structures for IBM XIV and EMC V-Max Systems
If you're an existing XIV user and you're interested in measuring your current power consumption, check out my tutorial here. If you want the spreadsheet shown in the video, drop me a comment (your email address will appear in my comments dashboard but will not be visible to anyone else).
(image care of www.interestingfacts.org)
If you have Brocade fibre channel switches in your SAN, you need to be aware of the method Brocade use to manage firmware releases. All 4 and 8 Gbps Brocade SAN switches use a Linux-based firmware which Brocade call Fabric Operating System, or FOS. Updates to this firmware are released in families: first version 4, then version 5 and then version 6. Each family has had a series of updates. Version 5.0.x went to 5.1.x, 5.2.x and 5.3.x. Version 6.0.x went to 6.1.x, 6.2.x, 6.3.x and currently 6.4.x.
The good news is that you can non-disruptively update firmware on Brocade switches:
So you can move to higher releases without an outage (note there may be exceptions to this, always read your release notes to be sure).
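For those who have not done one before, a single hop from the switch CLI goes roughly like this (illustrative only - the exact prompts and options vary by FOS level and platform, so always follow the release notes for your hardware). First check what the switch is currently running on both partitions:
firmwareshow
Then start the non-disruptive download; it will prompt you for the FTP or SCP server, user and the path to the unpacked firmware:
firmwaredownload
And track progress while the standby partition is updated and the switch fails over to it:
firmwaredownloadstatus
Repeat for each hop in the chain.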
However you need to be aware of a rule regarding the from and to versions. Since FOS 6.0.0 Brocade have a one-release migration policy. This allows more reliable and robust migrations for customers. By having fewer major changes in internal databases, configurations, and subsystems, the system is able to perform the upgrade more efficiently, taking less time and ensuring a truly seamless and non-disruptive process for the fabric. The one-release migration policy also reduces the large number of upgrade/downgrade permutations that must be tested, allowing Brocade to spend more effort ensuring the supported migration paths are thoroughly and completely verified.
Disruptive upgrades are allowed, but only for a two-level migration (for instance from 6.2 to 6.4, skipping 6.3).
So why should you care?
Well, your upgrade philosophy may be: if it ain't broke, don't fix it. Or you may have the policy: we do fix on fail; apart from that, we don't update firmware. Much as I can understand the attraction of this, when you finally do perform an update, you may find yourself having to do many upgrades, or kangaroo hops as I call them.
Let's document some possible kangaroo hops from an old release to what is currently the newest release:
4.1.2f → 4.4.0e → 5.1.1b → 5.3.2c → 6.0.1a → 6.1.2c → 6.2.2b → 6.3.1a → 6.4.1b
As you can see from the steps above, you may have a very long change window if you choose not to perform updates on a regular basis. There are also lots of caveats and restrictions based on the hardware of the switch you are running. It is very important that you consult the release notes, which can be found at the following links:
IBM SAN b-type Firmware Version 4.x Release Notes
IBM SAN b-type Firmware Version 5.x Release Notes
IBM SAN b-type Firmware Version 6.x Release Notes
If you're looking for a recommended version to install, the Version 6 release notes page above gives advice on this. Currently it says: IBM recommends that Open System customers that currently use FOS 6.1 or earlier limit migration to FOS 6.2.2b, 6.3.0d, 6.3.1a, or 6.4.0c or later only.
Please note that the release notes published on IBM's website are Brocade documents. While this is good (since it means they come straight from the manufacturer), you need to work out which Brocade hardware model corresponds to which IBM machine type. The version 6 URL above also contains a product cross-reference which lets you convert Brocade product names to IBM product names. I am also working on a post which will help you with this, so watch this space.
Edit July 19, 2011: The original post suggested 6.4.0a as a go-to level; this has since been removed. It has also been pointed out to me that some blade type switches may not be capable of hot-code-load (HCL). You need to check your vendor release notes to be certain.
Just a quick post as I leave Singapore to return to Melbourne: I thought I would share two more photos with you.
No trip to Singapore is complete without a visit to the Merlion, the mythical creature who acts as a mascot for Singapore. The fish body represents Singapore's origin as a fishing village when it was called Temasek, which means "sea town" in Javanese. The lion head represents Singapore's original name — Singapura — meaning "lion city" or "kota singa" (thanks to Wikipedia for the text).
Here is a night view across to the Marina Bay Sands, the integrated resort fronting Marina Bay in Singapore. Developed by Las Vegas Sands, it is billed as the world's most expensive standalone casino property at S$8 billion, including the cost of the prime land. The remarkable building on the left hand side is the ArtScience Museum. The architecture is said to be a form reminiscent of a lotus flower (again, thanks to Wikipedia for the information).
I would love to return to Singapore soon as a tourist; it is an amazing city full of vibrant energy. With Singapore National Day coming up on August 9 and the Formula One in September, there will certainly be plenty for visitors to see and do.
If you want to see more of my photographs, feel free to visit my Flickr account.
I am in Singapore this week running a teach the teacher seminar on Storwize V7000. We are creating more instructors as demand for courses on the product continues to increase.
I am fortunate to be staying at the Marina Bay Sands Resort, which is one of the most mind blowing facilities I have ever seen. Check out this view of the Infinity Pool from the Skydeck up on the 57th floor.
Out my hotel window they are building a huge new facility known as Gardens by the Bay, but frankly I think it looks more like a spaceport!
According to Wikipedia these are Supertrees: tree-like structures that dominate the Gardens landscape with heights that range between 25 and 50 metres. They are vertical gardens that perform a multitude of functions, which include planting, shading and working as environmental engines for the gardens.
But doesn't this look like two huge crashed spaceships?
Actually they will be giant conservatories, again according to Wikipedia they are the Flower Dome and the Cloud Forest.
They certainly know how to think big in Singapore!
XIV Gen 3 modules are built on a new generation of Intel microprocessors based on the Nehalem micro-architecture. Nehalem is the most profound architecture change that Intel has introduced in the 21st century. Some of the key changes and their benefits are:
- Integrated memory controller: The memory controller now sits on the same silicon die as the processors. It runs at the same clock-speed as the processors instead of at the lower speed of an external front-side bus. This dramatically improves memory performance and therefore overall system performance.
- No need for buffered memory: Previously, buffered memory was required to improve the performance of the memory sub-system. Buffered memory is relatively expensive and energy hungry. With the faster Nehalem integrated memory controller, the system can deliver improved performance without needing buffered memory, saving cost as well as energy. XIV Gen 3 will be faster and cooler at the same time using unbuffered DDR3 RAM. And since the memory is cheaper, we can put more in.
- Increased memory capacity: Nehalem supports more memory chips at higher speeds. In XIV Gen 3 this translates into a 50 to 200% increase in system cache, significantly lifting the performance headroom of an already stellar performer.
- No more front-side bus: Memory, second CPU package and peripherals no longer have to share and wait on a single bus to communicate. The connections are now direct or switched, enabling increased parallelism and the ability to do more work simultaneously.
- PCI Express Generation 2: The I/O sub-system doubles in speed with the introduction of PCI Gen-2. This enables faster network and I/O adapters for XIV Gen 3:
- 8 Gbps fibre-channel host connections.
- More iSCSI host connections (including at the entry configuration of 6 modules).
- Multi-channel, low-latency InfiniBand as the inter-module connection.
- A slot for solid state disk (SSD).
- Better systems management instrumentation: The system supports increased monitors for sub-systems for more sophisticated self diagnostics and healing. Remote management capability has also been improved.
Furthermore, the new motherboards have additional expansion capacity (more processors, memory and I/O) that can be utilized to deliver future improvements in performance and increased software functionality.
XIV Gen 3 is not the first storage sub-system to adopt the Nehalem architecture. Some of our competitors (EMC and NetApp for example) have already done so with their dual-controller arrays. XIV Gen 3 takes the Nehalem architecture advantage forward, not twice, but six to fifteen times.
Many thanks to Patrick Lee for writing up this great summation.
Today IBM is announcing a new member of the XIV family, which we are calling XIV Gen3. I thought I would give a brief history of how we got here before I get too carried away with details.
What was Generation 1 of the XIV?
In 2002 an Israeli startup began work on a revolutionary new grid storage architecture. They devoted three years to developing this unique architecture that they called XIV.
They delivered their first system to a customer in 2005. Their product was called Nextra (does it look familiar?).
What was Generation 2 of the XIV?
In December 2007, the IBM Corporation acquired XIV, renaming the product the IBM XIV Storage System. The first IBM version of the product was launched publicly on September 8, 2008. Unofficially within IBM we refer to this as Generation 2 of the XIV.
The differences between Gen1 and Gen2 were not architectural, they were mainly physical. We introduced new disks, new controllers, new interconnects, improved management and additional software functions.
As anyone who has read my blog knows, I have been working on the Generation 2 XIV since the day IBM began planning to release it as an IBM product. So it is very exciting to be able to share with you that we are now releasing Generation 3 of the IBM XIV Storage System.
What is Generation 3 of the XIV?
Generation 3 of the XIV is a new member of the XIV family that will be an alternative to the Generation 2 XIVs we currently offer. It does not change the fundamental architecture; that remains the same. What it does do is bring significant updates to almost every part of the XIV, including:
- Introducing InfiniBand interconnections between the modules.
- Upgrading the modules to add 2.4 GHz quad-core Nehalem CPUs, new DDR3 RAM and PCI Express Gen 2 (using 8x slots that can operate at 40 Gbps).
- Upgrading the host HBAs to operate at 8 Gbps.
- Upgrading the SAS adapter.
- Upgrading the disks to native SAS.
- A new rack.
- A new dedicated SSD slot (per module) for future SSD upgrades.
- Enhancements to the GUI plus a native Mac OS version.
I will be blogging about each of these changes over the coming days and weeks as we move to general availability date, so watch this space. In the meantime, why not visit the official XIV page here and check out the ITG Report linked there.
I have received this question several times, so it's clearly something people are interested in.
The Storwize V7000 has two controllers known as node canisters. It's an active/active storage controller, in that both node canisters are processing I/O at any time and any volume can be happily accessed via either node canister.
The question then gets asked: what happens if a node canister fails and can I test this? The answer to the question of failure is that the second node canister will handle all the I/O on its own. Your host multipathing driver will switch to the remaining paths and life will go on. We know this works because doing a firmware upgrade takes one node canister offline at a time, so if you have already done a firmware update, then you have already tested node canister fail over. But what if you want to test this discretely? There are four ways:
- Walk up to the machine and physically pull out a node canister. This is a bit extreme and is NOT recommended.
- Power off a node canister using the CLI (using the satask stopnode command). This will work for the purposes of testing node failure, but the only way to power the node canister back on is to pull it out and reinsert it. This is again a bit extreme and is not recommended. This is also different to an SVC, since each SVC node has its own power on/off button.
- Use the CLI to remove one node from the I/O group (using the svctask rmnode command). This works on an SVC because the nodes are physically separate. On a Storwize V7000 the nodes live in the same enclosure and a candidate node will immediately be added back to the cluster, so as a test this is not that helpful.
- Place one node into service state and leave it there while you check all your hosts. This is my recommended method.
First up, this test assumes there is NOTHING else wrong with your Storwize V7000. We are not testing multiple failures here. You need to confirm that the Recommended Actions panel, shown below, contains no items. If there are errors listed, fix them first.
Once we are certain our Storwize V7000 is clean and ready for test, we need to connect via the Service Assistant Web GUI. If you have not set up access to the service assistant, please read this blog post first.
So what's the process?
First, log on to the service assistant on node 1 and place node 2 into service state. I chose node 2 because normally node 1 is the configuration node (the node that owns the cluster IP address). You need to confirm you're connected to node 1 (check at top right), select node 2 (from the Change Node menu), then choose Enter Service State from the drop down and hit GO.
You will get this message confirming you're placing node 2 into service state. If it looks correct, select OK.
The GUI will pause on this screen for a short period. Wait for the OK button to un-grey.
You will eventually get to this with Node 1 Active and Node 2 in Service.
Node 2 is now offline. Go and confirm that everything is working as desired on your hosts (half your paths will be offline but your hosts should still be able to access the Storwize V7000 via the other node canister).
When your host checking is complete, you can use the same drop down to Exit Service State on node 2 and select GO.
You will get a pop up window to confirm your selection. If the window looks correct, select OK.
You will get the following panel. You will need to wait for the OK button to become available (to un-grey).
Provided both nodes now show as Active, your test is now complete.
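As an aside, the same enter and exit can also be driven from the service CLI over SSH if you prefer the command line. This is only a sketch (check the command reference for your code level before relying on it), and panelname2 is a placeholder for node 2's panel name, which you can get from svcinfo lsnode. List both node canisters and their current state:
sainfo lsservicenodes
Place node 2 into service state:
satask startservice panelname2
And once your host checks are done, bring it back:
satask stopservice panelname2
I still prefer the Service Assistant GUI method because it makes it very obvious which node you are about to take offline.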
I think this picture speaks for itself: Three XIVs. Three cities. Three way iSCSI.
All the mirror connections were created in seconds using drag and drop in the XIV GUI.
I can now take a volume in one city and mirror it to another.
And yes.... IBM Australia now has a demo XIV in each of three major cities, so why not drop by and have a look?
Over at SearchStorage.com.AU they recently published an article entitled Six reasons to adopt storage virtualisation. You can find the article here. The six given reasons are:
- Storage virtualisation reduces complexity
- Storage virtualisation makes it easier to allocate storage
- Better disaster recovery
- Better tiered storage
- Virtual storage improves server virtualisation
- Virtual storage lets you take advantage of advanced virtualisation features
It's a well-written article and I agree with every point. But one could be forgiven for reading the article and thinking that either storage virtualisation is new, or that storage virtualisation is something you might consider AFTER doing server virtualisation. Neither is true.
IBM embraced storage virtualisation in June 2003 when we announced our SAN Volume Controller (the IBM SVC). I even found a CNET.com article from way back then. You can find it here (the image below is a screen capture of that CNET website).
IBM's SVC product has been enhanced repeatedly since 2003 with an enormous list of supported host servers and backend storage controllers. We have added new functions every year including Easy Tier, split cluster, VAAI, an enhanced GUI and a new form factor for the SVC code in the form of the Storwize V7000.
So let me give you a seventh reason for adopting storage virtualisation: a vendor who has shown genuine support for this technology. No vendor has embraced storage virtualisation with more enthusiasm than IBM. We have an industry leading solution with phenomenal SPC benchmarks, an enormous number of case studies and an architecture that does not lock you in. Indeed it is an architecture that can grow as you grow and that can be upgraded without disruption.
So please consider storage virtualisation from IBM, using either the SVC or the Storwize V7000. If you're in Australia, we have demo centers dotted around the country. Many of our Business Partners can also demonstrate IBM storage virtualisation using their own Storwize V7000s. If you're in Melbourne, feel free to give me a call and schedule a time to drop into Southgate.
Tivoli Pulse is coming to Melbourne on July 27 and 28, 2011, at the Crown Promenade.
How many chances do you get to listen to the following speakers in one place?
- Nigel Phair, Director, Centre for Internet Safety, University of Canberra
- Steve Van Aperen, Human Lie Detector and Director of SVA Training and Australian Polygraph Services
- Laura Guio, Vice President, Storage Sales, STG Growth Markets, IBM
- Joao Perez, Vice President of Worldwide Tivoli Software Sales, IBM USA
- Jamie Thomas, Vice President Tivoli Strategy and Development, IBM USA
- Glenn Wightwick, Director, IBM R&D Australia
There are 12 customer case studies, presented mainly by the customers themselves. There are over 70 sessions in eight streams, expert keynote speakers, case studies and presentations. Pulse 2011 provides the tools to help advance your infrastructure goals.
Registration is free, so what excuse do you have? Find out more here. You can enroll here. The agenda front page is here. The detailed agenda is here.
I will be presenting on day two, talking about Virtualization and Storwize V7000, so maybe I will see you there!
I have an admission: I am a bit of an Apple fanboy. Well, actually not a full-on Apple fanboy; I have an iPhone and an iPad but I don't have a MacBook (although if IBM start offering a cash payment instead of giving laptops to mobile employees, that might change). But not everything is perfect in the land of Apple. Let me give you an example, one that I routinely find people are not aware of (apologies if you learnt all of this months ago).
The picture below appears to show three identical Apple charger packs (with Australian pins). You may have a similar collection. But are they all identical? Sadly not.
Only those with very good eyes can spot the difference by reading the rather pale decal on the bottom section of each charger. The text is so small and faint, I struggled to take a decent picture, but here is my sad attempt for one of them (they are all different):
So how are my three power adapters different?
- The first is marked as a 10 Watt USB Power Adapter (it came with an iPad). Its output is amusingly marked as 5.1 Volts DC at 2.1 Amps, which suggests 10.7 Watts.
- The second one is marked as a 5 Watt USB Power Adapter (it came with my iPhone). Its output is marked as 5 Volts DC at 1 Amp, which is indeed 5 Watts.
- The third is marked as an iPod USB Power Adapter. No stated wattage, but its output is marked as 5 Volts DC at 1 Amp, which again suggests 5 Watts. So perhaps my 5 Watt adapter and my iPod adapter are actually the same.
The big question that comes up: Are they interchangeable? The answer: Yes but with caveats.
If you have an iPad you should use the 10W adapter. If you use the 5W adapter it will still charge but at a much slower rate. Apple confirm this here where they state:
iPad will also charge, although more slowly, when attached to an iPhone Power Adapter (by which they mean a 5 Watt adapter).
If you have an iPhone or an iPod can you use the 10W adapter? The answer is yes! It will recharge with no ill effects. Apple confirm this here, where they state:
While designed for use with the iPad, you can use the iPad 10W USB Power Adapter to charge all iPhone and iPod models.
So I am putting my 5 Watt and iPod adapters in the cupboard and using the 10 Watt adapter exclusively. If you have an iPad and find it's recharging slowly, you may be using an older 5 Watt adapter (but you may need a magnifying glass to spot the difference!).
My suggestion to Apple? A few more cents worth of ink please, to make things more obvious.
To close, on my first Apple focused blog entry, let me pose a question:
Will it blend?
I read a great blog post recently on Written Impact that talked about how to create effective presentations. It's well worth reading and can be found here. They describe several different formats that will help you develop interesting presentations, ones that don't put your subjects to sleep.
Talking of presenting, I recently presented at the IBM Power and Storage Symposium in Manila. It was a great event and was very well attended. We even had cake to celebrate IBM's 100th birthday.
There are two IBM Symposiums coming up in Australia that I would love for you to attend:
The next IBM Power Systems Symposium will be held in Sydney running from August 16 to 19, 2011. We are currently finalizing the agenda on this one and while this symposium is dedicated mainly to IBM Power Systems... I will be attending and presenting on storage related topics. To check out the details and enroll, please head over to here.
An IBM Storage Symposium will be held in Melbourne running from November 15 to 17, 2011. The agenda is still being set, so if you have ideas about what you would like to see, please let me know. To check out the details and enroll, please head over to here. And yes! I will be attending and I will be presenting.
The Storwize V7000 and SVC release 6.1 introduced a new web GUI to assist with service issues, known as the Service Assistant. The Service Assistant is a browser-based GUI that is used to service your nodes. Much of what you traditionally did with the SVC front panel can now be done using the Service Assistant GUI. You can see a screen capture of the Service Assistant below:
While I would like to be optimistic and hope that you will never have to use the Service Assistant, you should always ensure your toolkit is equipped with every possible tool. I say this because one thing I have noted is that the majority of installs are not configuring the Service Assistant IP addresses. This is particularly apparent as clients upgrade their SVC clusters to release 6.1.
By default on Storwize V7000, the Service Assistant is accessible on IP addresses https://192.168.70.121 for node 1 and https://192.168.70.122 for node 2 (don't try and point your browser at them right now, as your network routing won't work - you would need to set your laptop IP address to the same subnet and be on the same switch; details to do that are here). For SVC there are no default IP addresses, although we traditionally asked the client to configure one service address per cluster. The best thing for you to do is approach your network admin and ask for two more IP addresses for each Storwize V7000 and/or SVC I/O group. Once you have these two extra IP addresses, record them somewhere and then set them using the normal GUI.
It's an easy five-step process, as shown in the screen capture below. Go to the Configuration group and then choose Network (step 1). From there select Service IP Addresses (step 2) and the relevant node canister (step 3). Choose port one or port two (step 4) and then set the IP address, mask and gateway (step 5).
You can also set them using the CLI (replace the word panelname with the panel name of each node, which you can get using the svcinfo lsnode command):
satask chserviceip -serviceip 10.10.10.10 -gw 10.10.10.1 -mask 255.255.255.0 panelname
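For example, to set both nodes on a Storwize V7000 it would look something like this (the addresses and panel names here are made up for illustration, so substitute your own addresses and the panel names reported by svcinfo lsnode):
satask chserviceip -serviceip 10.10.10.11 -gw 10.10.10.1 -mask 255.255.255.0 01-1
satask chserviceip -serviceip 10.10.10.12 -gw 10.10.10.1 -mask 255.255.255.0 01-2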
If you forget these IP addresses, you can reset them using the same CLI commands or using the Initialization tool as documented here.
Finally, having set the IP addresses, visit the Service Assistant by pointing your browser at each address. This is just to confirm you can access it. You log on with your superuser password. With the process complete, ensure the IP addresses are clearly documented and filed away. Now, if requested, you will be able to perform recovery tasks (in the unlikely event they are needed). If for some reason your browser keeps bringing you to the normal GUI rather than the Service Assistant GUI, just add /service to the URL, e.g. browse to https://10.10.10.10/service rather than https://10.10.10.10.
So what should you do now?
If you're an SVC customer on code version 5 or below, please get two IP addresses allocated for each SVC I/O group, so you can set them the moment you upgrade to version 6. Do this once the upgrade is complete.
If you're an existing Storwize V7000 client or an SVC client already on version 6.1 or 6.2 code, then hopefully you have already set the service IP addresses. If not, please do so and test them.
I recently got a great email from an IBMer in the Netherlands by the name of Jack Tedjai. He sent me two screen shots, taken with the new performance monitor panel (that comes with the SVC and Storwize V7000 6.2 code). He wrote:
I am working on a project to migrate VMware/SRM/DS5100 to SVC Stretch Cluster and one of the goals is to prevent using ISL (4Gbps) and VMware Hypervisor/HBA load during the migration. For the migration we are using VMware Storage vMotion. To minimize the impact of the migration on production, we tested VAAI for Storage vMotion and template deployment and it worked perfectly.
So what's this all about? Well, one of the improvements provided with VAAI support is the ability to dramatically offload the I/O processing generated by performing a Storage vMotion. Normally a Storage vMotion requires an ESX server to issue lots of reads from the source datastore and lots of writes to the target datastore. So there is a lot of I/O flowing from ESX to the SVC, and then from the SVC to its backend disk. What you get is something that looks like the image below. In the top right graph we have traffic from the SVC to ESX (host to volume traffic). In the bottom right graph we have traffic from the SVC to its backend disk controllers (a DS5100 in this case). This is SVC to MDisk traffic.
When we add VAAI support to the SVC, we suddenly change the picture. Suddenly VMware does not need to do any of the heavy lifting. There is almost no I/O between VMware and the SVC (no host to SVC volume traffic) related to the vMotion. The SVC is still doing the work, but it is happening in the background without burning VMware CPU cycles or HBA ports (in that there is still SVC to MDisk traffic).
This difference translates to faster vMotion times, far less SAN I/O and far less VMware CPU being used on this process.
So do VMware support this? They sure do! Check this link here. It currently shows something like this (taken on June 23, 2011):
So what are your next steps?
- Upgrade your Storwize V7000 or SVC to version 6.2 code. Download details are here.
- Download and install the VAAI driver onto your ESX servers. You can get it from here. If you're already using the XIV VAAI driver, you need to upgrade it to version 1.2. There is an installation guide at the same link. Once it is installed, you can confirm the offload is actually enabled (see the quick check below).
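A quick way to confirm the VAAI primitives are enabled on an ESX 4.1 host is to run the following from the host console, where a returned value of 1 means enabled (a sketch only; these are the standard VAAI enablement settings, but verify the names against VMware's documentation for your ESX level):
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
The first setting is the full copy offload that Storage vMotion uses, the second is block zeroing, and the third is hardware assisted locking.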
And the blog title? It means friendly greetings in Dutch. So to Jack (and to all of you), vriendelijke groeten and please keep sending me those screen captures.