Anthony's Blog: Using System Storage - An Aussie Storage Blog
(edited 24/5/2011 --> removed old Visio Stencils link).
Remember you can also find my XIV stencils here:
Requests for Visio stencils are one of the most common comments I receive.
More are coming so your requests are being heard!
Over on my Wordpress blog, I have posted an entry on migrating a Linux RHEL host from EMC to XIV.
If that subject interests you, check out my article here:
The XIV 10.2.4 release notes report performance improvements that are worth investigating. Two of the reported improvements listed are:
I visited a client running 10.2.4 to see if these improvements could be detected in the XIV performance statistics. In this client's case, the upgrade occurred on Feb 14. First up, I wanted to show that in the period I am examining there was no major variation in write IO. In other words, I wanted to confirm that the client performed the same level of IO before and after the code load.
Having confirmed that the write IOPS did not vary over the period in question, did the latency change? Here we have some good news. Firstly the latency for Write Hits improved (slightly). A write hit is a write into a 1MB partition that already has some data in cache. It is faster than a write miss because some of the address allocation work has already been done. Write hits and misses both hit cache as I explained here. You can see a change on Feb 14 (when the code was updated):
I then looked at the latency for write misses. Again the latency dropped. This suggests that cache operations in general are being handled faster.
I then started thinking.... are we getting more write cache hits? The answer was YES! This is curious because the client normally does not have much control over where they actually write data... Clearly the XIV firmware is managing the write cache in a more efficient manner. This is good not only because write hits normally have lower latency than write misses, but also because a write hit can save us destaging a block of data to disk. This is because a write hit could involve over-writing data that had not yet been destaged to disk. So two writes to the same LBA would only result in one write to backend disk.
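To make the destaging point concrete, here is a toy model (my own illustration, not XIV's actual implementation) of a write cache where a second write to the same LBA overwrites dirty data before it reaches disk, so two host writes cost only one backend write:

```python
# Toy write cache: a write hit overwrites data still waiting to be
# destaged, so repeated writes to one LBA destage only once.

class WriteCache:
    def __init__(self):
        self.dirty = {}       # lba -> latest data not yet destaged
        self.hits = 0
        self.misses = 0
        self.destaged = 0     # count of backend disk writes

    def write(self, lba, data):
        if lba in self.dirty:
            self.hits += 1    # write hit: dirty data already in cache
        else:
            self.misses += 1  # write miss: allocation work still needed
        self.dirty[lba] = data

    def destage_all(self):
        self.destaged += len(self.dirty)
        self.dirty.clear()

cache = WriteCache()
cache.write(100, "a")   # miss
cache.write(100, "b")   # hit: overwrites "a" before it reached disk
cache.destage_all()     # one backend write for two host writes
```

Two writes, one destage: that is the saving a higher write-hit ratio buys you.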
So in conclusion, the upgrade to 10.2.4 code resulted in a measurable improvement in write IO performance at a real world client. Nice!
It's easy to make a fool of yourself.
It's not hard to do.
So what was the assumption this time?
One of our business partners sold a client two new XIVs and 4 new IBM SAN40Bs (40 port fibre channel switches). So far so good. When you order the SAN switches you have a choice of ordering 4 Gbps capable SFPs (SFPs are the fibre optic sub assemblies that you plug your cables into) or 8 Gbps capable SFPs. There was a time when the 8 Gbps SFPs were much more expensive than the 4 Gbps, but today they are about 75% of the price of the 4 Gbps. So it makes sense to buy the faster SFPs. But you need to ensure that all the HBAs at the client site are at least 2 Gbps capable, because 8 Gbps SFPs are tri-rate and can only go at 2, 4 or 8 Gbps. Sure enough an assumption was made that this was not an issue... but it was. The client has WDMs that run at 1 Gbps and upgrading those WDMs would be a significant expense.
So I got to thinking... could I force the SFP to 1 Gbps?
If I display the 8 Gbps SFP, it reports it is capable of 200, 400 and 800 MBps, which is code for 2, 4 and 8 Gbps.
But maybe I could force it to 1 Gbps?
So what to do? We could not just move the old SFPs into the new switch, as the new 8 Gbps capable Brocade switches only accept Brocade approved SFPs. The only solution was to make it right and swap four of the Brocade 8 Gbps SFPs with Brocade 4 Gbps SFPs. Fortunately as we needed only four, I was able to swap them with little expense or hassle (I contacted our local Brocade rep who happily helped us out).
The end point was a happy client and a lesson re-learnt..... 1 into 8 does not go.
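The lesson above can be sketched as a simple rate negotiation: an FC link settles on the highest speed both ends support, and a tri-rate 8 Gbps SFP only offers 2, 4 and 8 Gbps, so against 1 Gbps-only gear there is no common rate at all (this is my own illustrative sketch, not Brocade code):

```python
# Why "1 into 8 does not go": the link comes up at the highest rate
# supported by BOTH ends; no common rate means no link.

def negotiate(sfp_rates, partner_rates):
    """Return the highest common link speed in Gbps, or None if the
    two ends share no supported rate (the link will not come up)."""
    common = set(sfp_rates) & set(partner_rates)
    return max(common) if common else None

print(negotiate({2, 4, 8}, {1, 2, 4}))  # 4 Gbps SFP vs 4 Gbps HBA -> 4
print(negotiate({2, 4, 8}, {1}))        # 8 Gbps SFP vs 1 Gbps WDM -> None
```

The 4 Gbps SFPs work because they are tri-rate at 1, 2 and 4 Gbps, so they still share a rate with the 1 Gbps WDM.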
I am curious though... is there much 1 Gbps gear still out there? Is this a common issue?
Over on Wordpress, I have just published an article on SNMP and XIV.
Given some funky formatting, I have decided not to paste it into this blog.
If you're interested in monitoring an XIV with SNMP, please head over to here:
A friend of mine sent me a direct message on Twitter that pointed out something interesting.... A blog post I had written on SDDPCM had been copied word for word by another site. A little bit of googling revealed that in fact it had been picked up by two sites. Here is the original, and the copies are here and here.
What bothered me was not that the content was copied without any obvious (well, obvious to me) attempt to acknowledge the original author. In fact, in both cases the copied text included a link to another blog entry I had written, so an alert reader would pick up that the content had come from someone else (still... a little acknowledgement doesn't hurt). To begin with, I was also not concerned with the re-use of my work. After all, I am writing this to be helpful, so if you think something I have written is helpful... and you spread the word... that work is even more helpful (but hey, that's what Twitter is for... right?). But then it occurred to me... by copying the article without a link back to the original source (mine), if I find a mistake and update my blog post, those corrections will not flow to the clones. So this potentially undermines my efforts to be helpful.
I also noticed that in each case, the clones had advertisements by Google. Does this mean Google and/or these other bloggers are actually making money from copying my content? Hmmm... acknowledgement is one thing... a cheque is even nicer.
Still... message to Anthony... if you push content into the public domain you have to be prepared for this.
After tweeting about this, I did learn it is possible to insert sentences into your content that you could then monitor with Google Alerts. I don't plan to do this myself, but it's certainly worth being aware of. This of course also presumes the cloners don't detect these sentences and delete them.
I am very curious to know of similar experiences. Has this happened to you? Did you do anything about it? Were you happy with the result?
When IBM first released the Storwize V7000, we announced it was capable of supporting ten enclosures, but would on initial release support only five. We stated that this restriction would be lifted in Q1.
The good news is that this restriction is indeed now lifted by the release of Storwize V7000 software version 22.214.171.124, which is available for download from here:
You should also check out this link:
This new level also contains an additional enhancement which I think users will really like, called Critical Fix Notification. This new function enables IBM to warn Storwize V7000 and SVC users if we discover a critical issue in the level of code that they are using. The system warns users when they log on to the GUI using an internet-connected web browser. It works only if the browser being used to connect to the Storwize V7000 or SVC also has access to the Internet. (The Storwize V7000 and SVC systems themselves do not need to be connected to the Internet.) The function cannot be disabled (which is a good thing), and each time we display a warning it must be acknowledged (with the option to not warn the user again for that issue).
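The warn-acknowledge-suppress behaviour described above can be sketched like this (purely my own illustration of the logic as described; the names and the `CriticalFixNotifier` class are hypothetical, not IBM's implementation):

```python
# Sketch of the Critical Fix Notification flow: warn on GUI login if the
# browser has internet access, until the user suppresses that issue.

class CriticalFixNotifier:
    def __init__(self, code_level, advisories):
        # advisories: dict mapping a code level -> list of issue ids
        self.code_level = code_level
        self.advisories = advisories
        self.suppressed = set()  # issues the user chose not to see again

    def warnings_on_login(self, browser_has_internet):
        # The check runs in the user's browser, so the browser needs
        # internet access; the storage system itself does not.
        if not browser_has_internet:
            return []
        issues = self.advisories.get(self.code_level, [])
        return [i for i in issues if i not in self.suppressed]

    def acknowledge(self, issue, dont_warn_again=False):
        if dont_warn_again:
            self.suppressed.add(issue)

notifier = CriticalFixNotifier("level-A", {"level-A": ["FIX-1"]})
```

A browser without internet access sees nothing; once an issue is acknowledged with "don't warn again", it stops appearing for that issue only.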
As I blogged previously, VAAI support for XIV has two dependencies:
Both of these things are very close to release....
And what is the effect?
VAAI dramatically reduces the amount of work that the vSphere 4.1 server needs to do to get things done.
The XIV implementation of VAAI provides the three fundamentals of VAAI: Full Copy (hardware-accelerated copy), Block Zeroing (hardware-accelerated zeroing) and Hardware Assisted Locking.
To test VAAI with XIV, I did two things: a VMDK migration (a Storage vMotion) and VMDK cloning. I used the vSphere client to time how long each operation took and XIV Top to see how much IO was being generated by the vSphere server. Now please understand, these numbers and timings are based on a lab environment. The speed and peaks will vary from client to client and install to install.
Firstly the migration: I performed a migration of a VMDK from one data store to another. The migration without VAAI took 42 seconds as can be seen from the screen capture below:
The migration generated a peak of 135 MBps of traffic being written to the target volume as can be seen from XIV Top:
I then turned on VAAI and did the same migration. I won't document the process to install the VAAI driver, as it will be different when the certified version is released. However, after the driver is installed, I could turn VAAI on and off by toggling these settings from 0 to 1 and back again:
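For reference, these are the standard vSphere 4.1 advanced settings that control the VAAI primitives, toggled here from the ESX service console (hedged: the certified XIV driver setup may differ, and your paths should be confirmed against your ESX build):

```shell
# Show the current value of the Full Copy setting (1 = enabled)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove

# Disable and re-enable Full Copy (used by Storage vMotion and cloning)
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove

# The other two VAAI primitives
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit    # Block Zeroing
esxcfg-advcfg -s 1 /VMFS3/HardwareAcceleratedLocking     # Hardware Assisted Locking
```

No reboot is needed; the change takes effect for the next operation, which is what makes the on/off comparison below easy to run.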
I then did another VMDK migration with VAAI enabled. This time the migration took 19 seconds (as opposed to 42 seconds), so an immediate improvement occurred.
When I checked XIV Top, there was no IO at all! In other words the vMotion was done with no apparent load on the vSphere HBAs or the SAN. I feel silly showing this screen capture, but this is what I saw.... nothing.
I then did a VMDK clone. The data store was on XIV and VAAI was now enabled. There was no other IO running on the ESX server. The clone took 40 seconds (as reported by vCenter):
The clone generated a peak of 2 MBps for around 20 seconds (as reported by XIV Top). Almost no fibre channel IO was thus generated by the clone.
As I have blogged before, I will be repeating this whole exercise once I have real live customers running this configuration, so expect further updates.
Things have been pretty revolting lately, and I am not talking about Tunisia or Egypt or Libya (though actually they could equally apply to my story).
What I am talking about is mother nature, and she is pretty angry with us right now.
In the last few months Australia and New Zealand have seen massive floods in Queensland, Victoria and Western Australia, destructive cyclones hitting Queensland and Western Australia, ferocious bush fires in Western Australia and most recently, a massive earthquake in New Zealand.
The personal loss of life and of property has been shocking and tragic. Each of these events has reminded me how quickly everything we hold dear can be taken away in an instant... by an event over which you have no control.
Which leads me to storage clouds....
If something can be stored electronically, then it can be stored in a cloud. A cloud that is hopefully well backed up, and far away from your own personal location. And no, this is not an advertisement... it's a suggestion....
Given the events of the last few months, I have started using a storage cloud provider to protect my photos, my music and my insurance information.
I looked for cloud storage providers who:
I considered the following uses:
Let me give you an example of a document I would never want to have to replace....
My son is practicing to get his driver's license. In Victoria you need 120 hours of driving experience recorded in a log book. This log book needs to be filled in every time he drives the car. If the log book is lost... those 120 hours would need to be driven again. I cannot tell you how hard it is to find 120 hours of driving opportunities (and I heartily support the 120 hours scheme!). Even if you did feel inclined to create fake entries to recreate the book (which is illegal), frankly creating 120 hours of fake driving log entries would be very hard work. To make things worse... where am I storing this booklet? In the car of course (which is the most convenient place to store it). So what happens if the car is stolen? There goes the logbook.... So the plan I work on is that every time a page is filled up, I scan that page as an image stored on my laptop. The image goes into a folder that is automatically backed up to the cloud. Yes, it does depend on my being diligent, but the actual process of copying the file somewhere else is automatic. Now I have 3 copies... the original, the scanned image on my laptop and a third (automatically created) copy way off in the cloud somewhere.
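The "automatically backed up" step can be as simple as mirroring new scans into a folder that a cloud sync client (Dropbox, say) watches. A minimal sketch, where both paths and the function name are placeholders for wherever your scans and sync folder actually live:

```python
# Mirror any new files from a scans folder into a cloud-synced folder.
# Files already present in the synced folder are skipped.
import shutil
from pathlib import Path

def mirror_new_files(scans_dir, synced_dir):
    """Copy files from scans_dir to synced_dir unless already present.
    Returns the names of the files copied on this run."""
    scans, synced = Path(scans_dir), Path(synced_dir)
    synced.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(scans.iterdir()):
        if f.is_file() and not (synced / f.name).exists():
            shutil.copy2(f, synced / f.name)  # copy2 keeps timestamps
            copied.append(f.name)
    return copied
```

Run it after each scanning session (or from a scheduled task) and the cloud client does the rest; the diligence required shrinks to "remember to scan the page".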
As for personal recommendations:
1) Get 2 GB free on Dropbox. This is a great point solution and a great way to dip your toes in.
Have there been issues with storage cloud providers? A quick search reveals stories like: Flickr deleted a user's data and Carbonite lost data due to hardware failure. Still... I have no plans to store my ONLY copy of data in the cloud. For me it's a backup medium... not a primary storage location.
Are you convinced?
Oh... and my son? He is on 89 driving hours... 31 to go....
Shock horror, I am starting to question the value of my iPhone.....
Actually... I am pondering whether the iPhone is the ideal social media tool I thought it was.
So what's the problem?
The problem is that while using the iPhone for this purpose I rarely interact, I never create content, and I rarely contribute to content. By this I mean I almost never comment on blogs and I hardly ever tweet. This is quite simply because I hate using the keyboard. More and more I leave both Twitter and feed reading for when I have time to actually interact via a real keyboard.
So I am curious... does anyone else feel the same way? Are there better devices out there for interaction? Or is it just my fat fingers and slow brain getting in the way?