Here we go again
orbist
So I almost 'did a Tony' just now, spraying my coffee over my screen and keyboard. The reason for this: the recent post from Hu over at HDS. (It's been a while since I even bothered to read his posts, and even longer since I commented on them...)
I think I can see why Hu no longer 'interacts' with his readers: it's because he makes up so much nonsense to try and paint the USP as the only way to do virtualization that he would find it hard to justify or back up his statements when questioned. Why does he keep doing this? Does he feel threatened? But the sheer number of untruths in his post just had to be commented on. If you want a one-sided and totally biased view of virtualization, go and read his post. It's a cracker for the amusement factor. The reason I always pick up on this is that I don't see why he can't admit that both places have their merits - that is, in the SAN and in the controller. The SAN is more difficult to do, but we've already done that - and haven't they basically done this too with the USP-VM? So is Hu saying the USP-VM doesn't work and you need USP-V? That's how I'd read it, but he's just plain wrong in so many ways.
PS. Before I start quoting him and giving you my take, rest assured I have explained many times why SVC and USP-V are very similar. There are more benefits to doing it in the SAN than in the controller - but that's mainly due to the artificial limitations of USP-V today (like upgrades etc). I'd agree that the intelligent switch model seems to have died; we tried it, and the cost was x2 and the performance x0.5 - so the appliance was a much better deal. EMC are still sticking with Invista, but not many customers are. Yet again: if you are using Invista in production, please comment below, as I'd love to hear what you think of it! Anyone?
Hu's comments in green...
The main reason SAN based storage virtualization did not take off was that it did not fulfill the expectations for storage virtualization. While it did provide a degree of volume management, it did not mask complexity and it did not aggregate storage services to enhance lower level storage, and increase utilization.
What planet is he on? SVC masks the complexity of having numerous different storage controllers presented to your hosts, conflicting multi-pathing drivers, and so on. SVC's primary function is to aggregate storage, and as per comments on this very blog from our users, the beauty is that it enhances 'low-cost RAID controllers'. Most of our customers are finding that they are meeting our claims of increased utilisation, from around 20% before SVC to around 75% after. So everything he says here is wrong, and there is evidence to back that up.
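To see what that utilisation jump means in raw capacity terms, here's a quick sketch. The 20% and 75% figures are from the paragraph above; the 60 TB working set is purely an illustrative number I've picked, not anything from the post:

```python
# How much raw capacity you need to hold a given amount of data
# at 20% utilisation (pre-SVC) versus 75% (post-SVC).
data_tb = 60                     # illustrative working set, in TB

raw_before = data_tb / 0.20      # raw TB needed at 20% utilisation
raw_after = data_tb / 0.75      # raw TB needed at 75% utilisation

print(f"Raw capacity at 20%: {raw_before:.0f} TB")  # 300 TB
print(f"Raw capacity at 75%: {raw_after:.0f} TB")   # 80 TB
```

Same data, well under a third of the spindles - that's where the utilisation claim bites.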
It adds latency, especially if you are doing a copy or move of a volume, does not scale, and does not work well with direct attach storage, virtual servers, mainframes, and thin provisioning.
So we do add around 50µs of latency in the read-miss case, but compared with the 10ms of latency from disk, this is nothing. Any kind of virtualization layer will add some number of microseconds of latency - I'm sure USP does too; it has to go through the internal switch to get to the external storage, so that's through their box too. If you are doing any kind of FlashCopy (based on copy-on-write) you increase latency; this is not just an artifact of SAN virtualization, it's fundamental to all storage controllers too. Moving data around has no direct impact on latency at all. The move is distinct from any host I/O operations. The only time it may impact things is if the underlying storage is already at its limit with the host I/O it's handling - and the same is true for any storage controller too. Again, nothing more than FUD there.
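To put that in perspective, here's the arithmetic using the two figures above (50µs added by the virtualization layer, ~10ms for a disk read miss):

```python
# Back-of-the-envelope: relative latency overhead of a SAN
# virtualization layer on a read miss served from spinning disk.
svc_added_latency_us = 50        # ~50 microseconds added by the SVC layer
disk_latency_us = 10_000         # ~10 ms typical disk read-miss latency

overhead_pct = svc_added_latency_us / disk_latency_us * 100
print(f"Added latency: {overhead_pct:.1f}% of the disk service time")
# Prints: Added latency: 0.5% of the disk service time
```

Half a percent - lost in the noise next to a single disk seek.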
Does not scale? Where has he been? He's obviously not reading me, as SVC has perfectly linear scaling as you add more nodes, so the workload scales. We've run larger clusters in the lab; the reason we haven't yet released support for larger clusters is that the node hardware itself is scaling up every year or so. However, I'd throw this one back: it's USP that doesn't scale. Nobody believes the ridiculous 290+ PB claims HDS are making (just look at the 200K IOPs SPC-1 number for internal storage - 200K IOPs over 290+ PB doesn't go very far, does it?!). It's a single box, and once you've reached its limit, you need to buy another, and then manage two boxes. Adding more nodes to an SVC simply adds to the existing cluster and you still have a single entity to manage.
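The arithmetic behind that jab is trivial - spread the published ~200K SPC-1 IOPs over the claimed 290 PB (figures from the paragraph above; the binary PB-to-TB conversion is my assumption):

```python
# Spread the published SPC-1 result over the claimed capacity.
iops = 200_000
capacity_tb = 290 * 1024         # 290 PB expressed in TB (binary PB)

iops_per_tb = iops / capacity_tb
print(f"{iops_per_tb:.2f} IOPS per TB")
# Prints: 0.67 IOPS per TB
```

Less than one I/O per second per terabyte. That's the gap between a marketing capacity number and a performance number.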
Not sure what he means by not working with direct attached storage? Does he mean storage in the hosts? Then again, what has this to do with SAN virtualization? If he means sharing a controller, i.e. part of the controller directly attached to a host and the rest behind SVC, we can do that.
Virtual servers - this is just an unbelievable statement. I think Hu needs to read the VMware Storage Virtualization certification document, where SVC was the first device to receive full VMware certification. (When I last looked we were still the only one - maybe there are more now.)
Mainframes - OK, I'll give him that one: we don't support CKD, only fixed block. But then zSeries already has about 90% storage utilization, with virtualization and copy services all built in, so all it needs is fast, reliable storage - like DS8000.
Finally, thin provisioning - so he hasn't been reading The Register, and most definitely not reading me here either...
So he got one point right out of seven - the rest is pure nonsense. I wouldn't even credit it as FUD; it's a step beyond even that. Oh, and everything he says about controller virtualization - using copy services, advanced functions and data migration - you can do just as easily with SVC, and we don't limit the port throughput on your external controllers like USP does... I credit you all with more intelligence than Hu does: USP-V isn't the only solution to the problem, as Hu would try and lead his readers to believe.