Why did you do it like that?
orbist
This morning one of my colleagues in my office bay started a discussion about the fine line between divulging useful technical information on this blog-o-sphere and maintaining the confidentiality of technical information, product plans and general intellectual property of IBM. I explained to him that I think of this as I would a customer forum. That is, when I wonder if I should be talking about something, I consider whether I'd discuss it freely when visiting customers, or with visiting customers, who are not under NDA. If the answer is yes, then I consider it fair game. My colleague commented that in his previous department there was very little customer interaction, and that it was something he wanted to address in his new role.
This moved the discussion on to how extremely useful I find it to understand how and why customers are interested in SVC and how they plan to use it. Sometimes this opens up a whole new area of thought, or an innovative way in which virtualization could be used that I/we had never even considered.
Like Chuck, I enjoy this kind of meeting, where you get a chance to discuss the technical aspects of the product, how we solved common problems and so on - especially with like-minded technical folks that have a deep understanding of their business needs and environment.
So here's a starter for ten, with a couple of the most commonly asked questions.
Why did you choose a commodity Intel-based platform over either custom hardware or a pSeries-type box?
There are several very good reasons for this decision, most of which have since proved it to be a wise one.
SVC only has 8 ports per IO group, 32 in an 8-node cluster - my storage controller has many more, and I've been told I need that many?
This all boils down to how well you make use of what you have. Dare I bring up the recent HDS USP-V benchmark? One point that hasn't been covered elsewhere is the number of ports used. In this case HDS used 64 ports to achieve their ~200K result. Compare that to SVC's ~270K with half the number of ports and you see my point.
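To make the port comparison concrete, here's a back-of-envelope calculation using the approximate figures above (~200K over 64 ports for the USP-V, ~270K over half as many for SVC). The numbers are rounded, so treat the per-port results as illustrative only:

```python
# Back-of-envelope IOPS-per-port comparison from the rounded
# benchmark figures quoted above - illustrative, not official.

def iops_per_port(total_iops: float, ports: int) -> float:
    """Average IOPS each front-end port sustained in the benchmark."""
    return total_iops / ports

usp_v = iops_per_port(200_000, 64)   # HDS USP-V: ~200K over 64 ports
svc   = iops_per_port(270_000, 32)   # 8-node SVC: ~270K over 32 ports

print(f"USP-V: {usp_v:.0f} IOPS/port")                    # 3125
print(f"SVC:   {svc:.0f} IOPS/port")                      # 8438
print(f"SVC does {svc / usp_v:.1f}x the work per port")   # 2.7x
```

Even allowing for the roughness of the inputs, the per-port gap is the point: it's not how many ports you have, it's how hard you can drive each one.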
Because we have the back-end bus capability to drive the four ports on each node to their maximum, SVC can drive its ports to near saturation (not that this would generally be recommended unless your fabric can cope), whereas other products need many more ports to cover for the lack of bandwidth behind them.
Many products also require more ports to enable host fan-out - for example, I understand that DMX can only share ports between multiple hosts of the same type (folks, please correct me if I am wrong). As SVC only talks to the switch (so you get the fan-out from the switch ports) and caters for shared access (we only recommend you zone like hosts together, for host non-interop reasons), you don't need the extra ports for fan-out or limitation reasons.
As an example, the table below shows the maximum theoretical bandwidth versus what I have measured under various test conditions (all random workloads, using a 2-node SVC):
Notes:
# These are provided as an example of max rates I have measured under lab stress conditions.
* Half the read bandwidth, due to mirroring of write cache data.
** I have included the miss measurements; however, these are purely dependent on the back-end read and write data rate capability.
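A quick sketch of where the theoretical ceiling comes from, and why writes sit at half the read figure (each write is mirrored to the partner node's cache, so the same ports carry the data twice). The per-port link rate here is my assumption - roughly 400 MB/s usable per 4 Gbit/s FC port - not a figure from the table, so adjust for your hardware generation:

```python
# Rough theoretical bandwidth ceiling for a 2-node SVC cluster.
# ASSUMPTION: 4 Gbit/s FC ports at ~400 MB/s usable each; the post
# does not state link rates, so these numbers are illustrative.

PORT_MBPS = 400      # usable MB/s per port (assumed 4 Gb FC)
PORTS = 2 * 4        # 2 nodes x 4 ports each

read_max = PORTS * PORT_MBPS   # reads flow straight through the ports
write_max = read_max / 2       # each write is also mirrored to the
                               # partner node's write cache, so the
                               # same ports move the data twice

print(f"theoretical read max:  {read_max} MB/s")    # 3200 MB/s
print(f"theoretical write max: {write_max} MB/s")   # 1600 MB/s
```

The halving for writes is exactly the "* Half the read bandwidth" footnote above; the cache mirror traffic is the price of surviving a node failure without losing uncommitted writes.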
I'll cover some more interesting and FAQ-type questions in later posts; for now, enjoy your weekend.