I have another movie recommendation - but it's not one that I've seen (yet). My family went to see the new Disney movie Frozen and came back thoroughly enchanted. To me that means more than a dozen reviews from stuffy critics. In terms of audience demographics, my family ranges in age from 6 to, uh, I think late 20s for my wife (or very close to that).
Now the title of that story, and much of its setting, have really hit home for me this winter. We've had massive amounts of snow, and went through the tribulation of an ice storm (which knocked power out for about a day and a half). Even a few hours without electricity will remind you of the importance of infrastructure. Here's a picture of my son Zeke clearing our infrastructure - I mean driveway - just this week.
Speaking of the importance of infrastructure, I've been thinking a lot about the data transfer infrastructure for large disk systems (and by disk, I mean both traditional spinning disks and solid state devices, regardless of form factor). John Elliot, the lead hardware designer on the DS8000, came to Maryland to speak to some customers last month. He was asked whether we plan to support InfiniBand on the DS8000. The answer is no - not because it's a bad interconnect - we use it internally in our XIV systems, for one thing, and in the coupling links for our zEC12 mainframes - but there just hasn't been any demand for it in the market space that the DS8000 serves.
So what is the dominant interconnect in the storage space? No surprises here - it's still fibre channel. I was asked in the last week what my outlook is for fibre channel connectivity, and whether a customer should switch to FCoE. My opinion is that fibre channel will continue to dominate through the end of this decade. Today the majority of customers are on 8Gb (yes, I know 16Gb has been available for a couple of years now, but adoption across the whole datacenter is still underway). 16Gb has a great advantage in efficiency (64b/66b encoding is about 97% efficient vs the 80% of the 8b/10b used in 2/4/8 Gb fibre channel), but its time has not come yet, and won't for another year or two.
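To see what that encoding difference means in practice, here's a quick back-of-the-envelope sketch. The line rates below (8.5 Gbaud for 8Gb FC, 14.025 Gbaud for 16Gb FC) are the nominal figures from the fibre channel roadmap - the "8Gb"/"16Gb" marketing names don't match the line rates exactly - so treat this as an illustration of the encoding overhead, not a spec citation:

```python
def effective_gbps(line_rate_gbaud, payload_bits, coded_bits):
    """Usable data rate after subtracting encoding overhead."""
    return line_rate_gbaud * payload_bits / coded_bits

# 8Gb FC: 8.5 Gbaud line rate, 8b/10b encoding (80% efficient)
fc8 = effective_gbps(8.5, 8, 10)      # 6.8 Gb/s of payload

# 16Gb FC: 14.025 Gbaud line rate, 64b/66b encoding (~97% efficient)
fc16 = effective_gbps(14.025, 64, 66) # 13.6 Gb/s of payload

print(f"8Gb FC  effective: {fc8:.2f} Gb/s")
print(f"16Gb FC effective: {fc16:.2f} Gb/s")
print(f"8b/10b efficiency: {8/10:.0%}, 64b/66b efficiency: {64/66:.1%}")
```

The takeaway: 16Gb FC delivers twice the payload bandwidth of 8Gb FC while running at a line rate that is only about 65% higher, because so much less of the link is spent on encoding overhead.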
So what's the deal with FCoE? IBM has a number of storage systems that support it (see the Redbook "Storage and Network Convergence Using FCoE and iSCSI"). I'm not saying this is a bad protocol either - an interesting study published as a Redpaper shows that in many cases 10Gb iSCSI can have equal or even better performance than 8Gb fibre channel.
The point is, I see no reason to rip out a perfectly good fibre channel implementation in order to put in ethernet, whether iSCSI or FCoE. Why - because a vendor says "it's cool"? I think you have better things to do with your money. People running datacenters have built up a huge knowledge base in running fibre channel infrastructures. Those infrastructures are fast and reliable - so why change?
Oh, and about that DS8870 with an InfiniBand connection: my guess is we'll be glad to build it. Just show me the order for DS8870s, quantity 50 or higher, and we'll get right on it.