2008 - A retrospective
orbist
It seems a common thread in technology blogs to post some predictions for the coming year. However, as I'm not an analyst and don't profess to understand the entire storage industry, I thought I'd keep to a brief history of 2008 as I saw it - obviously with an IBM and SVC slant on things. This isn't just a vendor marketing affair however, and you'll notice a few things missing - so here's where we are, as I see it, today.
Caught in a Vapor-trail
I see a lot of folks off in the cloud, and yes, it all looks very interesting and conceptually *nice*. However, consider how hard it has been to break with each storage concept as the next arrived: SANs were a leap from DAS, virtualization was the next leap (one now being powered along by server-side virtualization), but the cloud... isn't that just services wrapped up in a fancy-sounding term, and potentially bloat-ware? IBM already offers an MSS (Managed Storage Service) where you let us handle all the day-to-day admin and just provision the storage you need. Who manages the Cloud? Until it's autonomic itself, the management *could* be a nightmare. It will come, and it will get better, but as Martin has pointed out from an end user's perspective (backed up by many, many vocal end users) and not refuted by any vendor - today's storage management leaves a lot to be desired. If we 'vaporize' all our bits and bytes into the cloud without the management to back it up (and free management at that), then we are doomed. You could refer to this as vapor-ware - and many a pundit did after EMC's Atmos 're-announcement'. One other thing that Chris M Evans got me thinking about with the Cloud is security... I won't steal his 'thunder' as I'm sure he has a post coming on that, but physical security is as much a requirement as logical security...
As I say, these are just my thoughts from my little corner of the storage world, but to me, people are struggling with more fundamental issues. I need to store, I want to store, I am storing... where does the division lie? Where can the software intervene and determine: yes, if I don't store this I will be in breach of my requirements; if I do store it, but don't look at it for a while, I can discard it... All of this is much more interesting, and to enable such intelligence you need some kind of software in the middle of it all that can make decisions. I see products, or appliances like SVC, as the first step in this world. No need to install hundreds of agents on your multitude of servers - just the one place, in the SAN (or LAN), and let the appliance decide. These aren't the only part of the picture, but they do make a very powerful statement, as our many happy SVC customers will agree.
OK, so we are a while off that yet, but 2008 showed the promise of such an appliance-based approach. Back in June IBM released version 4.3.0 of the SVC software (the 12th software release, in fact) that still ran on the same hardware we'd released 5 years before - not to mention the 5 hardware revamps we've done over the years, one a year - not many storage products can match that pace, in hardware or software terms. This again backs the architectural thinking behind a commodity appliance with advanced function engines.
4.3.0 brought Space-Efficient Virtual Disks - aka Thin Provisioning: the ability to allocate storage only as it is actually written, at a fine granularity (32K to 256K grains). Despite what Hu would lead you to believe about the reasoning behind the USP(V&VM) lazy-man chubby-provisioning approach, the SPC-1 benchmark we published - the only thin-provisioned vs fully allocated benchmark out there, other than NetApp's - showed that you need not suffer a performance impact when using true thin provisioning. I made the reference to 3Par being the grand-daddy of Thin, yet they have not submitted such a result...
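To make the grain-based idea concrete, here's a toy sketch of a thin-provisioned volume in Python. Everything here - the `ThinVolume` class, its API, the choice of the 32K grain - is an illustrative assumption of the general technique, not SVC's actual implementation:

```python
# Toy model of thin provisioning: physical space is allocated in fixed-size
# grains only when a grain is first written. Names and grain size are
# illustrative assumptions, not SVC internals.

GRAIN = 32 * 1024  # 32K, the finest granularity mentioned above


class ThinVolume:
    def __init__(self, virtual_size):
        self.virtual_size = virtual_size
        self.grains = {}  # grain index -> bytearray, created on first write

    def write(self, offset, data):
        for i, byte in enumerate(data):
            grain, off = divmod(offset + i, GRAIN)
            backing = self.grains.setdefault(grain, bytearray(GRAIN))
            backing[off] = byte

    def read(self, offset, length):
        # Unallocated grains read back as zeros - they consume no space.
        out = bytearray(length)
        for i in range(length):
            grain, off = divmod(offset + i, GRAIN)
            if grain in self.grains:
                out[i] = self.grains[grain][off]
        return bytes(out)

    def allocated_bytes(self):
        return len(self.grains) * GRAIN


# A 10 GiB virtual volume that has seen one small write backs one grain.
vol = ThinVolume(10 * 1024 ** 3)
vol.write(5 * 1024 ** 2, b"hello")
```

A single 5-byte write to this 10 GiB virtual volume consumes exactly one 32K grain, which is the whole point: capacity is consumed by what is written, not by what is provisioned.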
4.3.0 also brought the ability to run Virtual Disk Mirroring, in a kind of LVM mirroring style. Here though you could mirror across multiple storage controllers, without any impact on the host server performance. This was a key feature being requested by numerous enterprise customers.
Meanwhile, performance testing all of these functions, making some strides in the basecode that could allow SEV to perform so well, and running a research project kept the early part of my year seriously busy.
EMC would lead you to believe that they invented the "EFD", as they like to call them - "Enterprise Flash Drive" - we like to call them ZeusIOPS SSDs from STEC Inc. Like EMC, HDS, Sun etc., we'd been working with STEC; at the time they were the only form-factor SSD vendor that could provide FC-AL or SATA drives able to survive in the enterprise storage market. At 30x the $/GB, however, the performance gains do, surprisingly, still stack up - but only if you can provide ALL that performance to the end user. Yes, vendors have taken the easy route and brought them to market in existing disk-based controllers. I heard some interesting information that it was partly comments I made back in 2007 that led EMC to pay a lot of money to sign an exclusive deal with STEC - but the fact that EMC jumped actually caused all the other vendors to get some serious internal traction; the same happened in IBM with a few projects that had until then only been simmering... what goes around comes around, I guess.
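The "30x the $/GB but it still stacks up" point is easiest to see as cost per IOP rather than cost per GB. A back-of-envelope sketch - every number below is a made-up illustration, not a real 2008 price or spec:

```python
import math

# Purely hypothetical figures, for illustration only.
HDD_IOPS, HDD_PRICE = 200, 300        # 15K spindle: ~200 IOPs, $300/drive
SSD_IOPS, SSD_PRICE = 20_000, 9_000   # enterprise SSD: ~100x IOPs, ~30x price


def cost_to_hit(target_iops, iops_per_drive, price_per_drive):
    """Drives are bought whole, so round up to a whole number of drives."""
    drives = math.ceil(target_iops / iops_per_drive)
    return drives, drives * price_per_drive


hdd_drives, hdd_cost = cost_to_hit(100_000, HDD_IOPS, HDD_PRICE)
ssd_drives, ssd_cost = cost_to_hit(100_000, SSD_IOPS, SSD_PRICE)
```

With these invented figures, 100K IOPs takes 500 spindles of spinning disk but only 5 SSDs, so the SSDs win on $/IOP while losing badly on $/GB - provided, as above, the controller can actually deliver all of that performance to the end user.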
Anyway, one of the nice things about working in the arena of SVC performance is that we've always been up at the front when it comes to general throughput - as far as single-system performance is concerned. However, even I was worried. These SSD drives (on reads at least) would easily saturate the system. OK, so nobody does 100% reads, but even scaling back, it doesn't take many of them to saturate today's storage controllers. Given that a DS8K has higher performance than an EMC DMX, and knowing how many of these devices it takes to saturate a single DS8K, it's not surprising how few of them can be added to a DMX quadrant. What was surprising is how quickly the response time tails off, according to Mr Burke's own presentation.
Anyway, there are two cases: response time and throughput. Granted, as long as you have a decent system, response time will be improved (landing somewhere between cache hits and disk reads/writes), but again, as you increase the number of operations, the response time will suffer as the system gets too busy... but not if you have enough scale-out at your disposal.
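That "too busy" knee can be sketched with a crude utilisation model - a simple M/M/1-style approximation with invented capacities, not a measurement of any real box. Response time blows up as a single node approaches saturation, while spreading the same load across a cluster keeps every node comfortably below the knee:

```python
# Crude response-time model: R = S / (1 - rho), where S is the lightly
# loaded service time and rho is utilisation. Every number is hypothetical.

def response_time_us(offered_iops, node_capacity_iops, nodes, service_us=100):
    per_node = offered_iops / nodes          # load each node sees
    rho = per_node / node_capacity_iops      # per-node utilisation
    if rho >= 1:
        return float("inf")                  # saturated: queue grows without bound
    return service_us / (1 - rho)


# The same 900K IOPs offered load, one big box vs an eight-node cluster:
one_big_box = response_time_us(900_000, 1_000_000, nodes=1)  # rho = 0.90
eight_nodes = response_time_us(900_000, 250_000, nodes=8)    # rho = 0.45
```

At 90% utilisation the single box multiplies its base service time tenfold; the eight-node cluster, at 45% per-node utilisation, barely notices - which is the scale-out argument in one line of arithmetic.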
Quicksilver, as it was externally known, created quite a storm. It was a technology statement (not a science experiment, as our EMC blogger friends called it) - more of an SoD (Statement of Direction) - but it wasn't an end product. It was really about showing that a scale-out clustered system has much more relevance moving forward than a single-box monolith. I'd been testing various SSD devices during the latter half of 2007 and early part of 2008 when our executives asked if we could demonstrate and benchmark a single system capable of over 1 million IOPs to "disk". The gauntlet was thrown down and we set about meeting this goal. You could of course take various SSD vendors' claims at face value and suggest that this would be easy. However, set up a configuration that can generate 1M mixed reads and writes, wire it all up and - oh yeah - attach it to a SAN, and you may have to go out and buy a bunch more SSD than the numbers would suggest. Anyway, history is history, and we partnered with FusionIO to demonstrate what we wanted to show: scale out vs scale up. Scale out wins. Oh, and before Mr Burke chirps in - response time: yes, we maintained that too, at under 700 microseconds, while sustaining the 1 million IOPs.
I wouldn't even like to do the maths to work out how many full-frame monoliths we'd need to achieve the same performance - and just think of the power and cooling requirements.
Towards the end of the year I found myself spending more and more time with customers. The Quicksilver release got lots of people interested, and I spent a lot of time in briefings, discussing customers' needs and where and how such performance could really make a difference to their everyday workloads. Meanwhile the SVC team were working on a cost-reduced version of the SVC appliance, known as the Entry Edition (yes, a lot of confusion over EE, as a lot of the time it refers to Enterprise Edition...). Anyway, many customers had requested a lower entry cost to virtualization, and the SVC EE nodes provide just that, with the ability to cluster and scale as a business grows.
Already two weeks into 2009 and this is my first post, so Happy New Year to all my readers - I did start this post last year, but have been coming back to it as time permits, and some folks got me started with Twitter. All too easy to waste away time with fellow twits ;)
Here's to another bumper year in 2009, and I'm sure it's going to be fun.
Full SPC-1 result available at: http