WOW! - The Economics of Storage Virtualization
Virtualization is the core technology enabling both traditional and new era workloads, whether you call the result a cloud, a software defined environment, or just a well-run data center.
But the technology itself isn't nearly as interesting as the business results it delivers to the clients who take advantage of it. I'm going to explain how businesses can use data virtualization to get traditional workload costs under control and simplify the onboarding of new era workloads.
While the world was going through its usual tribulation and transformation over the past three decades, so too was the IT industry. For most clients, the history of virtualization began when VMware shipped its first product back in the late 90s. However, for perspective, it's important to know that the first VMware product didn't arrive until three decades after virtual servers were introduced on IBM System/360 mainframes. That's right, 30 years later. And 25 years elapsed between the first IBM storage virtualization patents and the first vendor-neutral virtualization appliances, led by IBM SAN Volume Controller (SVC). In both cases, adoption rates were slow until vendor-neutral versions were available, and even then, widespread adoption took another 10 years. If the cycle repeats, we should be seeing an explosion of data and storage virtualization adoption, well… right about now. And guess what, that's exactly what's happening!
Right now, here in 2014, we see virtualization at the core of Software Defined Storage and cloud infrastructures.
At a high level, Software Defined Storage means you have a software layer between your applications and physical storage, a storage 'hypervisor' if you will. It's software that abstracts one from the other and radically simplifies the way users and applications access their data and manage their storage devices. In a virtualized, Software Defined Storage environment, data moves freely between physical storage systems just as VMware images move between physical servers. And you no longer have to manage data differently on each brand of storage system.
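To make the abstraction concrete, here's a minimal sketch in Python of what a storage hypervisor does conceptually: applications talk to a virtual volume id, while the hypervisor decides, and can change, which physical array actually holds the data. All the names here (StorageHypervisor, PhysicalArray, migrate) are made up for illustration; real products such as SVC do this at the block level in the data path, not as application code.

```python
# Illustrative sketch only: a toy "storage hypervisor" that maps virtual
# volume ids to physical arrays. All names (StorageHypervisor,
# PhysicalArray, migrate) are hypothetical, not any vendor's API.

class PhysicalArray:
    def __init__(self, name):
        self.name = name
        self.data = {}            # volume id -> bytes stored on this array

class StorageHypervisor:
    def __init__(self, arrays):
        self.arrays = {a.name: a for a in arrays}
        self.mapping = {}         # virtual volume id -> physical array name

    def create_volume(self, vol_id, array_name):
        self.mapping[vol_id] = array_name
        self.arrays[array_name].data[vol_id] = b""

    def write(self, vol_id, payload):
        # Applications address the virtual volume id only.
        self.arrays[self.mapping[vol_id]].data[vol_id] = payload

    def read(self, vol_id):
        return self.arrays[self.mapping[vol_id]].data[vol_id]

    def migrate(self, vol_id, target_name):
        # Data moves between physical systems; the virtual volume id,
        # and therefore the application's view, never changes.
        source = self.arrays[self.mapping[vol_id]]
        self.arrays[target_name].data[vol_id] = source.data.pop(vol_id)
        self.mapping[vol_id] = target_name

hyp = StorageHypervisor([PhysicalArray("vendor_a_array"),
                         PhysicalArray("vendor_b_array")])
hyp.create_volume("vol-001", "vendor_a_array")
hyp.write("vol-001", b"application data")
hyp.migrate("vol-001", "vendor_b_array")   # transparent to the application
assert hyp.read("vol-001") == b"application data"
```

The point of the toy example is simply that migration happens underneath the mapping layer, which is why data placement, tiering and migration stop being application-visible events.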
When you apply a virtualized, Software Defined Storage infrastructure to traditional workloads, you can get costs under control on several fronts.
You may be surprised to know that storage inefficiency wastes up to $11 million per petabyte over 5 years. For a typical large enterprise with a few petabytes of storage and normal data growth, the cost of inefficiency can easily exceed $50 million. That's real money.
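Here's a quick back-of-envelope check of that claim. The $11 million per petabyte figure is the one cited above; the 3 PB starting capacity and 25% annual growth rate are my own illustrative assumptions for a 'typical large enterprise', not numbers from any report.

```python
# Back-of-envelope: 5-year cost of storage inefficiency.
# $11M per PB over 5 years is the figure cited above; the starting
# capacity (3 PB) and 25%/year growth are illustrative assumptions.

waste_per_pb_5yr = 11_000_000        # dollars per petabyte over 5 years
capacity_pb = 3.0                    # assumed starting capacity
growth = 0.25                        # assumed annual growth rate

capacities = [capacity_pb * (1 + growth) ** y for y in range(5)]
average_pb = sum(capacities) / len(capacities)
total_waste = average_pb * waste_per_pb_5yr

print(f"Average capacity over 5 years: {average_pb:.1f} PB")
print(f"Estimated 5-year cost of inefficiency: ${total_waste/1e6:.0f}M")
# -> roughly 4.9 PB on average and ~$54M, consistent with "easily exceed $50M"
```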
Just this month (May 2014), industry analyst ITG published cost/benefit analysis reports for data and storage management, comparing several industry solutions, including IBM. These reports highlight key factors that impact storage total cost of ownership, including capacity utilization, software and support, and tier optimization.
Let’s take a look behind the headlines.
Capacity: The ability to increase storage capacity utilization is a TCO miracle -- a CFO's dream. You get more usable storage without additional storage purchases or the related expenses like power, cooling and floor space. Storage virtualization has been widely proven to improve storage capacity utilization by up to 100% across a number of diverse workloads. In traditional environments, sometimes called systems of record, utilization hovers around 30 to 40 percent. At IBM, and at several of our customers' sites, we see 80 to 90 percent average capacity utilization where IBM storage virtualization is deployed. That's significantly better utilization than you can get from other storage virtualization vendors.
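The raw-capacity impact of that utilization improvement is easy to quantify. Here's a minimal sketch using the midpoints of the utilization ranges quoted above; the 1 PB of application data is just an illustrative baseline.

```python
# How much raw capacity you must buy to hold 1 PB of application data
# at different utilization rates. 35% and 85% are the midpoints of the
# ranges quoted above; the 1 PB baseline is illustrative.

data_pb = 1.0

def raw_needed(utilization):
    return data_pb / utilization

before = raw_needed(0.35)   # typical traditional environment
after = raw_needed(0.85)    # virtualized environment

print(f"Raw capacity at 35% utilization: {before:.2f} PB")
print(f"Raw capacity at 85% utilization: {after:.2f} PB")
print(f"Raw capacity avoided: {before - after:.2f} PB "
      f"({(1 - after / before) * 100:.0f}% less to buy, power and cool)")
```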
Software and Support: Now let's look at what we sometimes call the 'indirect' or hidden costs: the software and the people needed to support storage systems. Software and support costs for individual storage systems can be phased out or eliminated in favor of software that runs at the storage hypervisor level and works across multiple brands of storage systems. Capabilities implemented at the data virtualization layer are faster and easier for administrators to use because they're consistent across storage systems. Virtualization enables administrators to manage data separately from storage systems, so tasks like data migration can be performed anytime, even while applications are active.
Tier Optimization: We all know Tier 1 storage costs more than Tier 2, but did you realize a petabyte of data on Tier 1 storage costs up to $12 million more over 5 years? Now, the concept of storage tier optimization is nothing new. It's been talked about for years, but not delivered in a meaningful way until relatively recently. One of the secrets of successful tier optimization is analytics. Using analytics, you can optimize storage in an instant -- no analysis time, no outages, no arguments with data owners. This is key, because if you win the tier optimization battle, you're one step closer to winning the efficiency war.
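To illustrate the idea, here's a hedged sketch of the arithmetic: classify data by observed access frequency, then estimate what moving the cold portion off Tier 1 is worth. The $12 million per petabyte premium is the figure quoted above; the access threshold and the extent sizes are made-up assumptions, and this is not how any vendor's tiering analytics actually work.

```python
# Illustrative only: classify data by observed access frequency and
# estimate the 5-year savings of moving cold data off Tier 1.
# The $12M/PB Tier 1 premium is the figure quoted above; the access
# threshold and the extent sizes below are made-up assumptions.

tier1_premium_per_pb_5yr = 12_000_000   # extra 5-year cost of Tier 1 vs Tier 2

# (extent size in PB, I/Os observed per day) -- hypothetical monitoring data
extents = [(0.10, 50_000), (0.15, 20_000), (0.30, 400), (0.45, 25)]

HOT_IOPS_THRESHOLD = 1_000              # assumed cutoff for "keep on Tier 1"

cold_pb = sum(size for size, ios in extents if ios < HOT_IOPS_THRESHOLD)
savings = cold_pb * tier1_premium_per_pb_5yr

print(f"Data cold enough to move off Tier 1: {cold_pb:.2f} PB")
print(f"Estimated 5-year savings: ${savings/1e6:.1f}M")
# -> 0.75 PB of cold data, roughly $9M over 5 years in this example
```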
Server virtualization has been widely adopted because, well, quite frankly, it works.
Data virtualization has similar impacts on storage, with significant financial results. About 50% of our servers are virtualized, but less than 20% of our storage is virtualized. That means there is a big opportunity. In fact, taking advantage of storage virtualization could be the biggest and best financial move a client can make in 2014.
I think there are 3 differentiators that really matter:
1. 'Analytics driven data management' enables automatic tier optimization. This breakthrough, first-to-market capability from IBM Research uses analytics and automation to move data to the 'right' storage tier based on actual usage patterns. No guesswork. Three years ago, this technology was proven in a First of a Kind deployment inside IBM's own datacenters. It cut our cost of storage in half. Since that internal research project was so successful, we have hardened it, packaged it, and now make it commercially available to everyone. And it's one of the big reasons why IBM can lower your storage TCO by 72% compared to EMC and by 35% compared to VMware.
2. IBM's investment in open standards helps speed adoption of new technologies: both the new capabilities IBM can deliver to the market and the speed with which you can take advantage of them.
There are many open standards efforts that we actively contribute to.
3. Data management at the storage hypervisor level reduces cost and complexity. You can standardize services and optimize service levels for the users and applications that consume the storage.
The message is simple: you can't afford to do nothing. We all evaluate risk vs. reward when we look to invest in something, but this is a case where every day you wait, you're wasting money. Assume you would only save half of what I described earlier: instead of saving $11M per PB through improved capacity utilization or $12M through tiering, you save only $5M per PB over 5 years. Even then, every day you wait, every day including weekends, costs you about $2,700. That's over $83,000 a month!
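For anyone who wants to check the arithmetic, here's the calculation behind those daily and monthly figures, using the halved $5M-per-petabyte savings on a single petabyte:

```python
# Cost of waiting, per day and per month, assuming only $5M per PB of
# savings over 5 years (half the figures above) on a single petabyte.

savings_per_pb_5yr = 5_000_000
days = 5 * 365.25
per_day = savings_per_pb_5yr / days
per_month = per_day * (365.25 / 12)

print(f"Cost of delay: ${per_day:,.0f} per day, ${per_month:,.0f} per month")
# -> about $2,738/day and $83,300/month, matching the figures above
```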