Storage Virtualization: What are you waiting for?
Woj
Your storage capacity isn’t virtualized? Don’t worry, you’re not alone. The concept of virtualizing IT resources is broadly understood, yet deployment is uneven across the datacenter. Virtualization of compute (server runtime) is commonplace, widely known in the form of virtual machines (VMs), but storage, network, desktops, and other areas of the datacenter can just as easily be virtualized and deliver value. In this article, I’m going to outline why every datacenter should implement storage virtualization.
Storage virtualization simply means implementing a technology layer that abstracts the physical storage devices from the applications and users, allowing all the devices to be viewed, used, and managed as a single pool. Sometimes also called a “storage hypervisor,” its value proposition is relatively straightforward: effective allocation and management of heterogeneous storage resources with no downtime. Most organizations implement storage virtualization for a very simple, tactical reason: the ever-increasing creation of data keeps driving demand for more storage. So much so that organizations cannot merely buy capacity at the same rate the data grows; it’s simply unaffordable. Enter virtualization…
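To make the “single pool” idea concrete, here is a minimal sketch in Python. The class and device names are hypothetical illustrations, not any vendor’s API: a pooling layer fronts several heterogeneous devices, and callers ask the pool for capacity without ever naming a device.

```python
# Minimal sketch (hypothetical classes, not a vendor API): a storage
# "hypervisor" that presents heterogeneous devices as one logical pool.
from dataclasses import dataclass

@dataclass
class Device:
    name: str         # e.g. "san-array-1", "nas-filer-2" (invented names)
    capacity_gb: int  # raw capacity contributed to the pool
    used_gb: int = 0

class StoragePool:
    """Views, uses, and manages many physical devices as a single pool."""
    def __init__(self, devices):
        self.devices = devices

    def total_capacity_gb(self):
        return sum(d.capacity_gb for d in self.devices)

    def free_gb(self):
        return sum(d.capacity_gb - d.used_gb for d in self.devices)

    def allocate(self, size_gb):
        # Place the request on the device with the most free space;
        # the caller never needs to know which device was chosen.
        best = max(self.devices, key=lambda d: d.capacity_gb - d.used_gb)
        if best.capacity_gb - best.used_gb < size_gb:
            raise RuntimeError("pool exhausted")
        best.used_gb += size_gb
        return best.name

pool = StoragePool([Device("san-array-1", 1000), Device("nas-filer-2", 500)])
backing = pool.allocate(200)                     # pool picks the device
print(pool.total_capacity_gb(), pool.free_gb())  # 1500 1300
```

The point of the sketch is the `allocate` call: capacity requests go to the pool, and placement across the underlying SAN and NAS devices is the layer’s problem, not the application’s.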
Storage virtualization enables more efficient utilization of all types of storage (NAS, SAN, and individual disks) by masking the various hardware types from the applications using them, and at the same time it reduces complexity. This might sound counter-intuitive, “add another layer and reduce complexity,” but isn’t this exactly what we did with virtual server infrastructures? We added a layer between the physical hardware and the applications that run on it. In the same way, a storage hypervisor reduces the many things administrators have to interact with (storage devices) down to one thing (the storage hypervisor) with a single interface. A single management interface truly does reduce complexity, while delivering additional benefits in efficiency, data density (less unused storage), much easier device maintenance, and disaster recovery, to name a few.

Isolated or underutilized SAN or NAS devices can now provide capacity to the applications with the most demand. Devices that are old and inefficient, that don’t meet today’s technology standards, or that have expiring leases can all be swapped out with no interruption of service to the applications on the other side of the virtualization layer.

Speaking of applications, there is benefit from that angle as well. No longer does the application owner need to ask the storage administrator to provision a device, create a LUN, plan for growth, and so on. They simply point their application at the hypervisor the storage admin has set up, and go. Provisioning a virtual storage resource can be as quick and efficient as provisioning a virtual server resource. The application owner loves this because they point and go; the storage admin loves it because they can offer up one interface to users, monitor the applications connecting in, monitor the data being created across ALL applications, and plan at a much more aggregate level while optimizing the capacity in use.
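The non-disruptive swap-out described above works because applications hold a stable volume handle while the virtualization layer owns the handle-to-device mapping. A hedged sketch (hypothetical names, not a vendor API) of that indirection:

```python
# Hedged sketch (hypothetical, not a vendor API): the virtualization
# layer maps stable volume handles to backing devices, so a device with
# an expiring lease can be evacuated without the application noticing.
class VirtLayer:
    def __init__(self):
        self.backing = {}  # volume handle -> physical device name

    def provision(self, handle, device):
        self.backing[handle] = device

    def read(self, handle):
        # Applications only ever see their handle, never the device.
        return f"data via {self.backing[handle]}"

    def evacuate(self, old_device, new_device):
        # Retarget every volume on old_device; handles never change,
        # so applications keep running through the swap.
        for handle, dev in self.backing.items():
            if dev == old_device:
                self.backing[handle] = new_device

layer = VirtLayer()
layer.provision("app-db-vol", "old-san")
layer.evacuate("old-san", "new-san")  # lease expired: replace hardware
after = layer.read("app-db-vol")      # same handle, new device behind it
print(after)
```

In a real product the evacuation would also migrate the data blocks in the background; the sketch shows only the remapping that keeps the application’s view stable.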
All this, while allowing the storage admin to stay in control of their storage domain, and the application owners keep control of their apps. I can almost hear the hummmm of a zen chant now.
Best-of-breed vendors can also synchronize across virtualization systems for replication, hot failover, and disaster recovery. Imagine having storage capacity at two locations up to 300 km apart (the limit exists because data cannot travel faster than light, and every synchronous write must wait for the round trip) and having them in sync, all the time. If one site goes down, planned or unplanned, the location still online maintains operations dynamically, without any disruption. It’s a virtual datacenter that operates across two physical datacenters. That is a very compelling capability for organizations running mission-critical applications against mission-critical data. It is typically only available from vendors who package their virtualization capabilities as part of a hardware appliance, to ensure optimal performance and throughput. Remember, this type of system must be fast enough not to inject noticeable latency: it sits directly in the data path, under very high I/O demand, and cannot become a performance bottleneck.
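The distance limit is easy to check with back-of-envelope arithmetic. Assuming light propagates through optical fiber at roughly 200,000 km/s (about two-thirds of its speed in a vacuum), a synchronous write to a site 300 km away must cross that distance twice, out and back, before it can complete:

```python
# Back-of-envelope latency for synchronous replication.
# Assumption: light in fiber travels at roughly 200,000 km/s.
FIBER_KM_PER_S = 200_000

def sync_write_penalty_ms(distance_km):
    # A synchronous write travels to the remote site and its
    # acknowledgment travels back: one full round trip.
    one_way_s = distance_km / FIBER_KM_PER_S
    return 2 * one_way_s * 1000

penalty = sync_write_penalty_ms(300)
print(penalty)  # 3.0 -> about 3 ms added to every write
```

Roughly 3 ms of physics-imposed delay on every write is about the most that mission-critical applications will tolerate, which is why synchronous stretched configurations top out in this distance range; real links add switching and protocol overhead on top.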
I’ve only scratched the surface of storage virtualization here. The organizations that have implemented it are leaders in the continuing transformation of IT within their organizations. Those that have not yet begun are simply not far enough along their ‘comfort curve’; when they do start to work with these capabilities, they are in for a pleasant surprise. I expect storage virtualization to become as commonplace in the datacenter as server virtualization, only at a much greater pace. The concept is well known and proven; now it’s just a matter of implementation in the next IT domain…storage.