If you store your VMware bits on external SAN or NAS-based disk storage systems, this post is for you. The subject of the post, VM Volumes, is a potential storage management game changer!
Fellow blogger Stephen Foskett mentioned VM Volumes in his [Introducing VMware vSphere Storage Features] presentation at the IBM Edge 2012 conference. His session on VMware's storage features covered the vSphere Storage APIs for Array Integration (VAAI), the vSphere APIs for Storage Awareness (VASA), vCenter plug-ins, and a new concept he called "vVol", now more formally known as VM Volumes. This post follows up on that session, describing the VM Volumes concept, architecture, and value proposition.
"VM Volumes" is a future architecture that VMware is developing in collaboration with IBM and other major storage system vendors. So far, very little information about VM Volumes has been released. At VMworld 2012 Barcelona, VMware highlighted VM Volumes for the first time, and IBM demonstrated VM Volumes with the IBM XIV Storage System (more about this demo below). VM Volumes is worth your attention -- when it becomes generally available, everyone using storage arrays in a VMware environment will have to reconsider their storage management practices -- no exaggeration!
But enough drama. What is this all about?
(Note: for the sake of clarity, this post refers to block storage only. However, the VM Volumes feature applies to NAS systems as well. Special thanks to Yossi Siles and the XIV development team for their help on this post!)
The VM Volumes concept is simple: VM disks are mapped directly to special volumes on a storage array system, as opposed to storing VMDK files on a vSphere datastore.
The following images illustrate the differences between the two storage management paradigms.
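The contrast between the two paradigms can also be sketched as a toy data model. This is purely illustrative -- the Datastore, ArrayVolume, and VirtualMachine names below are invented for this sketch and are not a real VMware or array API:

```python
# Hypothetical sketch of the two storage management paradigms.
# All class and field names are illustrative, not real APIs.
from dataclasses import dataclass, field

@dataclass
class Datastore:
    """Classic model: one large LUN holds many VMs' VMDK files.
    The array sees only the LUN, not the individual VMs."""
    name: str
    vmdk_files: list = field(default_factory=list)

@dataclass
class ArrayVolume:
    """VM Volumes model: one array volume per VM disk.
    The array knows which VM each volume belongs to."""
    name: str
    owner_vm: str

# Classic: unrelated VMs share one datastore.
ds = Datastore("datastore1")
ds.vmdk_files += ["web01.vmdk", "db01.vmdk", "test07.vmdk"]

# VM Volumes: each VM disk maps directly to its own array volume,
# so per-VM operations can be offloaded to the storage system.
vols = [ArrayVolume("vol-web01-disk0", owner_vm="web01"),
        ArrayVolume("vol-db01-disk0", owner_vm="db01")]
```

The key structural difference is visible in the model: in the classic paradigm the array has no per-VM entity to operate on, while in the VM Volumes paradigm every VM disk is an addressable array object.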
You may still be asking yourself: bottom line, how will I benefit from VM Volumes? Well, take a VM snapshot, for example. With VM Volumes, vSphere can simply offload the operation by invoking a hardware snapshot of the corresponding array volume. This has significant implications:
Here's the first takeaway: with VM Volumes, advanced storage services (which cost a lot when you buy a storage array) become available at the individual VM level. In a cloud world, this means that applications can easily be provisioned with advanced storage services, such as snapshots and mirroring. Now, let's take a closer look at another scenario where VM Volumes will make a big difference -- provisioning an application with special mirroring requirements:
Here's the second takeaway: with VM Volumes, management is simplified, and end-to-end automation becomes much more practical. The reason is that there are no datastores. Datastores physically group VMs that may otherwise be totally unrelated, and they require close coordination between storage and VMware administrators.
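The snapshot-offload scenario above can be sketched as a toy model. The Array class and snapshot_vm function here are invented for illustration -- they are not real vSphere or XIV APIs:

```python
# Hypothetical sketch of snapshot offload; the array interface is
# invented for illustration only.
class Array:
    """Toy storage array: tracks hardware snapshots per volume."""
    def __init__(self):
        self.snapshots = {}   # volume name -> list of snapshot names

    def hardware_snapshot(self, volume, snap_name):
        # On a real array this is a near-instant metadata operation;
        # no host I/O copies the data.
        self.snapshots.setdefault(volume, []).append(snap_name)
        return snap_name

def snapshot_vm(array, vm_volumes, snap_name):
    """Offload a VM snapshot: one hardware snapshot per VM volume,
    instead of a host-side copy of VMDK files inside a datastore."""
    return [array.hardware_snapshot(vol, snap_name) for vol in vm_volumes]

array = Array()
snapshot_vm(array, ["vol-db01-disk0", "vol-db01-disk1"], "pre-upgrade")
```

Because each VM disk is its own array volume, the snapshot is scoped exactly to one VM -- no other VM sharing the storage is touched, which is the point of removing the datastore from the picture.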
Now, the above mainly focuses on the VMware or cloud administrator perspective. How does VM Volumes impact storage management?
VMs are the new hosts: Today, storage administrators have visibility of physical hosts in their management environment. In a non-virtualized environment, this visibility is very helpful: the storage administrator knows exactly which applications in the data center are storage-provisioned or affected by storage management operations, because the applications run on well-known hosts. In virtualized environments, however, the association of an application with a physical host is temporary. To keep at least the same level of visibility as in physical environments, VMs should become part of the storage management environment, just like hosts. Hosts are still interesting, for example to manage physical storage mapping, but without VM visibility, storage administrators will know less about their operations than they are used to, or need to. VM Volumes enables such visibility, because volumes are provisioned to individual VMs. The XIV VM Volumes demonstration at VMworld Barcelona, although experimental, shows a view of VM volumes in XIV's management GUI.
Here's a screenshot:
That's not all!
Storage Profiles and Storage Containers: A Storage Profile is a vSphere specification of a set of storage services. A storage profile can include properties like thin or thick provisioning, mirroring definition, snapshot policy, minimum IOPS, etc.
Note that when a VM is created today, a datastore must be specified. With VM Volumes, a new management entity called a Storage Container (also known as a Capacity Pool) replaces the datastore as the management object. Each Storage Container exposes a subset of the available storage profiles, as appropriate, and also has a capacity quota. Here are some more takeaways:
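As a rough illustration of how profiles and containers might fit together -- all names and fields below are hypothetical, not a real vSphere API:

```python
# Hypothetical model of Storage Profiles and Storage Containers.
# Field names (thin, mirrored, min_iops, quota_gb) are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageProfile:
    """A set of storage services, as described above."""
    name: str
    thin: bool
    mirrored: bool
    min_iops: int

class StorageContainer:
    """Replaces the datastore as the management object: exposes a
    subset of the available profiles and enforces a capacity quota."""
    def __init__(self, name, profiles, quota_gb):
        self.name = name
        self.profiles = {p.name: p for p in profiles}
        self.quota_gb = quota_gb
        self.used_gb = 0

    def provision(self, profile_name, size_gb):
        if profile_name not in self.profiles:
            raise ValueError(f"profile {profile_name!r} not offered here")
        if self.used_gb + size_gb > self.quota_gb:
            raise ValueError("container quota exceeded")
        self.used_gb += size_gb
        return (profile_name, size_gb)

gold = StorageProfile("gold", thin=False, mirrored=True, min_iops=5000)
pool = StorageContainer("prod-pool", [gold], quota_gb=1000)
pool.provision("gold", 200)
```

In this sketch, creating a VM means asking a container for capacity under a named profile, rather than picking a datastore -- the container rejects requests for profiles it does not offer or that would exceed its quota.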
To summarize the VM Volumes value proposition:
For additional information about VM Volumes, check out [VMware Storage APIs for VM and Application Granular Data Management] blog post by Duncan Epping, a Principal Architect in the Technical Marketing group at VMware!
Until you can get your hands on a VM Volumes-capable environment, the VMware and IBM development groups will be collaborating and working hard to realize this game-changing feature. This information is sure to trigger questions and comments, and our development teams are eager to learn from them and respond. Enter your comments below, and I will try to answer them and let them help shape the next post on this subject. There's much more to be told.
Well, it's Wednesday, and you know what that means... IBM Announcements.
(Normally, announcements are on Tuesdays, but we moved this one over to Wednesday to line up with our big launch event in Pinehurst, NC. )
A lot was announced today, so I decided to break it up into several separate posts. I will start with our Enterprise Systems: DS8870, TS7700 Release 3, and XIV Gen3.
Enterprise systems are the servers, storage and software at the core of an enterprise IT infrastructure. They enable a private cloud infrastructure at enterprise scale, with flexible service delivery models that provide dynamic efficiency for resource and workload management. They make sure critical data is always available across the enterprise, making it accessible in new ways so that actionable insights can be derived from advanced and operational analytics. They also provide strong security, helping ensure the integrity of critical data while mitigating risk and supporting compliance.
To learn more about all of the announcements today, see the [Storage Landing Page].
Well, it's Tuesday again, and you know what that means! IBM Announcements!
Today, I am pleased to announce that IBM has published a Redpaper on [IBM Smart Storage Cloud].
I worked with the IBM Redbooks residency team to review this paper and ensure it had the right focus. I did not want a Redpaper that just listed all of the IBM technologies available, but rather spend some effort on the business benefits, and realistic use cases with actual client examples, that help illustrate not just what a Smart Storage Cloud is, but why your business may benefit from having one, and how others have already benefited from their deployment.
To help promote this new Redpaper, my colleagues Larry Coyne and Karen Orlando filmed me talking about the book. This has been posted as a [4-minute YouTube video]. This is the first time we have promoted a Redpaper using a video, so let me know what you think in the comment section below.
We have some exciting webcasts in the upcoming weeks!
I hope you can find time in your busy schedule to participate in one or both of these webcasts.
New IBM PureData Systems help clients harness data for critical insights
Well, it's Tuesday, and you know what that means! IBM Announcements! Actually, it is Wednesday, but I started writing this post yesterday and had to do some additional research to finish.
This week, IBM introduced the newest member of the PureSystems family of expert integrated systems - IBM PureData System. The new systems are designed to help clients effectively harness the massive volume, variety and velocity of information being created every day. The result? They deliver critical insights to improve business results.
The new systems are available in three different models, each optimized specifically for different workloads.
Here's a quick 3-minute [YouTube video]:
PureData System joins the PureSystems family, which also includes the PureFlex System and PureApplication System, [both announced last April]. PureSystems provide built-in expertise, integration by design, and simplification throughout the system lifecycle, helping businesses reduce complexity, accelerate time to value, and improve IT economics.
In a related announcement, Andy Monshaw was recently named IBM General Manager, PureFlex. Some readers may remember that Andy was the General Manager for IBM System Storage several years ago, and was my second-line manager. I am glad to welcome him back!
For more information, see the [IBM PureSystems landing page].