I routinely follow a number of blogs by storage industry thought leaders. Among them is a usually insightful blog by EMC’s Chuck Hollis. Last Friday I read his post titled Software-Defined Storage – Where Are We? As Chuck described it, the post was intended to explore “Where are the flags being planted? Is there any consistency in the perspectives? How do various vendor views stack up? And what might we see in the future?” The questions themselves captured my attention. First, they are great questions that everyone watching this space should want answered. Second, I wanted to see which vendors EMC was interested in comparing itself with. Notably missing from Chuck’s list was IBM, a vendor with both a lot to say and a lot to offer on the subject of software defined.
I thought Chuck did a nice job in the sections of his post on Basic SDS (software defined storage) Concepts and Towards a Superset of Characteristics. My only critique is that he didn’t acknowledge some of the forward-leaning work being done in the space. For example, in the area of concepts he rightly observed of the past that “there is little consensus on what is software-defined storage, and what isn’t,” but he failed to acknowledge the important work by the team at IDC in providing the industry with an unbiased nomenclature and taxonomy for software-based storage. See my post from a couple of months back, How do you define Software-defined Storage? Chuck also suggested that “the required technology isn’t quite there yet — but there are all signs that it’s coming along very quickly. By next year, there should be several good products in the marketplace to concretely evaluate.” That may be true for EMC and the other vendors he chose to talk about, but by the end of this post I hope you will understand that, when it comes to IBM, Chuck’s statement is several years behind.
The aim of software-defined
Software defined storage isn’t an end unto itself. It is a necessary piece in the evolution to a software defined environment (SDE), also referred to as a software defined datacenter (SDDC). I like IDC’s definition of what this is: “a loosely coupled set of software components that seek to virtualize and federate datacenter-wide hardware resources such as storage, compute, and network resources and eventually virtualize facilities-centric resources as well. The goal for a software-defined datacenter is to tie together these various disparate resources in the datacenter and make the datacenter available in the form of an integrated service…” IBM is one of the few vendors working in all the areas of software-defined, and Jamie Thomas, Vice President and General Manager of Software Defined Systems, heads the division that coordinates that work.
Jamie thinks about SDE from the perspective of workloads and patterns of expertise that can help simplify operations, reducing labor costs and improving security. A software defined environment is also more responsive and adaptive as workloads expand from today’s enterprise applications to mobile, social, big data analytics and cloud. Her view is that open source and standards communities are crucial to the long-term viability of SDE. IBM’s work in software defined compute with the Open Virtualization Alliance and oVirt, in software defined networking (SDN) with OpenDaylight, and in cloud with OpenStack is helping propel the construction of software defined environments.
IBM’s work in software defined storage
The words have morphed over time. What VMware did for Intel servers has been referred to as a hypervisor, as virtualization, and now is being called software defined compute to line up with the rest of the SDE vocabulary. The foundation of a software defined environment is, well, software that offers a full suite of services and federates physical infrastructure together to provide the basic commodity. In the case of VMware, the commodity is Intel megahertz. In the case of SDS, the commodity is terabytes.
IBM clients first began using these capabilities in 2003 with the IBM SAN Volume Controller software drawing its compute horsepower from commodity Intel processors and managing terabytes provided by federated disk arrays. That software base has since been renamed to the Storwize family software platform and given an expanded set of commodity engines to run on. Today, there are federating systems with no storage capacity of their own, systems with internal solid-state drives to speed the input/output (I/O) of other federated storage, and systems that carry their own serial attached SCSI (SAS) disk and flash capacity to augment other federated capacity. There are entry models, midrange models, enterprise models and even models that are embedded in the IBM PureSystems family converged infrastructure. For a more complete description of the suite of services offered, the breadth of physical storage that can be federated, and the I/O performance that can be enjoyed, see my post Has IBM created a software-defined storage platform? Over the last decade, this software platform has been referred to as virtualization, as a storage hypervisor, and now with a total capacity under Storwize software management on its way to an exabyte, we call it SDS v1.0.
SDS v2.0 came along early in 2012 with the introduction of IBM SmartCloud Virtual Storage Center (VSC). Building on the successful base of the Storwize family software platform, VSC added a number of important capabilities.
Service catalog: Administrators organize the suite of VSC storage services into named patterns – catalog entries. Patterns describe workload needs in terms of capacity efficiency, I/O performance, access resilience, and data protection. For example, a pattern for ‘Database’ might describe needs that translate to compressed, thin provisioned capacity on a hybrid flash and SAS pool, with a single direction synchronous mirror and load-balanced multi-path access. The beauty of the service catalog is that requestors (application owners or orchestrators as we’ll see shortly) don’t need to concern themselves with the details. They just need to know they need ‘Database’ capacity.
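To make the idea concrete, here is a minimal sketch of what a catalog entry might look like as a data structure. The field names and values are my own illustration, not the actual SmartCloud Virtual Storage Center schema:

```python
# Hypothetical service catalog sketch. Field names are illustrative
# assumptions, not the actual SmartCloud Virtual Storage Center schema.
SERVICE_CATALOG = {
    "Database": {
        "compression": True,
        "thin_provisioning": True,
        "pool": "hybrid-flash-sas",
        "mirroring": "synchronous",
        "multipath": "load-balanced",
    },
    "Archive": {
        "compression": True,
        "thin_provisioning": True,
        "pool": "nearline-sas",
        "mirroring": None,            # no mirror for this service level
        "multipath": "failover-only",
    },
}

def describe(pattern_name):
    """A requestor only names the pattern; the details stay hidden."""
    pattern = SERVICE_CATALOG[pattern_name]
    # Report only the attributes that are actually in effect.
    return {k: v for k, v in pattern.items() if v}
```

The point of the structure is exactly the point of the catalog: an application owner asks for ‘Database’ capacity by name and never has to know about pools, mirrors, or multipathing.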
Programmable means of requesting services: VSC includes APIs that surface the service catalog patterns to portals and orchestrators. The questions that must be answered are quite simple: How much capacity do you need? In what service level do you need it? Who needs access? From there, storage-centric orchestration takes over and performs all the low-level, mundane tasks of satisfying the request. And it works on a wide variety of physical storage infrastructure. The VSC APIs have been consumed by an end-user accessible portal, SmartCloud Storage Access, and by higher-level SDE orchestrators like SmartCloud Orchestrator.
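The request surface really can be that small. Here is a hedged sketch of the three questions as a function signature; the names and shape are hypothetical, invented for illustration rather than taken from the published VSC API:

```python
# Illustrative only: the three questions a requestor must answer.
# Function and field names are hypothetical, not the published VSC API.
def request_capacity(pattern, capacity_gb, requestor):
    """Return a provisioning 'ticket'; orchestration would do the rest."""
    if capacity_gb <= 0:
        raise ValueError("capacity_gb must be positive")
    return {
        "pattern": pattern,          # in what service level? e.g. "Database"
        "capacity_gb": capacity_gb,  # how much capacity?
        "requestor": requestor,      # who needs access?
        "status": "submitted",
    }

ticket = request_capacity("Database", 500, "payroll-app")
```

Everything below this line of questioning, from pool selection to multipath setup, is orchestration's job, not the requestor's.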
Metering for usage-based chargeback: Service levels and capacity usage are metered in VSC. Metering information is made available to usage and cost managers like SmartCloud Cost Management so that individual consumers can be shown, or charged for, their consumption. Because VSC meters service levels as well as usage, higher prices can be established for higher levels of SDS service. Remember IBM’s perspective: we are building out an SDE, of which SDS is a necessary part. SmartCloud Cost Management follows that model, providing insight into the full spectrum of virtualized and physical assets.
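Because both usage and service level are metered, chargeback can reduce to a simple rate table. A toy sketch of the idea, with rates and record formats invented purely for illustration:

```python
# Hypothetical per-GB monthly rates by service level. Higher SDS service
# levels command higher prices; the numbers are invented for illustration.
RATES_PER_GB = {"Database": 0.50, "Standard": 0.20, "Archive": 0.05}

def monthly_charge(usage_records):
    """usage_records: iterable of (consumer, service_level, gb_used).
    Returns each consumer's total charge across all service levels."""
    charges = {}
    for consumer, level, gb in usage_records:
        charges[consumer] = charges.get(consumer, 0.0) + gb * RATES_PER_GB[level]
    return charges

bills = monthly_charge([
    ("payroll-app", "Database", 500),
    ("payroll-app", "Archive", 2000),
    ("web-app", "Standard", 1000),
])
# payroll-app: 500*0.50 + 2000*0.05 = 350; web-app: 1000*0.20 = 200
```

The metering layer supplies the records; the cost manager applies the rates and shows (or bills) each consumer.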
Management information and analytics: When the challenges of day-to-day operations happen (and they happen most every day), administrators need straightforward information surrounded by visually intuitive graphics and analytics-driven automation to speed decision making and problem resolution. Last year we introduced just this approach with SmartCloud Virtual Storage Center. I discussed it more thoroughly in my post Do IT managers really “manage” storage anymore? If you watch the news, you’ll know that IBM is leading a transformation toward cognitive computing. We’re not there yet with the management of SDS, but consider this scenario. You are an IT manager who has invested in two tiers of physical disk arrays, probably from different vendors. You have also added a third storage technology: a purpose-built flash drawer. You have gathered all that physical capacity and put it under the management of a software defined storage layer like SmartCloud Virtual Storage Center. All of your workloads store their data in virtual volumes that SmartCloud Virtual Storage Center can move at will across any of the physical disk arrays or flash storage. Knowing which volumes to move, when, and where to move them is where SmartCloud Virtual Storage Center excels. Here’s an example. Suppose there is a particular database workload that is only active during month-end processing. The analytics in SmartCloud Virtual Storage Center can discover this and create a pattern of sorts that has this volume living in a hybrid pool of tier-1 and flash storage during month end and on tier-2 storage the rest of the month. In preparation for month end, the volume can be transparently staged into the hybrid pool (we call it an EasyTier pool), at which point more real-time analytics take over, identifying which blocks inside the database are most accessed. Only those blocks are actually staged into flash, leaving the less utilized blocks on tier-1 spinning disks. Can you see the efficiency?
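The placement decision above can be pictured as a simple heat ranking: promote only the hottest blocks into the limited flash capacity and leave the rest on spinning disk. This is a toy sketch of that idea, not the actual EasyTier algorithm:

```python
# Toy sketch of hot-block tiering -- not the actual EasyTier algorithm.
# Given per-block access counts, stage only the hottest blocks into a
# limited flash tier; lesser-utilized blocks stay on tier-1 spinning disk.
def place_blocks(access_counts, flash_slots):
    """access_counts: {block_id: io_count}. Returns (flash, disk) sets."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    flash = set(ranked[:flash_slots])   # hottest blocks fill the flash tier
    disk = set(ranked[flash_slots:])    # everything else stays on disk
    return flash, disk

# Hypothetical I/O counts for five blocks, with room for two in flash.
counts = {"b0": 900, "b1": 15, "b2": 420, "b3": 3, "b4": 610}
flash, disk = place_blocks(counts, flash_slots=2)
# flash holds b0 and b4; b1, b2 and b3 stay on spinning disk
```

The real product works from continuously gathered I/O statistics and moves data transparently, but the efficiency argument is the same: flash is spent only on the blocks that earn it.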
So where are we?
SDS v1.0 delivered. Software that offers a full suite of services and federates physical infrastructure.
SDS v2.0 delivered. A service catalog with a programmable means of accessing services, a portal and SDE cloud orchestration integration. Metering for usage-based chargeback and management information with analytics.
Where do we go from here? At IBM we’re busy opening up the Storwize family software platform for industry innovation, helping VSC become even more aware of application patterns, and progressing the notion of cognitive, analytics-driven decision making in SDS. Watch this space!
Users of IBM SDS speak
More than just theory and a point of view, IBM SDS is helping real customers. At the recent IBM Edge conference there were over 75 client testimonials shared, many of them about the benefits realized from using IBM SDS. I covered several of them in my post on Edge Day 2.
One of the coolest stories came earlier in the year at the IBM Pulse conference from IBM’s internal IT operations. IBM’s CIO manages 100 petabytes of data and, by leveraging SmartCloud Virtual Storage Center, was able to reduce costs by 50 percent with no impact on performance.
Did this help clarify IBM’s position in SDS?
Program Director, Software Defined Environments