Defining Software Defined
It seems that almost everywhere, the rush to "Cloud" and programmable infrastructure has generated conversations around all things Software Defined: Software Defined Data Centers (SDDC), Software Defined Compute (SDC), Software Defined Storage (SDS), Software Defined Networking (SDN), and Software Defined Infrastructure (SDI), to name the predominant references. Many companies, consultants, and others have started using the terminology but actually mean different things by it. So, what does IBM mean when we talk about Software Defined?
Software Defined Environments 1.0
To put this in perspective, consider that the IT industry is continuously on a transformational journey. The most recent transformation has been virtualization across all infrastructure platforms and elements. Virtualization started with Compute, to better utilize compute resources, which generated better ROI on compute and software investments. Infrastructure elements like VMs, network interfaces, and storage devices were suddenly only a programming call away. As virtualization became more ubiquitous, operational automation became possible. We refer to this as Software Defined Environments 1.0.
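To make "a programming call away" concrete, here is a minimal sketch using the present-day OpenStack SDK for Python; the cloud entry "mycloud" and the image, flavor, and network names are placeholder assumptions, not references to any specific deployment.

```python
# Minimal sketch: virtualized infrastructure elements created via API calls.
# "mycloud" must exist in clouds.yaml; the names below are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# A virtual machine is one call away...
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
conn.compute.wait_for_server(server)

# ...and so is a block storage device.
volume = conn.block_storage.create_volume(name="demo-data", size=10)
```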
SDE 1.0 is centered primarily on virtualization and the programmatic exploitation of virtualized resources in the data center. Many of the provisioning operations in the data center have already been automated and can be easily converted to Software Defined Compute.
Software installs are automated, as is the bundling of operating systems, middleware, and databases into images. The "image" has become the container for software elements, and images in turn become instances. The proliferation of instances created new challenges, such as instance sprawl and image proliferation: what simplified deployment has complicated the management of new images and instances. The scope of management is no longer just servers; it is hypervisors, virtual machines, image catalogs, and more. The industry has spent some time getting to SDE 1.0, and today roughly 60% of all datacenter compute is virtualized.
SDE 1.0 is now pervasive in datacenters worldwide. In fact, even though there are still large deployments of "bare metal" servers, new deployments are almost all virtualized.
Software Defined Environments 2.0
Once we had the ability to virtualize compute, and storage to varying degrees, the industry moved to phase 2.0: cross-domain integration, orchestration, and coordination. By cross-domain integration I mean that deployments were being thought of as more than just VMs; it was about the collection of VMs that made up the workload. This could include domains like load balancers and application servers as well as database nodes. Developers and infrastructure teams started exploring ways to deploy whole units of work. Of course, deploying collections of servers requires some level of orchestration to configure and deploy not just a single VM instance but many. Also, some of the information needed to complete the orchestration is not known until the workload is being deployed: IP addresses, user credentials, and other security information such as certificates are generally installed as part of the deployment process. All of these activities need to be orchestrated, in many instances across VMs. Many solutions have emerged at this level.
Chef from Opscode and Puppet from PuppetLabs are leading solutions for orchestrating many of these activities. Currently, the industry lacks standards in this space, but grassroots efforts and open source implementations are quickly forming to fill the gap. The challenge is that without a standard, users of Cloud services find themselves locked into a specific solution, making movement from one solution to another difficult. A rough sketch of this kind of deploy-time orchestration follows.
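As a hedged illustration of the cross-VM coordination described above (a hand-rolled sketch, not Chef or Puppet themselves), the code below boots a database VM, discovers its IP address only after deployment, and injects it into the application server's boot configuration. All names are placeholders.

```python
# Sketch of cross-VM orchestration: the DB address is unknown until deploy
# time and must be fed into the app tier's configuration.
import base64
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud entry

image_id = conn.compute.find_image("ubuntu-22.04").id
net_id = conn.network.find_network("private").id

db = conn.compute.create_server(
    name="workload-db",
    image_id=image_id,
    flavor_id=conn.compute.find_flavor("m1.medium").id,
    networks=[{"uuid": net_id}],
)
db = conn.compute.wait_for_server(db)

# Discover the deploy-time value (the DB's address)...
db_ip = list(db.addresses.values())[0][0]["addr"]

# ...and inject it into the next server via cloud-init user data.
cloud_init = (
    "#cloud-config\n"
    "write_files:\n"
    "  - path: /etc/app/db.conf\n"
    f"    content: DB_HOST={db_ip}\n"
)
app = conn.compute.create_server(
    name="workload-app",
    image_id=image_id,
    flavor_id=conn.compute.find_flavor("m1.small").id,
    networks=[{"uuid": net_id}],
    user_data=base64.b64encode(cloud_init.encode()).decode(),
)
conn.compute.wait_for_server(app)
```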
SDE 2.0 is where the majority of the industry is today. Virtualization of compute is well understood, and orchestration has become a familiar set of activities in the community.
The benefits of SDE 2.0 are clear: whole workloads, not just individual VMs, can be deployed and configured in a repeatable, automated way. Even with all that progress, there remain challenges to work through and ideas to incubate before moving to SDE 3.0. The industry is still working out the kinks of Software Defined 2.0, but some are already looking past the horizon to SDE 3.0.
Software Defined Environments 3.0
Software Defined 3.0 is the ability for workload developers to capture their workloads as patterns that can be reused again and again. It also includes the ability to specify Service Level Objectives (SLOs) for the workloads themselves. SLOs are identifiable metrics that the Software Defined Infrastructure can manage to: I/O operations per second for a given storage device, an amount of guaranteed bandwidth, policies for how data should be secured at rest, firewall rules for the workload, and perhaps even Network Function Virtualization around intrusion detection and response. The possibilities are almost endless, but to deliver this kind of workload definition and value there needs to be a way to describe these workloads. IBM believes this way should be open, extensible, and able to give a workload definition the widest possible distribution. We believe the means of achieving those goals lie in defining workloads via OpenStack Heat.
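To make this concrete, here is a hedged sketch of a workload captured as a reusable OpenStack Heat (HOT) template and launched with python-heatclient. The endpoint and token are placeholders, and the properties shown only loosely stand in for SLOs; real SLO enforcement would map to capabilities such as Cinder volume types and QoS specs.

```python
# Sketch: a workload pattern expressed as a Heat (HOT) template.
import yaml
from heatclient import client as heat_client

template = yaml.safe_load("""
heat_template_version: 2013-05-23
description: Reusable single-server workload pattern with an attached volume
parameters:
  flavor:
    type: string
    default: m1.small
resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-22.04
      flavor: { get_param: flavor }
  data_volume:
    type: OS::Cinder::Volume
    properties:
      size: 20   # GB; a volume type could carry an IOPS guarantee
  attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: app_server }
      volume_id: { get_resource: data_volume }
""")

HEAT_ENDPOINT = "http://controller:8004/v1/PROJECT_ID"  # placeholder
TOKEN = "AUTH_TOKEN"                                    # placeholder

heat = heat_client.Client("1", endpoint=HEAT_ENDPOINT, token=TOKEN)
heat.stacks.create(stack_name="my-workload", template=template)
```

Because the template is just a document, the same pattern can be version-controlled, shared across teams, and deployed again and again with different parameter values.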
In addition to SLOs, users can specify policies for their workloads. These include firewall rules for a workload, encryption of specific storage volumes so that data remains encrypted at rest, and intrusion detection as well as intrusion prevention services. These policies can be applied without the developer, or workload creator, spending tedious hours on setup, because the policies themselves are reusable.
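As one small example of policy as code, the sketch below expresses a firewall policy as a Neutron security group using the OpenStack SDK; the group name and rule are illustrative assumptions.

```python
# Sketch: a reusable firewall policy expressed as a Neutron security group.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud entry

policy = conn.network.create_security_group(
    name="web-workload-policy",
    description="Reusable firewall policy for a web workload",
)

# Allow HTTPS in; everything else stays closed by default.
conn.network.create_security_group_rule(
    security_group_id=policy.id,
    direction="ingress",
    protocol="tcp",
    port_range_min=443,
    port_range_max=443,
    remote_ip_prefix="0.0.0.0/0",
)
```

Once defined, the same group can be attached to any number of servers, so the policy is written once and reused rather than reconfigured by hand for each deployment.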
Software Defined is the way workloads and systems will be deployed in the future, giving unprecedented agility and flexibility to workload deployers. It will improve application deployment fidelity through patterns that capture best practices and can be reused across teams within an organization. Underlying all of this is the delivery of this capability through open standards and open source. OpenStack is the reference architecture and implementation that will make all of this possible; OpenStack is to infrastructure what the Java Community Process was to Java Enterprise Edition.
Software Defined means that infrastructure elements can be created, manipulated, and used through a set of common programming APIs. These APIs and the behavioral model behind them are derived from open standards, not a single vendor's set of APIs or worldview.
Join the conversation at #ibmSDE.