To understand the need for “sense-and-respond” optimization in the future, let’s take a look at how workloads are handled today. Traditionally, workloads have been few in number, so people could acquire hardware, configure it, install an operating system, middleware, and an application stack on top of it, optimize it manually, and reap the optimization benefits. This approach works well enough when workloads are small in number and not changing dynamically. But with this approach, we cannot get the benefits of agility, which is a key underlying need of today’s infrastructures. As a result of the forces driving the New Era of Computing, the situation has changed. There are a number of new kinds of workloads, for instance mobile platforms: people on the go are able to access a wealth of information. These mobile platforms are changing the world of IT, and the same thing is happening with social business, where people like to stay connected, analyze data, derive relationships, and use them to understand client preferences. These are the new trends changing the front end of IT, sometimes referred to as the “new systems of engagement”.
While the systems of engagement drive a new front end to IT, there is still traditional infrastructure, especially at the back end (“systems of record”), where traditional workloads are hosted. Cloud is changing these scenarios because new workloads like mobile platforms and social business are not just deployed on the cloud but are also developed on the cloud. These new workloads can take advantage of the cloud because it provides agility. However, when we connect these workloads to traditional infrastructures, the systems of engagement and the systems of record they connect to together need the same kind of optimization we used to have for traditional workloads, without giving up the agility. A simple way of saying it: the cloud today is focused on agility, while the traditional systems of yesterday focused only on optimization. But how do we get both? Because we cannot give up either one. In a Software Defined Environment (SDE), big changes are taking place in the underlying hardware. That is where Software Defined Compute (SDC) comes in, providing the opportunity to address both agility and optimization.
In a Software Defined Compute environment, one of the most important realizations is that we cannot afford to deploy, support, and optimize workloads the way it was done before; manually, it will not work. It therefore needs a very significant level of automation. In traditional environments, workloads are managed bottom-up from a stack perspective, which takes significant effort and time for setup, configuration, and workload optimization. A Software Defined Compute environment uses a top-down approach that brings a high degree of optimization, agility, and flexibility while automating the entire infrastructure. The top-down approach of Software Defined Compute understands the requirements of workloads for supporting business processes, which components are needed at the middleware and application levels, and how those can be mapped down to a set of hardware resources. In SDC environments, what can deliver the value of optimization, agility, and a top-down approach is to take the workload components and use patterns that capture best practices for configuring the middleware and application components to be mapped onto virtual resources. These patterns describe the software and infrastructure components, their relationships, and the policies and service levels used to provision and manage resources for a business service. This brings a converged environment of integrated servers, storage, and networks.
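The pattern-driven, top-down mapping described above can be sketched in a few lines of Python. This is a minimal illustration only: the dict-based pattern schema, the field names, and the `map_to_resources` function are assumptions made for the example, not any real SDC or OpenStack API.

```python
# A hypothetical "pattern" describing a business service top-down:
# components, their relationships, and policies/service levels.
PATTERN = {
    "name": "web-app",
    "components": [
        {"name": "web", "tier": "middleware", "cpu": 2, "mem_gb": 4},
        {"name": "db",  "tier": "data",       "cpu": 4, "mem_gb": 16},
    ],
    "relationships": [("web", "depends_on", "db")],
    "policies": {"availability": "ha", "response_ms": 200},
}

def map_to_resources(pattern, pools):
    """Top-down mapping sketch: place each component in the first
    resource pool with enough free CPU and memory (first-fit)."""
    placement = {}
    for comp in pattern["components"]:
        for pool in pools:
            if (pool["free_cpu"] >= comp["cpu"]
                    and pool["free_mem_gb"] >= comp["mem_gb"]):
                placement[comp["name"]] = pool["name"]
                pool["free_cpu"] -= comp["cpu"]       # reserve capacity
                pool["free_mem_gb"] -= comp["mem_gb"]
                break
    return placement

pools = [
    {"name": "pool-a", "free_cpu": 4, "free_mem_gb": 8},
    {"name": "pool-b", "free_cpu": 8, "free_mem_gb": 32},
]
print(map_to_resources(PATTERN, pools))
# -> {'web': 'pool-a', 'db': 'pool-b'}
```

A real SDC scheduler would of course weigh far more than CPU and memory (affinity, service levels, heterogeneous hardware), but the shape is the same: the pattern drives placement, rather than an administrator hand-placing VMs.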
Today’s compute is in general built on top of homogeneous resource pools. Workload components are typically defined as pre-composed VMs, and workloads are manually placed within those homogeneous resource pools. Agility at this level is accomplished by providing elasticity and autoscaling based on observed performance characteristics, for instance deploying more web servers when load increases. Software Defined Compute under SDE will move beyond this preconfigured, simple workload deployment model by mapping discovered or specified workload requirements to automatic selection, scheduling, and optimization of heterogeneous resources. Another underlying aspect of SDC is that it is built on an open architecture, through OpenStack, an open source project backed by an industry consortium. Open architecture benefits customers because, instead of one vendor developing everything, we can leverage what is being done across the industry and deliver customer value faster. The potential of Software Defined Compute is vast. Join the conversation with us on Twitter @IBMSDE.
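The elasticity-and-autoscaling behavior described above, deploying more web servers when load increases, boils down to a threshold rule on observed metrics. Below is a minimal sketch; the function name, thresholds, and replica limits are illustrative assumptions, not a real autoscaling API.

```python
def scale_decision(current_replicas, cpu_util,
                   high=0.8, low=0.3, min_replicas=1, max_replicas=10):
    """Threshold-based autoscaling sketch: scale out when observed CPU
    utilization is high, scale in when it is low, within fixed bounds."""
    if cpu_util > high and current_replicas < max_replicas:
        return current_replicas + 1   # load spike: add a web server
    if cpu_util < low and current_replicas > min_replicas:
        return current_replicas - 1   # idle: remove a web server
    return current_replicas           # within band: no change

print(scale_decision(3, 0.9))  # -> 4
print(scale_decision(3, 0.2))  # -> 2
print(scale_decision(3, 0.5))  # -> 3
```

This is the reactive part; the promise of SDC is to go further, using discovered workload requirements to choose and schedule heterogeneous resources up front rather than only reacting to load after deployment.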
IBM Fellow & Vice President
STG Development Lab – India & South Asia