Forbes Magazine recently published the article “Software That Tells You What It Needs” by Roger Kay, which describes the evolution of virtualization and the emergence of software-defined environments as a gateway to the future. As I read the article, three reasons became apparent why your software should tell you what it needs, and why you might want to listen.
Increase responsiveness and agility: Keeping pace with a rapidly changing business environment driven by social business, mobile and big data requires software applications to define the resources they need, and the infrastructure to respond quickly. The Forbes article uses a couple of application examples to describe why clients need to evolve IT infrastructures beyond basic virtualization. Roger discusses a business-critical application such as fraud detection identifying a sudden spike in activity, which should drive an immediate reallocation of storage resources to capture and track fraud-related data. A software-defined infrastructure, in Roger’s example, enables clients to become more responsive and agile by proactively and efficiently responding to changing business conditions. Many enterprises are challenged when integrating existing IT infrastructures with business processes because servers, storage or network devices allocated to an application, job or department cannot easily or readily be reassigned,... [Continue Reading]
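To make the idea concrete, here is a minimal sketch of the kind of feedback loop Roger describes, in which the application signals its needs and the infrastructure responds. It is illustrative only; the threshold, event rate and the request_storage() call are hypothetical placeholders, not IBM or Forbes code.

```python
# Illustrative only: software telling the infrastructure what it needs.
# The baseline, spike factor and request_storage() are hypothetical placeholders.

def detect_spike(events_per_minute, baseline=1_000, factor=5):
    """Flag a sudden surge in fraud-detection activity."""
    return events_per_minute > baseline * factor

def request_storage(extra_gb):
    """Stand-in for a call to a software-defined storage layer's provisioning API."""
    print(f"Requesting {extra_gb} GB of additional capacity for fraud data capture")

current_rate = 7_500  # events per minute observed by the fraud application
if detect_spike(current_rate):
    request_storage(extra_gb=500)
```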
As an IBM Cloud Architect for the past six years, I have focused on the development, delivery and maturing of cloud computing implementations with clients, partners and service providers. Recently, hybrid cloud services and implementations have seen a sharp rise as a strategic initiative for many clients.
By the industry definition, hybrid cloud provides services that extend your data center to off-premises private and public clouds. Hybrid cloud should leverage your investment in a common infrastructure model, operational management, user experience and skills. It allows you to deploy and run your applications onsite, offsite, or in a combination of both. Likewise, the hybrid cloud model you adopt should ensure that your cloud users have no need to rewrite applications or change APIs, and that they have a consistent user experience across on-premises and off-premises deployment environments.
Taking a closer look at off-premises cloud models, many enterprises concerned with control points, security and isolation consider using a dedicated private cloud tied into their on-premises data center. This environment provides single-tenant, isolated compute resources and administrative control. It can be combined with additional managed services provided through a cloud service provider or self-managed directly by the enterprise tenant. A dedicated private cloud can be ideal for DevOps and production workloads. Another common model is a... [Continue Reading]
In my first post, I discussed how combining Software Defined Environments (SDE) and deployment automation reduces application delivery time and increases agility. In this post, I look at these capabilities in greater depth.
In summary, they deliver the following benefits:
SDE combines OpenStack-based Software Defined Infrastructure (SDI) with application patterns to repeatably and reliably create the application environments for each stage of the delivery pipeline.
Deployment automation streamlines deployment of applications into development, test and production environments through automation and the elimination of manual tasks.
This SDE-enabled approach to application delivery is illustrated in the figure reproduced here.
Deployment automation and application lifecycle
Deployment automation is central to enabling IT organizations to accelerate delivery by eliminating manual tasks. As shown above, it automates environment creation via the SDE layer and performs application deployment, along with component tracking and versioning.
These tools also manage the configuration of each SDE environment, database and application component, ensuring repeatable and consistent delivery. This is an end-to-end solution, from test environments through to production. The approach tests the deployment and configuration process as much as the application code itself, eliminating... [Continue Reading]
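As a rough illustration of the flow described above, the sketch below walks one application build through the same pattern-driven provision, deploy and verify steps in every stage of the pipeline. The stage names, pattern and helper functions are hypothetical placeholders, not the actual SDE or deployment automation tooling.

```python
# Toy sketch of the pipeline flow: for each stage, instantiate the environment from
# the same pattern, deploy a versioned build, then verify. All names are placeholders.

STAGES = ["development", "test", "production"]
APP_PATTERN = {"topology": "web+db", "middleware": ["app-server", "database"]}

def provision_environment(stage, pattern):
    """Stand-in for the SDE/SDI layer creating an environment from the pattern."""
    print(f"[{stage}] provisioning environment from pattern {pattern['topology']}")
    return {"stage": stage, "hosts": ["host-1", "host-2"]}

def deploy_application(env, build_version):
    """Stand-in for the deployment-automation step: the same process in every stage."""
    print(f"[{env['stage']}] deploying build {build_version} to {env['hosts']}")

def verify(env):
    print(f"[{env['stage']}] running smoke tests")
    return True

build = "1.4.2"
for stage in STAGES:
    env = provision_environment(stage, APP_PATTERN)
    deploy_application(env, build)
    assert verify(env), f"verification failed in {stage}"
```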
I am writing this blog post as part of a series exploring open cloud-inspired approaches such as OpenStack, DevOps and open standards. The idea is to look at how these approaches can enable IT to adapt to the shift to user and customer engagement via social and mobile applications. In my earlier post, I looked at the tectonic impact that social and mobile applications (described by Geoffrey Moore and others as systems of engagement) are having on IT infrastructures and organizations. Moreover, I looked at how the requirement for agility in delivering these applications is putting pressure on IT operations and developers in many of the organizations I work with.
In this post, I will talk about the top four innovations that I believe are key for IT organizations to successfully use IaaS to deliver application services in this new landscape and to address the opposing objectives of operations and developers. Let’s take a look:
1. Management of pets and cattle. Today, the approach that many IT departments use to run applications and systems is financially unsustainable. An analogy I like is thinking of IT systems as pets rather than cattle. Pets are treated with care and nursed back to health if ill, an approach that is applicable for customer relationship management (CRM), enterprise resource planning (ERP), database systems of record and applications where data protection,... [Continue Reading]
Leading technology innovation happens in multiple environments, from university research labs to Silicon Valley startups to customers who push the limits of technology to adapt it to their business models, and even government. We should congratulate the National Institute of Standards and Technology (NIST) for taking an early leadership position, working in collaboration with industry, in standardizing the definitions around cloud computing as the technology was making inroads into the US Federal Government. IBM is an active participant in defining and driving private and hybrid cloud standards adoption and in evolving the NIST definition into an implementable reference architecture that considers not only the what and why of cloud, but also how it integrates operationally with existing enterprise systems, aligned to Information Technology Infrastructure Library (ITIL) and IT Service Management (ITSM) processes.
IBM constantly evolves and refines the Cloud Computing Reference Architecture (CCRA) based on changing regulatory and compliance needs, built on solid security and privacy frameworks. The IBM Cloud Computing Reference Architecture is intended to be used as a blueprint and guide for architecting cloud implementations, driven by the functional and non-functional requirements of the respective cloud implementation. The CCRA defines the... [Continue Reading]
Today, organizations likely face the same challenges as many of our large, complex accounts. Specifically, they would like to be in a position to anticipate market changes and shifts in customer sentiment or preferences while continuing not only to outpace the competition but also to stay ahead of disruptions in their space.
Companies employ strategies to deliver business value by leveraging the following technologies to engage customers:
Mobile – MDM and MADP (Mobile Device Management and Mobile Application Development Platform)
Big data – including NoSQL, which is sometimes expanded as “not only SQL”
The goal is to access applications and data from anywhere, globally. No matter the size of the enterprise, companies want to be nimble (if not the most nimble, then at least nimble enough to respond quickly to global business trends as they develop).
To do this, organizations need to tap into vast amounts of both structured and unstructured data to provide a competitive edge. The ability to instantly access information at the right time to make effective decisions means that organizations need to be able to manage larger volumes and a greater variety of data at a velocity that allows them to stay ahead of trends. The goal is to move beyond intuition and instinct to gather and act upon information of all types (volume and variety), as... [Continue Reading]
I am writing this blog post as part of a series building on my previous post: Re-envisioning enterprise IT in the era of mobile, social with open cloud. That first post introduced the scope of these posts, looking at some of the challenges faced and the potential solutions discussed in this series.
In that first post I observed that an estimated 40 percent of all IT spending is now outside the IT department. If IT does not change, there is a real potential it will go the way of the dodo and become extinct.
So what is holding IT back from changing and delivering the agility, flexibility and lower costs that users are looking for?
My assertion is that one of the handicaps facing IT today is the “contract” of behaviors and expectations that has built up between IT and the business. It needs resetting, but what is this contract of expectations? Here are a few of my views.
The near-universal use of project-based funding for application delivery has a perverse effect on how IT invests in and handles the whole-life management of applications and business services.
As IT’s first focus is typically on delivery and operation, my observation is that the tools, procedures and culture are not in place to allow for change over the course of the life of an application and its supporting... [Continue Reading]
IBM Power Systems deliver advantages that are unique in the industry and provide accelerated innovation for cloud. Whether it’s a private cloud, public cloud or hybrid cloud solution, Power Systems offer a flexible, open and powerful platform for cloud workloads. Here are five key Power Systems advantages for the cloud:
1) Exceptional Reliability, Availability and Serviceability (RAS) – and performance
Reliability and availability are critical for workloads delivered through the cloud. In Power Systems mid-range and high-end systems, we see mean time between failures in the range of 70 to 100 years, which equates to 99.997 percent availability. Power Systems also have features to help manage virtual machine availability and elasticity, such as Live Partition Mobility and dynamic resource allocation. Moreover, with the latest POWER8 announcement, IBM has upped the performance customers can get from scale-out servers built on POWER8 technology.
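As a quick back-of-the-envelope check on what 99.997 percent availability implies (my own arithmetic, not an IBM figure), unplanned downtime works out to roughly 16 minutes per year:

```python
# Rough downtime implied by 99.997 percent availability:
# unavailability multiplied by the minutes in a year.
availability = 0.99997
minutes_per_year = 365 * 24 * 60            # 525,600
downtime = (1 - availability) * minutes_per_year
print(f"~{downtime:.0f} minutes of downtime per year")  # ~16 minutes
```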
2) Leadership virtualization
Power Systems with PowerVM have one of the industry’s most resilient and flexible hypervisors, supporting virtual machines (VMs) running on as little as one-twentieth of a core or scaling up to 256 cores. PowerVM provides extraordinary VM isolation. High VM density and a high degree of virtualization help lower total cost of ownership and simplify management with... [Continue Reading]
Everyone in the Hadoop world is excited about YARN. For those who don’t follow such topics, YARN is an acronym for “yet another resource negotiator.” YARN is an important development for organizations deploying Hadoop environments.
What YARN does, essentially, is decouple Hadoop workload management from resource management. This means that multiple applications can share a common infrastructure pool. While this idea is not new to many of us, it is new to Hadoop. Earlier versions of Hadoop consolidated both workload and resource management functions into a single JobTracker. This approach resulted in limitations for customers hoping to run multiple applications on the same cluster infrastructure.
Open source Hadoop 2.2.0 and later incorporate generally available versions of YARN. The community delivered the GA release in Hadoop 2.2.0 in October 2013, and major providers of Hadoop including IBM are at various stages of incorporating YARN into commercial Hadoop offerings.
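To see the shared-pool idea in practice, the sketch below queries the YARN ResourceManager REST API, which reports one common pool of cluster resources regardless of how many application types are running on it. It assumes the Python requests library and a ResourceManager reachable at a hypothetical rm-host:8088.

```python
# Minimal sketch: inspect the shared resource pool via the ResourceManager REST API.
# "rm-host" is a hypothetical host name; substitute your own ResourceManager address.
import requests

RM = "http://rm-host:8088"

metrics = requests.get(f"{RM}/ws/v1/cluster/metrics").json()["clusterMetrics"]
apps = requests.get(f"{RM}/ws/v1/cluster/apps", params={"states": "RUNNING"}).json()

print("Running applications:", metrics["appsRunning"])
print("Memory allocated/total (MB):", metrics["allocatedMB"], "/", metrics["totalMB"])

# Each running application reports its own type (MAPREDUCE, SPARK, ...),
# yet all of them draw on the same ResourceManager-managed pool.
for app in (apps.get("apps") or {}).get("app", []):
    print(app["name"], app["applicationType"], app["allocatedMB"], "MB")
```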
Yet another resource negotiator
YARN is well named. While it is an important technology, the world is not suffering from a shortage of resource managers. Some Hadoop providers (including IBM) are supporting YARN while others are supporting Apache Mesos. In addition, there is a plethora of general-purpose batch workload managers supporting Hadoop as “yet another workload pattern” (YAWP – you... [Continue Reading]
Among the tech topics that generated the most buzz at the recently concluded Red Hat Summit in San Francisco, cloud, software-defined infrastructures and open source stood out. Leading experts in the industry shared valuable insights on the vast opportunity, business value and competitive advantages of these technologies.
In one of the discussions, Scott Firth, Director, IBM Software Defined Environments (SDE), delivered insights on the many facets of cloud, software defined environments and open source, including their respective value propositions and implications for IT infrastructure, as well as IBM’s next moves around these technologies. The discussion was led by SiliconANGLE’s John Furrier and Wikibon’s Stu Miniman inside theCUBE from the floor of Red Hat Summit 2014. Here are some of the key excerpts of the conversation:
♦ The discussion started with Scott’s comments on IBM’s strategic decision to invest in Linux back in 1999, when it was still in its infancy, and IBM’s outlook on open source technologies today
♦ Scott (with IBM for more than 30 years) emphasized some highlights of the long-standing IBM-Red Hat alliance, from solutions for Linux applications running on thousands of Linux virtual machines on the mainframe to data analytics on Power Systems and Intel-based systems.
♦ On the cloud and open source front, Scott... [Continue Reading]
The vision of software defined infrastructure (SDI) is to deliver virtualized capabilities across the entire set of resources required by an application so they can be deployed automatically and quickly with little to no human intervention. Storage is one of the major building blocks in accomplishing the software defined infrastructure vision. To achieve the SDI vision, it is critical that storage hardware and software architectures adapt so that storage can be provisioned and remain responsive to the dynamically changing requirements of the SDI. Flash technology is positioned as a key enabler of these new storage architectures and, with the right combination of hardware and software, facilitates efficient, cost-effective and high-performance delivery of storage services. Flash-based storage improves I/O performance and efficiency for many applications, such as database acceleration, server and desktop virtualization, and cloud environments. Flash storage has become a way to compress data, reduce power consumption and increase performance, making it a superior enabler of virtualization and a perfect fit for the SDI vision.
Recognizing the growing importance of flash in a software defined infrastructure, IBM is offering end-to-end technical education sessions on flash technologies at Edge 2014, May 19-23 at The Venetian in Las Vegas.
At Edge 2014 – the premier event for infrastructure innovation,... [Continue Reading]
Many organizations are wrestling with the economics of cloud computing. This is especially true in High Performance Computing (HPC) and analytics, where applications often demand clustered, scaled-out infrastructure. These types of workloads are often “spiky” or unpredictable, and the costs associated with infrastructure can be substantial.
As a few examples:
A life sciences firm may need compute capacity only at particular stages in the drug development lifecycle
An engineering firm’s workload may vary depending on its active contract portfolio or the specific nature of its projects
An insurance firm may require large amounts of computing power to meet regulatory reporting obligations but only for brief periods at month or quarter end
Provisioning infrastructure to meet periodic peaks is costly. Ideas like peak-shaving, outsourcing and hybrid clouds are not new, but organizations seeking to leverage public Infrastructure-as-a-Service (IaaS) offerings can run into a variety of technical and business challenges:
How to guarantee quality-of-service (QoS) in multitenant environments
Data management and security
How to manage, meter and throttle the usage of variable cost resources
How to manage commercial software licenses
How to ensure that local assets are fully utilized before tapping assets in the cloud
These business... [Continue Reading]
Your organization might have deployed a cluster or grid on site. But can these resources always meet your peak demands? For example, what happens when several large projects move into the same simulation and design phase at the same time?
Simply adding hardware to address peak workload requirements, especially if they are short term, is probably not an option. Expanding the physical infrastructure can require significant time, expertise and budget. And the data center may already be maxed out on power, cooling and real estate. What’s the answer?
To address these challenges, IBM announced at Pulse 2014 the IBM Platform Computing Cloud Service, which provides ready-to-run clusters in the SoftLayer cloud that are optimized for compute-intensive technical computing and analytics applications. The Cloud Service comes complete with Platform LSF (SaaS) and Platform Symphony (SaaS) workload management software, dedicated physical machines and the support of the Platform Computing Cloud Operations team.
Organizations that have on-site clusters or grids can quickly address spikes in infrastructure demand by implementing a hybrid cloud. Platform Computing Cloud Service enables these organizations to forward workloads from local infrastructure to a Platform LSF or Platform Symphony cluster in the SoftLayer cloud, quickly accommodating demand without being concerned about security or... [Continue Reading]
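The forwarding decision itself is conceptually simple. The toy policy below is illustrative only and not LSF or Symphony code (the threshold, cluster names and job structure are hypothetical): keep work on the local cluster until it nears saturation, then forward the overflow to the cloud-resident cluster.

```python
# Illustrative burst policy only; real multi-cluster forwarding is handled by the
# LSF/Symphony schedulers themselves. Names and thresholds are hypothetical.

def route_job(job, local_used_slots, local_total_slots, burst_threshold=0.90):
    """Return which cluster should run the job: local first, cloud on overflow."""
    utilization = local_used_slots / local_total_slots
    if utilization < burst_threshold:
        return "local-cluster"            # keep fully utilizing on-site assets
    return "softlayer-cloud-cluster"      # forward the overflow to the cloud cluster

# Example: a local grid at 95 percent utilization sends new work to the cloud.
print(route_job({"name": "simulation-42"}, local_used_slots=95, local_total_slots=100))
```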
Effective management and use of virtualized IT resources is a key pillar of the IBM Software Defined Environment (SDE) strategy. Of course, virtualized IT is nothing new; it was invented by IBM back in the late 1960s and is used to this day by many organizations as part of Virtual Machine/370 (VM/370) and follow-on systems. Users and applications were allocated virtual machines that gave them virtual compute, storage and even cool things like virtual printers and card punches!
So what is different about the technology and the environment now that brings virtualization to the forefront of enabling a new wave of IT automation for today's demanding mobile, big data and analytics workloads?
Earlier mainframe virtualization environments and the more recent UNIX and x86 virtualization solutions were based on proprietary formats and interfaces. This left anyone trying to implement an IT automation solution on top of these systems to write multiple implementations or to use plugins and abstraction layers to hide the differences. Today, with OpenStack receiving widespread acceptance as an open standard for virtual IT resource management, solution developers can develop to one interface.
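As a minimal sketch of what developing to that one interface looks like, the example below provisions a virtual machine with the openstacksdk Python library; it assumes a cloud named “mycloud” defined in clouds.yaml, and the image, flavor and network names are placeholders.

```python
# Minimal sketch: provisioning a VM through the common OpenStack interface.
# Assumes the openstacksdk library and a "mycloud" entry in clouds.yaml;
# the image, flavor and network names below are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("my-image")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="automation-demo",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

The same few calls work against any OpenStack-based environment, which is precisely the point about writing automation once rather than per hypervisor.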
In my early days as a programmer, I wrote automation programs to create and configure VM/370 virtual resources in support of diverse applications. This included carving out virtual disks and allocating... [Continue Reading]
Over the past couple of months, software defined infrastructures have been quite a hot topic in the IT industry and for IBM. At IBM, we use a more global term: Software Defined Environments (SDEs). There are a number of slightly varying definitions of what SDE means – Matt Hogstrom gave a good definition in one of his recent blogs – but I want to summarize the most important points from my perspective: SDE is all about application workloads and about using orchestration technology to provide an IT service to end users under certain qualities of service. What started out as Software Defined Compute, Network or Storage using virtualization technology grew to also include middleware and application stacks, and into what we call Software Defined Environments today.
One very important aspect of SDE is that workloads have to be described as machine-readable patterns so that orchestration engines can interpret them, instantiate the required software-defined resources, deploy the respective middleware and application workloads on top, and manage the complete workload to fulfill Service Level Objectives (SLOs). Since those patterns encapsulate considerable expert knowledge, and it takes time and skill to create them, it is important to decide on the right format for encoding them. As a pattern author, it is wise to stick to an open, standardized format. This will make your solution portable across... [Continue Reading]
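To make the idea of a machine-readable pattern concrete, here is a toy example expressed as a plain Python dictionary rather than in any particular standard; the topology, component names and SLO targets are all hypothetical placeholders.

```python
# A toy, format-agnostic "pattern": the structure an orchestration engine would
# interpret to stand up resources, deploy middleware and apps, and enforce SLOs.
# All names, sizes and targets are hypothetical.
web_app_pattern = {
    "name": "three-tier-web-app",
    "resources": {
        "web": {"count": 2, "flavor": "m1.small", "image": "base-linux"},
        "db":  {"count": 1, "flavor": "m1.large", "image": "base-linux", "volume_gb": 100},
    },
    "middleware": {
        "web": ["http-server", "app-runtime"],
        "db":  ["relational-db"],
    },
    "slos": {
        "response_time_ms": 200,   # scale out the web tier if exceeded
        "availability_pct": 99.9,
    },
}

def validate(pattern):
    """Tiny sanity check an engine might run before instantiating the pattern."""
    assert pattern["resources"], "pattern must declare at least one resource group"
    assert all(group["count"] >= 1 for group in pattern["resources"].values())
    return True

validate(web_app_pattern)
```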