I am writing this blog post as part of a series exploring open, cloud-inspired approaches such as OpenStack, DevOps and open standards. The idea is to look at how these approaches can enable IT to adapt to the shift toward user and customer engagement via social and mobile applications. In my earlier post, I looked at the tectonic impact that social and mobile applications (described by Geoffrey Moore and others as systems of engagement) are having on IT infrastructures and organizations. I also looked at how the demand for agility in delivering these applications is putting pressure on IT operations and developers in many of the organizations I work with.
In this post, I will talk about the top four innovations that I believe are key for IT organizations seeking to use IaaS to deliver application services in this new landscape and to reconcile the opposing objectives of operations and developers. Let’s take a look:
1. Management of pets and cattle. The approach that many IT departments use today to run applications and systems is financially unsustainable. An analogy I like is thinking of IT systems as pets versus cattle. Pets are treated with care and nursed back to health when ill, an approach that fits customer relationship management (CRM), enterprise resource planning (ERP), database systems of record and other applications where data protection,... [Continue Reading]
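The cattle model can be sketched in a few lines of Python. This is a hypothetical illustration only: the provisioning function and fleet structure are invented for the example, not a real cloud API.

```python
# Hypothetical sketch of the "cattle" approach: unhealthy instances are
# replaced from a common template rather than nursed back to health.
# All names here are illustrative, not a real cloud provider's API.
import itertools

_ids = itertools.count(1)

def provision_from_template(template):
    """Stand up a fresh, identical instance from an immutable template."""
    return {"id": next(_ids), "image": template, "healthy": True}

def reconcile(fleet, template):
    """Cattle: terminate and replace anything unhealthy; never repair in place."""
    return [
        inst if inst["healthy"] else provision_from_template(template)
        for inst in fleet
    ]

fleet = [provision_from_template("web-v1") for _ in range(3)]
fleet[1]["healthy"] = False          # one instance fails
fleet = reconcile(fleet, "web-v1")   # it is replaced, not repaired
```

The point of the sketch is that no instance carries unique state worth saving: recovery is re-provisioning, which is what makes the model cheap to operate at scale.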
Leading technology innovation happens in many environments, from university research labs to Silicon Valley startups to customers who push the limits of technology to adapt it to their business models, and even government. The National Institute of Standards and Technology (NIST) deserves credit for taking an early leadership position, in collaboration with industry, in standardizing the definitions around cloud computing as the technology made inroads into the US federal government. IBM is an active participant in defining and driving private and hybrid cloud standards adoption and in evolving the NIST definition into an implementable reference architecture, one that considers not only the what and why of cloud, but also how it integrates operationally with existing enterprise systems aligned to Information Technology Infrastructure Library (ITIL) and IT Service Management (ITSM) processes.
IBM constantly evolves and refines the Cloud Computing Reference Architecture (CCRA) based on changing regulatory and compliance needs, building on solid security and privacy frameworks. The CCRA is intended as a blueprint and guide for architecting cloud implementations, driven by the functional and non-functional requirements of each implementation. The CCRA defines the... [Continue Reading]
Today, organizations of all kinds face the same challenges as many of our large, complex accounts. Specifically, they want to be able to anticipate market changes and shifts in customer sentiment or preferences while outpacing not only the competition, but also disruptions in their space.
Companies employ strategies to deliver business value by leveraging the following technologies to engage customers:
Mobile – MDM and MADP (Mobile Device Management and Mobile Application Development Platform)
Big data – including NoSQL, which is sometimes glossed as “not only SQL”
The goal is to access applications and data from anywhere in the world. No matter its size, a company wants to be nimble: if not the most nimble in its market, then at least nimble enough to respond quickly to global business trends as they develop.
To do this, organizations need to tap into vast amounts of both structured and unstructured data to provide a competitive edge. The ability to instantly access information at the right time to make effective decisions means that organizations need to be able to manage larger volumes and greater variety of data at a velocity that allows them to stay ahead of trends. The goal is to move beyond intuition and instinct to gather and act upon information of all types (volume and variety), as... [Continue Reading]
I am writing this post as part of the series introduced in my previous post: Re-envisioning enterprise IT in the era of mobile, social with open cloud. That first blog set out the scope of these posts, looking at some of the challenges faced and the potential solutions discussed across the series.
In that first post I noted that an estimated 40 percent of all IT spending now happens outside the IT department. If IT does not change, there is a real risk it will go the way of the dodo.
So what is holding IT back from changing and delivering the agility, flexibility and lower costs that users are looking for?
My assertion is that one of the handicaps facing IT today is the “contract” of behaviors and expectations that has built up between IT and the business. It needs resetting. But what is this contract of expectations? Here are a few of my views.
The near-universal use of project-based funding for application delivery has a perverse effect on how IT invests in and manages the whole life of applications and business services.
As IT’s first focus is typically on delivery and operation, my observation is that the tools, procedures and culture are not in place to allow for change over the course of the life of an application and its supporting... [Continue Reading]
IBM Power Systems deliver advantages that are unique in the industry and provide accelerated innovation for cloud. Whether it’s a private, public or hybrid cloud solution, Power Systems offer a flexible, open and powerful platform for cloud workloads. Here are five key Power Systems advantages for the cloud:
1) Exceptional Reliability, Availability and Serviceability (RAS) – and performance
Reliability and availability are critical for workloads delivered through the cloud. In Power Systems mid-range and high-end systems, we see mean time between failures in the range of 70 to 100 years, which translates to roughly 99.997 percent availability. Power Systems also offer features that help manage virtual machine availability and elasticity, such as Live Partition Mobility and dynamic resource allocation. And with the latest POWER8 announcement, Power Systems have raised the performance customers can get from scale-out servers built on POWER8 technology.
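As a sanity check on that figure, steady-state availability is conventionally computed as MTBF / (MTBF + MTTR). The script below assumes a 24-hour mean time to repair, which is purely an illustrative value (the post does not state one):

```python
# Steady-state availability from MTBF and MTTR: A = MTBF / (MTBF + MTTR).
# The 24-hour MTTR below is an illustrative assumption, not an IBM figure.
HOURS_PER_YEAR = 8760

mtbf_hours = 100 * HOURS_PER_YEAR   # 100-year MTBF, as cited in the post
mttr_hours = 24                     # assumed mean time to repair/replace

availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"{availability:.5%}")        # roughly 99.997 percent
```

Even at the 70-year low end of the quoted MTBF range, the same formula still yields better than 99.996 percent, so the claim is consistent with a repair time on the order of a day.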
2) Leadership virtualization
Power Systems with PowerVM have one of the industry’s most resilient and flexible hypervisors, supporting virtual machines (VMs) as small as one-twentieth of a core or as large as 256 cores. PowerVM provides extraordinary VM isolation, and high density and deep virtualization help lower total cost of ownership and simplify management with... [Continue Reading]
Everyone in the Hadoop world is excited about YARN. For those who don’t follow such topics, YARN is an acronym for “Yet Another Resource Negotiator.” YARN is an important development for organizations deploying Hadoop environments.
What YARN does is essentially decouple Hadoop workload management from resource management, which means that multiple applications can share a common infrastructure pool. While this idea is not new to many of us, it is new to Hadoop. Earlier versions of Hadoop consolidated both workload and resource management in a single JobTracker, an approach that limited customers hoping to run multiple applications on the same cluster infrastructure.
Open source Hadoop 2.2.0 and later incorporate generally available versions of YARN. The community delivered the GA release in Hadoop 2.2.0 in October 2013, and major providers of Hadoop including IBM are at various stages of incorporating YARN into commercial Hadoop offerings.
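That decoupling can be illustrated with a toy model in Python. To be clear, this is not the real YARN API (the class and method names are invented for illustration); it only shows the shape of the idea: one resource manager owns a shared container pool, and each application requests containers from it instead of owning its own scheduler.

```python
# Toy illustration of YARN-style decoupling: a single ResourceManager owns
# the shared pool of containers, while per-application masters only request
# resources from it. Names are illustrative, not the actual Hadoop API.

class ResourceManager:
    def __init__(self, total_containers):
        self.free = total_containers

    def allocate(self, app, wanted):
        """Grant up to `wanted` containers to `app` from the shared pool."""
        granted = min(wanted, self.free)
        self.free -= granted
        return granted

    def release(self, count):
        """Return finished containers to the pool for other applications."""
        self.free += count

rm = ResourceManager(total_containers=10)
mapreduce_grant = rm.allocate("mapreduce-job", 6)  # batch job gets 6
streaming_grant = rm.allocate("streaming-app", 6)  # only 4 remain to grant
```

Under the old JobTracker model, the batch framework and the pool were one component, so a second application type had nowhere to ask; here, both simply negotiate with the same manager.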
Yet another resource negotiator
YARN is well named: while it is an important technology, the world is not suffering from a shortage of resource managers. Some Hadoop providers (including IBM) are supporting YARN while others are supporting Apache Mesos. In addition, there is a plethora of general purpose batch workload managers supporting Hadoop as “yet another workload pattern” (YAWP – you... [Continue Reading]
Among the tech topics that generated the most buzz at the recently concluded Red Hat Summit in San Francisco, cloud, software defined infrastructure and open source stood out. Leading experts in the industry shared valuable insights on the vast opportunity, business value and competitive advantages of these technologies.
In one of the discussions, Scott Firth, Director, IBM Software Defined Environments (SDE), delivered insights on the many facets of cloud, software defined infrastructure and open source, including their respective value propositions, their implications for IT infrastructure and IBM’s next moves around these technologies. The discussion was led by SiliconANGLE’s John Furrier and Wikibon’s Stu Miniman inside theCUBE from the floor of Red Hat Summit 2014. Here are some key excerpts from the conversation:
♦ The discussion started with Scott’s comments on IBM’s strategic decision to invest in Linux back in 1999, when the operating system was still in its infancy, and on IBM’s outlook on open source technologies today
♦ Scott (with IBM for more than 30 years) emphasized some highlights of the long-standing IBM-Red Hat alliance, starting with solutions for Linux applications running on thousands of Linux Virtual Machines on the mainframe, to performing data analytics on Power Systems and Intel-based systems.
♦ On the cloud and open source front, Scott... [Continue Reading]
The vision of the software defined infrastructure (SDI) is to deliver virtualized capabilities across the entire set of resources an application requires, so that those resources can be deployed automatically and quickly with little to no human intervention. Storage is one of the major building blocks in accomplishing this vision, and storage hardware and software architectures must adapt so that storage can be provisioned and remain responsive to the dynamically changing requirements of the SDI. Flash technology is positioned as a key enabler for these new storage architectures; with the right combination of hardware and software, it facilitates efficient, cost-effective and high-performance storage services. Flash-based storage improves I/O performance and efficiency for many workloads, such as database acceleration, server and desktop virtualization, and cloud environments. By shrinking the data footprint, reducing power consumption and increasing performance, flash has become a superior enabler of virtualization and a natural fit for the SDI vision.
Recognizing the growing importance of flash in a software defined infrastructure, IBM is offering end-to-end technical education sessions on flash technologies at Edge 2014, May 19-23 at the Venetian, Las Vegas.
At Edge 2014 – the premier event for infrastructure innovation,... [Continue Reading]
Many organizations are wrestling with the economics of cloud computing. This is especially true in high performance computing (HPC) and analytics, where applications often demand clustered, scaled-out infrastructure. These types of workloads are often “spiky” or unpredictable, and the costs associated with infrastructure can be substantial.
As a few examples:
A life sciences firm may need compute capacity only at particular stages in the drug development lifecycle
An engineering firm’s workload may vary depending on its active contract portfolio or the specific nature of its projects
An insurance firm may require large amounts of computing power to meet regulatory reporting obligations but only for brief periods at month or quarter end
Provisioning infrastructure to meet periodic peaks is costly. Ideas like peak shaving, outsourcing and hybrid clouds are not new, but organizations seeking to leverage public Infrastructure as a Service (IaaS) offerings can run into a variety of technical and business challenges:
How to guarantee quality-of-service (QoS) in multitenant environments
How to manage and secure data
How to manage, meter and throttle the usage of variable cost resources
How to manage commercial software licenses
How to ensure that local assets are fully utilized before tapping assets in the cloud
These business... [Continue Reading]
Your organization might have deployed a cluster or grid on site. But can these resources always meet your peak demands? For example, what happens when several large projects move into the same simulation and design phase at the same time?
Simply adding hardware to address peak workload requirements, especially if they are short term, is probably not an option. Expanding the physical infrastructure can require significant time, expertise and budget. And the data center may already be maxed out on power, cooling and real estate. What’s the answer?
To address these challenges, at Pulse 2014 IBM announced the IBM Platform Computing Cloud Service, which provides ready-to-run clusters in the SoftLayer cloud that are optimized for compute-intensive technical computing and analytics applications. The cloud service comes complete with Platform LSF (SaaS) and Platform Symphony (SaaS) workload management software, dedicated physical machines and the support of the Platform Computing Cloud Operations team.
Organizations that have on-site clusters or grids can quickly address spikes in infrastructure demand by implementing a hybrid cloud. Platform Computing Cloud Service enables these organizations to forward workloads from local infrastructure to a Platform LSF or Platform Symphony cluster in the SoftLayer cloud, quickly accommodating demand without being concerned about security or... [Continue Reading]
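The forwarding behavior can be sketched as a simple placement policy. Note that this is a hypothetical illustration, not how the Platform LSF or Symphony scheduler actually works; the function name and the slot model are invented for the example.

```python
# Hypothetical hybrid-cloud placement policy: fill local cluster slots
# first, then forward overflow jobs to the cloud cluster. This sketch is
# illustrative only, not the actual Platform LSF/Symphony mechanism.

def place_jobs(jobs, local_slots, cloud_slots):
    """Return (local, cloud, queued) job lists, preferring local capacity."""
    local = jobs[:local_slots]                          # saturate on-site grid
    cloud = jobs[local_slots:local_slots + cloud_slots] # burst the overflow
    queued = jobs[local_slots + cloud_slots:]           # wait for free slots
    return local, cloud, queued

jobs = [f"sim-{i}" for i in range(12)]
local, cloud, queued = place_jobs(jobs, local_slots=8, cloud_slots=3)
```

The ordering encodes the cost argument from the challenges above: fixed-cost local assets are fully utilized before any variable-cost cloud resources are metered.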