As an IBM Cloud Architect for the past six years, I have focused on the development, delivery and maturity of cloud computing implementations with clients, partners and service providers. Recently, the focus on “hybrid” cloud services and implementations has risen sharply as a strategic initiative for many clients.
By the industry definition, hybrid cloud provides services that extend your data center to off-premise private and public clouds. Hybrid cloud should leverage your investment in a common infrastructure model, operational management, user experience and skills. It allows you to deploy and run your applications onsite, offsite, or in a combination of both. Likewise, the hybrid cloud model you adopt should ensure that your cloud users have no need to rewrite applications or change APIs, and that they have a consistent user experience across on-premise and off-premise deployment environments.
Taking a closer look at off-premise cloud models, many enterprises concerned with control points, security and isolation consider a dedicated private cloud tied into their on-premise data center. This environment provides single-tenant, isolated compute resources and administrative control. It can be combined with additional managed services from a cloud service provider, or be self-managed directly by the enterprise tenant. A dedicated private cloud can be ideal for DevOps and production workloads. Another common model is a... [Continue Reading]
In my first post, I discussed how combining Software Defined Environments (SDE) and deployment automation reduces application delivery time and increases agility. In this post, I look at these capabilities in greater depth.
In summary, they deliver the following benefits:
SDE combines OpenStack-based Software Defined Infrastructure (SDI) with application patterns to repeatably and reliably create the application environments for each stage of the delivery pipeline.
Deployment automation streamlines deployment of applications into development, test and production environments by automating and eliminating manual tasks.
This SDE enabled approach to application delivery is illustrated in the figure reproduced here.
Deployment automation and application lifecycle
Deployment automation is central to enabling IT organizations to accelerate delivery by eliminating manual tasks. As shown above, it automates environment creation via the SDE layer and performs application deployment, along with component tracking and versioning.
These tools also manage the configuration of each SDE environment, database and application component, ensuring repeatable and consistent delivery. This is an end-to-end solution, from test environments through to production. The approach tests the deployment and configuration process as much as the application code itself, eliminating... [Continue Reading]
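To make the flow concrete, here is a minimal Python sketch of such a pipeline. The stage names, pattern contents and function names are illustrative assumptions, not any specific SDE or IBM API; the point is that every stage is built from the same pattern and deployed by the same automated step.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Environment:
    """One stage environment (dev, test or prod), created from a pattern."""
    name: str
    config: dict
    app_version: Optional[str] = None

def provision(stage: str, pattern: dict) -> Environment:
    # SDE-style environment creation: every stage is built from the same
    # application pattern, so configurations cannot drift between stages.
    return Environment(name=stage, config=dict(pattern))

def deploy(env: Environment, version: str) -> Environment:
    # Automated deployment step: no manual tasks, and the deployed
    # version is tracked per environment.
    env.app_version = version
    return env

pattern = {"runtime": "java", "db": "postgres"}  # illustrative pattern contents
pipeline = [deploy(provision(stage, pattern), "1.4.2")
            for stage in ("dev", "test", "prod")]
```

Because provisioning and deployment are the same code for every stage, the delivery process itself is exercised in dev and test before it ever touches production.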
I am writing this blog as part of a series exploring open cloud-inspired approaches such as OpenStack, DevOps and open standards. The idea is to look at how these approaches can enable IT to adapt to the shift toward user and customer engagement via social and mobile applications. In my earlier post, I looked at the tectonic impact that social and mobile applications (described by Geoffrey Moore and others as systems of engagement) are having on IT infrastructures and organizations. I also looked at how the requirement for agility in delivering these applications is putting pressure on IT operations and developers in many of the organizations I work with.
In this post, I will talk about the top four innovations that I believe are key for IT organizations to successfully use IaaS to deliver application services in this new landscape and to address the opposing objectives of operations and developers. Let’s take a look:
1. Management of pets and cattle. Today, the approach that many IT departments use to run applications and systems is financially unsustainable. An analogy I like is thinking of IT systems as pets rather than cattle. Pets are treated with care and nursed back to health if ill, an approach that is applicable for customer relationship management (CRM), enterprise resource planning (ERP), database systems of record and applications where data protection,... [Continue Reading]
Leading technology innovation happens in many environments, from university research labs to Silicon Valley startups, to customers who push the limits of technology to adapt it to their business models, and even to government. We should congratulate the National Institute of Standards and Technology (NIST) for taking an early leadership position, in collaboration with industry, in standardizing the definitions around cloud computing as the technology was making inroads into the US Federal Government. IBM is an active participant in defining and driving private and hybrid cloud standards adoption, and in evolving the NIST definition into an implementable reference architecture that considers not only the what and why of cloud, but also the “how” of operational integration with existing enterprise systems, aligned to Information Technology Infrastructure Library (ITIL) and IT Service Management (ITSM) processes.
IBM constantly evolves and refines the Cloud Computing Reference Architecture (CCRA) to reflect changing regulatory and compliance needs, building on solid security and privacy frameworks. The CCRA is intended to be used as a blueprint and guide for architecting cloud implementations, driven by the functional and non-functional requirements of each cloud implementation. The CCRA defines the... [Continue Reading]
Today, organizations likely face the same challenges as many of our large, complex accounts. Specifically, they would like to be able to anticipate market changes and shifts in customer sentiment or preferences while continuing to outpace not only the competition, but also disruptions in their space.
Companies employ strategies to deliver business value by leveraging the following technologies to engage customers:
Mobile – MDM and MADP (Mobile Device Management and Mobile Application Development Platform)
Big data – including NoSQL, which is sometimes referred to as not just SQL
The goal is to access applications and data from anywhere, globally. No matter the size of the enterprise, companies want to be nimble (if not the most nimble, at least nimble enough to be able to quickly respond to global business trends as they develop).
To do this, organizations need to tap into vast amounts of both structured and unstructured data to provide a competitive edge. The ability to instantly access information at the right time to make effective decisions means that organizations need to be able to manage larger volumes and greater variety of data at a velocity that allows them to stay ahead of trends. The goal is to move beyond intuition and instinct to gather and act upon information of all types (volume and variety), as... [Continue Reading]
I am writing this blog post as part of a series of articles following my previous post: Re-envisioning enterprise IT in the era of mobile, social with open cloud . That first blog introduced the scope of these posts, looking at some of the challenges faced and the potential solutions discussed in this series.
In that first post, I observed that an estimated 40 percent of all IT spending now happens outside the IT department. If IT does not change, there is a real possibility it will go the way of the dodo and become extinct.
So what is holding IT back from changing and delivering the agility, flexibility and lower costs that users are looking for?
My assertion is that one of the handicaps facing IT today is the “contract” of behaviors and expectations that has built up between IT and the business. It needs resetting, but what is this contract of expectations? Here are a few of my views.
The near-universal use of project-based funding for application delivery has a perverse effect on how IT invests in and handles the whole-life management of applications and business services.
As IT’s first focus is typically on delivery and operation, my observation is that the tools, procedures and culture are not in place to allow for change over the course of the life of an application and its supporting... [Continue Reading]
IBM Power Systems deliver advantages that are unique in the industry and provide accelerated innovation for cloud. Whether it’s a private cloud, public cloud or hybrid cloud solution, Power Systems offer a flexible, open and powerful platform for cloud workloads. Here are five effective Power Systems advantages for the cloud:
1) Exceptional Reliability, Availability and Serviceability (RAS) – and performance
Reliability and availability are critical for workloads delivered through the cloud. In Power Systems mid-range and high-end systems, we see mean time between failures in the range of 70 to 100 years. This equates to 99.997 percent availability. Power Systems also have features to help manage virtual machine availability and elasticity such as Live Partition Mobility and dynamic resource allocation. Moreover, with the latest announcement of POWER8 systems, Power Systems have upped the performance customers can get from scale-out servers built on POWER8 technology.
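The arithmetic linking those two figures is easy to check: availability is MTBF divided by MTBF plus mean time to repair (MTTR). The post gives only the MTBF range and the availability percentage, so the roughly 18-hour repair time below is an assumed figure chosen for illustration.

```python
# Availability = MTBF / (MTBF + MTTR).
# The 18-hour MTTR is an assumption; only the 70-year MTBF and the
# 99.997 percent availability figure come from the text above.
HOURS_PER_YEAR = 365.25 * 24

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

mtbf = 70 * HOURS_PER_YEAR           # 70-year mean time between failures
a = availability(mtbf, mttr_hours=18)
print(round(a * 100, 3))             # close to the quoted 99.997 percent
```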
2) Leadership virtualization
Power Systems with PowerVM have one of the industry’s most resilient and flexible hypervisors, supporting virtual machines (VMs) running on as little as one-twentieth of a core or scaling up to 256 cores. PowerVM provides extraordinary VM isolation. High density and deep virtualization help lower total cost of ownership and simplify management with... [Continue Reading]
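A toy sizing check shows what that one-twentieth-of-a-core granularity means for a shared pool. The partition names, pool size and validation logic below are made-up examples for illustration, not a PowerVM interface.

```python
# Toy validation of shared-pool sizing with micro-partitions.
# MIN_CORES mirrors the "one-twentieth of a core" minimum from the text;
# everything else (names, pool size) is an illustrative assumption.
MIN_CORES = 1 / 20  # smallest entitlement per VM

def pool_ok(partitions: dict, physical_cores: float) -> bool:
    """True if every VM meets the minimum entitlement and the guaranteed
    capacity does not exceed the physical cores in the pool."""
    return (all(v >= MIN_CORES for v in partitions.values())
            and sum(partitions.values()) <= physical_cores)

vms = {"web": 0.05, "db": 2.5, "batch": 1.0}
assert pool_ok(vms, physical_cores=4)            # 3.55 cores fits in 4
assert not pool_ok({"tiny": 0.01}, physical_cores=4)  # below the minimum
```

Fine-grained entitlements like these are what allow many small VMs to share a pool densely, which is the density argument the post makes.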
Everyone in the Hadoop world is excited about YARN . For those who don’t follow such topics, YARN is an acronym for “yet another resource negotiator.” YARN is an important development for organizations deploying Hadoop environments.
What YARN does is essentially decouple Hadoop workload management from resource management. This means that multiple applications can share a common infrastructure pool. While this idea is not new to many of us, it is new to Hadoop. Earlier versions of Hadoop consolidated both workload and resource management functions into a single JobTracker. This approach imposed limitations on customers hoping to run multiple applications on the same cluster infrastructure.
Open source Hadoop 2.2.0 and later incorporate generally available versions of YARN. The community delivered the GA release in Hadoop 2.2.0 in October 2013, and major providers of Hadoop including IBM are at various stages of incorporating YARN into commercial Hadoop offerings.
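The decoupling can be pictured with a toy sketch: a global resource manager grants containers from one pool, while each application brings its own workload manager. The class names echo YARN concepts, but everything here (container counts, application names, method signatures) is an illustrative assumption, not YARN's actual API.

```python
class ResourceManager:
    """Global scheduler: grants containers, knows nothing about job logic."""
    def __init__(self, total_containers: int):
        self.free = total_containers

    def allocate(self, requested: int) -> int:
        granted = min(requested, self.free)
        self.free -= granted
        return granted

class ApplicationMaster:
    """Per-application workload manager (e.g. one for MapReduce, one for Spark)."""
    def __init__(self, name: str, needed: int):
        self.name, self.needed, self.held = name, needed, 0

    def negotiate(self, rm: ResourceManager) -> None:
        self.held += rm.allocate(self.needed - self.held)

rm = ResourceManager(total_containers=10)
apps = [ApplicationMaster("mapreduce-job", 6), ApplicationMaster("spark-job", 6)]
for am in apps:
    am.negotiate(rm)
# Both frameworks draw from one shared pool; the second application
# receives whatever capacity remains.
```

Under the old single-JobTracker model, by contrast, the scheduler and the MapReduce job logic were fused, so a second framework could not negotiate for the same pool at all.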
Yet another resource negotiator
YARN is well named. While it is an important technology, the world is not suffering from a shortage of resource managers. Some Hadoop providers (including IBM) are supporting YARN while others are supporting Apache Mesos . In addition, there is a plethora of general-purpose batch workload managers supporting Hadoop as “yet another workload pattern” (YAWP – you... [Continue Reading]
Among the tech topics that generated the most buzz at the recently concluded Red Hat Summit in San Francisco, cloud, software-defined infrastructure and open source stood out. Leading experts in the industry shared valuable insights on the vast opportunity, business value and competitive advantages of these technologies.
In one of the discussions, Scott Firth, Director of IBM Software Defined Environments (SDE), delivered insights on the many facets of cloud, software defined infrastructure and open source, including their respective value propositions, their implications for IT infrastructure and IBM’s next moves around these technologies. The discussion was led by SiliconANGLE’s John Furrier and Wikibon’s Stu Miniman inside theCUBE from the floor of Red Hat Summit 2014. Here are some key excerpts from the conversation:
♦ The discussion started with Scott’s comments on IBM’s strategic decision to invest in Linux back in 1999, when it was still in its infancy, and IBM’s outlook on open source technologies today
♦ Scott (with IBM for more than 30 years) emphasized some highlights of the long-standing IBM-Red Hat alliance, ranging from solutions for Linux applications running on thousands of Linux virtual machines on the mainframe to data analytics on Power Systems and Intel-based systems.
♦ On the cloud and open source front, Scott... [Continue Reading]
The vision of software defined infrastructure (SDI) is to deliver virtualized capabilities across the entire set of resources required by an application, so they can be deployed automatically and quickly with little to no human intervention. Storage is one of the major building blocks in accomplishing this vision. To achieve it, storage hardware and software architectures must adapt so that storage can be provisioned in response to the dynamically changing requirements of the SDI. Flash technology is positioned as a key enabler for these new storage architectures and, with the right combination of hardware and software, facilitates efficient, cost-effective and high-performance storage services delivery. Flash-based storage improves I/O performance and efficiency for many applications, such as database acceleration, server and desktop virtualization, and cloud environments. Flash storage has become a way to compress data, reduce power consumption and increase performance, making it a superior enabler of virtualization and a natural fit for the SDI vision.
Recognizing the growing importance of flash in a software defined infrastructure, IBM is offering end-to-end technical education sessions on flash technologies at Edge 2014, May 19-23 at The Venetian, Las Vegas.
At Edge 2014 – the premier event for infrastructure innovation,... [Continue Reading]