As business evolves, so does the role of the CIO (Chief Information Officer). Today’s CIO is in an ideal position to take on increasing responsibility and control, serving as one of the most important hubs of decision-making across organizations. I spend a good amount of my time speaking with CIOs across industries, listening to the challenges they face and how those challenges have changed since the role came into its own about 10 years ago.
As technology advancements like cloud, analytics and cognitive computing have increased their impact across today’s organizations, the role of the CIO has evolved along with them. Most of the CIOs I speak with are uniquely qualified to lead a cloud strategy because of their intricate knowledge of both IT and business. While an in-depth understanding of technology is critical, no less important is the ability to articulate the business impacts and opportunities of cloud computing and big data to the C-suite. CIOs should prudently evaluate their IT capabilities and organizational design, then create a roadmap for change that identifies new opportunities delivering the best return on investment, aligned with their business strategy and mission needs. Increasingly, I have observed that finding the right platform to learn is a top priority for CIOs.
The CIO Washington D.C. Summit is a unique opportunity for the greatest IT thinkers to collaborate on current industry challenges... [Continue Reading]
The latest global analysis from Synergy Research Group positions IBM among the top three providers in the cloud market for the first quarter of 2014. Synergy Research also recognized IBM as the leader in private and hybrid cloud engagements, with nearly 14 percent of worldwide revenue market share from the third quarter of 2013 through the first quarter of 2014. In addition, Synergy Research noted that IBM ranks second behind Microsoft in year-over-year growth, at 80 percent compared to the same period last year. Given Synergy Research’s findings on growth specific to Infrastructure-as-a-Service, Platform-as-a-Service, and hybrid and private cloud, IBM Cloud’s growth is driven by extensive gains from recent client engagements with major enterprises in a variety of industries. Read the complete press release here.
IBM is a global provider and innovator in end-to-end cloud computing solutions. IBM Cloud Computing has helped more than 30,000 clients around the world, backed by 40,000 industry experts. IBM SoftLayer has served 6,000 new cloud clients since its acquisition in 2013. Today, IBM offers more than 100 cloud SaaS solutions, a network of 40 data centers worldwide and thousands of experts with deep industry knowledge helping clients transform. Since 2007, IBM has invested more than $7 billion in 17 acquisitions to accelerate its cloud initiatives and build a high-value cloud portfolio. Visit ibm.com/cloud for... [Continue Reading]
IBM is a strong believer in the open-standards approach of OpenStack for cloud middleware, and it is now delivering a great new product, Cloud Manager with OpenStack 4.1 (CMO), based on the latest OpenStack Icehouse release. CMO 4.1 enables many standards-based Icehouse capabilities, including orchestration and metering. An easy-to-deploy, simple-to-use cloud management solution, CMO 4.1 also features a self-service portal for workload provisioning in a virtualized environment:
It supports all IBM server architectures and major hypervisors.
It allows you to expand from a private cloud solution to remote capacity as needed.
It lets you move to advanced IBM cloud capabilities without significant re-investment.
With IBM Cloud Manager with OpenStack V4.1 (based upon the OpenStack project), you can take advantage of:
Simplified implementation and advanced resource management that is backed by IBM enterprise-grade lab services and support
Complete access to OpenStack APIs (Application Programming Interfaces)
Easy customization via a REST API to help you tailor the product to unique business environments
A modular, flexible design that offers rapid innovation, interoperability, and freedom from vendor lock-in
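To give a concrete flavor of what "complete access to OpenStack APIs" means in practice, the sketch below builds the JSON body of a standard OpenStack Compute (Nova) v2 "create server" request, the kind of call a self-service portal issues when provisioning a workload. The image and flavor IDs here are placeholders, not values from CMO itself.

```python
import json

def make_server_request(name, image_ref, flavor_ref, key_name=None):
    """Build the JSON body of an OpenStack Compute (Nova) v2
    'create server' (boot) request. Field names follow the
    standard Nova API; the ID values are illustrative."""
    server = {
        "name": name,
        "imageRef": image_ref,    # ID of the Glance image to boot from
        "flavorRef": flavor_ref,  # ID of the flavor (CPU/RAM/disk size)
    }
    if key_name:
        server["key_name"] = key_name  # SSH keypair to inject
    return {"server": server}

body = make_server_request("demo-vm", "img-1234", "flavor-2")
print(json.dumps(body, indent=2))
```

In a real deployment this payload would be POSTed to the Nova endpoint with an authentication token from Keystone; the point is simply that the same open, documented request shape works against any OpenStack-based cloud.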
The recent talk by Tammy Van Hove, IBM Distinguished Engineer, at the OpenStack Summit in mid-May 2014... [Continue Reading]
Today, businesses are increasingly turning to private and hybrid cloud solutions to reduce costs and enable more flexible, scalable business processes. These cloud-enabled infrastructures involve complex management and operational challenges, but there is a better way. A new generation of Software Defined Environment (SDE) makes it possible to deliver common cloud services for compute, storage and network while supporting multiple hypervisors and multi-vendor platforms. This approach is one of the most dynamic innovations in private and hybrid clouds, bringing advanced automation and orchestration to both system management and software distribution. Moreover, it provides self-service provisioning and an accountability mechanism that allows IT to keep track of who is doing what with company resources. Many federal agencies are adopting this strategy to address the need to both optimize and innovate.
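The pairing of self-service provisioning with accountability described above can be sketched as a tiny provisioning wrapper that records who requested what. This is a toy model for illustration, not the actual SDE implementation; the class and field names are made up.

```python
import datetime

class SelfServicePortal:
    """Toy self-service provisioner: users request resources
    themselves, and every grant is recorded in an audit trail
    so IT can see who is consuming which company resources."""

    def __init__(self):
        self.audit_log = []

    def provision(self, user, resource, quantity):
        # Record the request before (conceptually) fulfilling it.
        entry = {
            "user": user,
            "resource": resource,
            "quantity": quantity,
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        self.audit_log.append(entry)
        return entry

portal = SelfServicePortal()
portal.provision("alice", "vm.small", 3)
print(len(portal.audit_log))  # one recorded request
```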
Given the growing demand for this technology among our federal stakeholders, IBM, along with the Business of Federal Technology (FCW), is presenting a live webcast today, June 24th, 2014 at 2 PM ET. I invite you to join us and see how innovations like SDE, open standards and other private and hybrid cloud technologies can bring superior business value while reducing costs.
Also, I will be demonstrating real-life solutions and use cases that will help you make the most of this event as we... [Continue Reading]
IBM Edge2014, the premier event for infrastructure innovation, was a success!
More than 4,200 business and IT professionals from around the world gathered in Las Vegas for this important IBM event. From the packed General Sessions to EdgeTalks... from WinningEdge and the Managed Service Providers (MSP) Forum... to more than 550 Technical Edge sessions... the conference served as an outstanding showcase for exciting new IT infrastructure breakthroughs and technology announcements related to Storage, x86 Systems and PureSystems.
Here’s a recap of some of the noteworthy content, memorable moments and takeaways from Edge2014:
The conference kicked off with an exclusive two-day ExecutiveEdge event featuring a number of exciting sessions, including two General Sessions and EdgeTalks.
In EdgeTalks, the speakers focused on how innovative thinking in areas like healthcare, urban renewal and cybersecurity can produce unexpected, extraordinary outcomes. IBM executive host Surjit Chana, CMO and Vice President, Systems & Technology Group (STG), connected the dots, bringing the ideas shared back to what they meant for the audience: being creative about leveraging IBM infrastructure to exploit the capabilities of evolving Cloud, Analytics, Mobile, Social and Security (CAMSS) technologies. This new EdgeTalks format was a big hit... [Continue Reading]
There is a lot of hoopla out there about the cloud, especially since dozens of startups have gone public over the last year or two, and many more are in the queue. Here are what I believe to be the top five cloud predictions for the coming years:
1. More application availability on the cloud
With most new software being built for cloud from the outset, it is predicted that by 2016 over a quarter of all applications (around 48 million) will be available on the cloud (Global Technology Outlook: Cloud 2014: A More Disruptive Phase).
This makes sense when you consider that about 56 percent of enterprises consider cloud to be a strategic differentiator, and approximately 58 percent of enterprises spend more than 10 percent of their annual budgets on cloud services. The Everest Group, in their recent Enterprise Cloud Adoption Survey, further argues that cloud adoption enables operational excellence and accelerated innovation.
2. Increased growth in the market for cloud
According to Gartner, the cloud is here, and it is accelerating globally. Based on its forecast for 2011-2017, Gartner expects cloud adoption to hit $250 billion by 2017. In the fourth quarter of 2013, we saw this prediction supported by enterprises worldwide, which were increasingly relying on cloud to develop, market and sell products, manage supply chains and more.
In the same forecast, Gartner also suggested that the worldwide... [Continue Reading]
Software Defined Networking (SDN) is an evolving architecture that is dynamic, manageable, cost-effective and adaptable, making it ideal for the high-bandwidth, dynamic nature of today's applications. Now that SDN has become everyday parlance for many IT professionals working in a Software Defined Environment, what do we really mean when we talk about SDN? What are its implications and business opportunities?
IBM Edge 2014, the premier event for infrastructure innovation, taking place May 19-23 at the Venetian in Las Vegas, will address many concerns about SDN and discuss end-to-end SDN architecture through exclusive expert sessions, client success stories and more. I would like to highlight some of the sessions at Edge that will outline many aspects of SDN and how IBM SDN capabilities help customers develop, deploy, manage and maintain a simplified, responsive and highly adaptive infrastructure:
Building Scalable, Programmable Network Fabric with SDN
Software Defined Networking (SDN) promises to let administrators dynamically program networking devices using APIs. The IBM Software Defined Network for Virtual Environment (SDN VE) controller, with its related applications, allows administrators to build a large-scale network fabric from a central controller while using open standards such as OpenFlow, OpenStack, and OpenDaylight. This session will explain the technology behind SDN VE, unravel the... [Continue Reading]
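To make the idea of "programming the network" concrete, here is a toy, vendor-neutral sketch of the OpenFlow-style model: flow rules that pair a match on packet fields with a list of actions, with the highest-priority matching rule winning. The field and action names are illustrative only; this is not the SDN VE or OpenFlow wire format.

```python
def make_flow_rule(priority, match, actions):
    """Toy representation of an OpenFlow-style flow entry:
    packets matching 'match' get 'actions' applied; when
    several rules match, the highest priority wins."""
    return {"priority": priority, "match": match, "actions": actions}

def apply_rules(rules, packet):
    """Return the actions of the highest-priority matching rule."""
    for rule in sorted(rules, key=lambda r: -r["priority"]):
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["actions"]
    return ["drop"]  # no rule matched: drop the packet

rules = [
    make_flow_rule(10, {"dst_port": 80}, ["forward:port2"]),
    make_flow_rule(1, {}, ["forward:controller"]),  # table-miss rule
]
print(apply_rules(rules, {"dst_port": 80}))  # highest-priority match wins
```

The appeal of SDN is that a central controller installs and updates such rules across many switches through an API, instead of each device being configured by hand.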
While cloud has been clearly identified as the next step to IT optimization, essential for increased performance and cost reduction, many of us are in a haze when it comes to the fundamental security measures required.
Our fears related to cloud security, such as the fear of the unknown (where is my data stored?), the fear of the unseen (how does my data flow from one virtual machine to another?) and the fear of how ‘secure’ the cloud really is, lead us to be wary of cloud adoption. In addition, the new layers of infrastructure create new grey areas, requiring new security solutions that can provide protection specific to the virtual layer. This blog post, however, will focus on fundamental hardware security.
Let’s take a look at the basic components of cloud and ways to optimize their security.
Even though the cloud uses a different mechanism to deliver IT infrastructure, be it computing power, memory or storage, the elements that create a cloud still include traditional data center components: servers, networks, nodes and endpoints. The risks that exist in traditional data centers are equally relevant in a cloud environment. Hence, traditional protection solutions such as firewalls, network protection and host security are... [Continue Reading]
Many organizations are wrestling with the economics of cloud computing. This is especially true in High Performance Computing (HPC) and analytics, where applications often demand clustered, scaled-out infrastructure. These types of workloads are often “spiky” or unpredictable, and the associated infrastructure costs can be substantial.
As a few examples:
A life sciences firm may need compute capacity only at particular stages in the drug development lifecycle
An engineering firm’s workload may vary depending on its active contract portfolio or the specific nature of its projects
An insurance firm may require large amounts of computing power to meet regulatory reporting obligations but only for brief periods at month or quarter end
Provisioning infrastructure to meet periodic peaks is costly. Ideas like peak-shaving, outsourcing and hybrid clouds are not new, but organizations seeking to leverage public Infrastructure-as-a-Service (IaaS) offerings can run into a variety of technical and business challenges:
How to guarantee quality-of-service (QoS) in multitenant environments
Data management and security
How to manage, meter and throttle the usage of variable cost resources
How to manage commercial software licenses
How to ensure that local assets are fully utilized before tapping assets in the cloud
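The last challenge above, making sure local assets are fully utilized before tapping cloud capacity, can be sketched as a simple admission policy. This is a toy model of peak-shaving for illustration; real schedulers weigh many more factors than slot counts.

```python
def place_jobs(jobs, local_slots, cloud_slots):
    """Toy peak-shaving policy: fill local capacity first,
    then burst the overflow to cloud slots; anything beyond
    that waits in the queue.
    Returns (run_local, run_cloud, queued) lists of job names."""
    local = jobs[:local_slots]
    cloud = jobs[local_slots:local_slots + cloud_slots]
    queued = jobs[local_slots + cloud_slots:]
    return local, cloud, queued

local, cloud, queued = place_jobs(["j1", "j2", "j3", "j4", "j5"], 2, 2)
print(local, cloud, queued)  # j1,j2 run locally; j3,j4 burst; j5 waits
```

The point of the sketch is the ordering: cloud capacity, which carries variable cost, is touched only after the already-paid-for local slots are full.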
These business... [Continue Reading]
Your organization might have deployed a cluster or grid on site. But can these resources always meet your peak demands? For example, what happens when several large projects move into the same simulation and design phase at the same time?
Simply adding hardware to address peak workload requirements, especially if they are short term, is probably not an option. Expanding the physical infrastructure can require significant time, expertise and budget. And the data center may already be maxed out on power, cooling and real estate. What’s the answer?
To address these challenges, at Pulse 2014 IBM announced the IBM Platform Computing Cloud Service, which provides ready-to-run clusters in the SoftLayer cloud, optimized for compute-intensive technical computing and analytics applications. The Cloud Service comes complete with Platform LSF (SaaS) and Platform Symphony (SaaS) workload management software, dedicated physical machines and the support of the Platform Computing Cloud Operations team.
Organizations that have on-site clusters or grids can quickly address spikes in infrastructure demand by implementing a hybrid cloud. Platform Computing Cloud Service enables these organizations to forward workloads from local infrastructure to a Platform LSF or Platform Symphony cluster in the SoftLayer cloud, quickly accommodating demand without being concerned about security or... [Continue Reading]
Effective management and use of virtualized IT resources is a key pillar of the IBM Software Defined Environment (SDE) strategy. Of course, virtualized IT is nothing new; IBM invented it back in the late 1960s, and it is used to this day by many organizations as part of Virtual Machine/370 and follow-on systems. Users and applications were allocated virtual machines that gave them virtual compute, storage and even cool things like virtual printers and punches!
So what is different about the technology and the environment now that brings virtualization to the forefront of enabling a new wave of IT automation for today's demanding mobile, big data and analytics workloads?
Earlier mainframe virtualization environments, and the more recent UNIX and x86 virtualization solutions, were based on proprietary formats and interfaces. This left anyone trying to implement an IT automation solution on top of these systems to write multiple implementations, or to use plugins and abstraction layers to hide the differences. Today, with OpenStack receiving widespread acceptance as an open standard for virtual IT resource management, solution developers can develop to one interface.
In my early days as a programmer, I wrote automation programs to create and configure VM/370 virtual resources in support of diverse applications. This included carving out virtual disks and allocating... [Continue Reading]
In my previous blog, I explained how OpenStack with IBM Platform Resource Scheduler (PRS) features enabled can optimize the infrastructure while helping to meet an application's service-level objective. In this blog we will explore how PRS works with the OpenStack native Nova scheduler to optimize the environment. While OpenStack has a default scheduler that handles the placement of VMs, it lacks several key capabilities. First, it makes decisions based only on static information in Nova's SQL database, which is incomplete. Second, it schedules resources only once, during initial placement; there is no ability to optimize the system at run time by re-placing VMs as usage of the environment changes and evolves. OpenStack is flexible enough to allow third-party schedulers to fit into the framework and provide enhanced functionality. It exposes scheduler hints that can be passed at VM creation time, either through configuration of the flavor or through the nova boot command. PRS fits seamlessly into the OpenStack environment to provide enhanced value.
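The scheduler hints mentioned above travel inside the boot request itself. The sketch below shows where they sit in a Nova v2 "create server" body, alongside the server document, where a pluggable scheduler can read them at placement time. The hint key and values here are made up for illustration; actual PRS hint names are not shown in this post.

```python
import json

def boot_request_with_hints(name, image_ref, flavor_ref, hints):
    """Build a Nova v2 'create server' body carrying scheduler
    hints. In the v2 API the hints sit beside (not inside) the
    'server' document, under the 'os:scheduler_hints' key."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
        },
        # Hint keys/values are scheduler-defined; these are invented.
        "os:scheduler_hints": hints,
    }

body = boot_request_with_hints("web-1", "img-1", "flavor-2",
                               {"group": "web-tier"})
print(json.dumps(body, indent=2))
```

Because the hints ride along with every boot request, a third-party scheduler plugged into the framework can act on them without any change to the client-facing API.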
PRS enhances OpenStack with superior abilities to place VMs at initial deployment and to re-place VMs on new servers as conditions change at run time. PRS can support global placement policies across an entire cloud or cluster, as well as on individual application, subsystem or tier deployments. Nova uses the concept of host aggregates to... [Continue Reading]
IBM Platform Resource Scheduler (PRS) adds powerful scheduling and resource optimization capabilities to OpenStack environments. It leverages IBM's experience handling large-scale data centers for high-performance computing, analytics and big data, and applies it to new environments. In this blog we give a flavor of how PRS can work together with OpenStack in the context of an online e-commerce store. We will explore how OpenStack with PRS features enabled can optimize the infrastructure while helping to meet an application's service-level objective.
Online retailers face the challenge of rapidly developing applications that deliver personalized shopping experiences across a variety of devices, including web, mobile, tablets and, in the future, connected cars. We examine how such a retailer might develop applications for the delivery of physical or virtual goods. This structure would be typical of any retail environment that uses the internet to offer users access to an online product catalog they can browse and search, and ultimately make a purchase decision. Leveraging information about users' historic purchase patterns and their contacts gleaned from social networks like Facebook or Twitter, the system can make recommendations about products they might be interested in. The following provides a high-level view of the application architecture. It is meant to be... [Continue Reading]
Over a year ago, I embarked on a fantastic journey that led me to meet with some of IBM’s most prestigious, cutting-edge customers around the world and talk to them about IBM’s SDN offering and strategy. These customers, no matter their industry, geography or business model, all had one important question on their mind: how do I optimize my data center operations to be more like the large “cloud operators” of this world? It had become clear to them that they could no longer operate their infrastructure in silos, and that most of the inefficiency came from how cross-functional groups worked together. While each group (server, storage and networking) did a fair job in its own area, deploying a new system that utilized elements from multiple areas drove cost and deployment time up exponentially.
With this premise in mind, we paid close attention to details when developing the new IBM Software Defined Network for Virtual Environment (SDN VE) controller. Our focus with SDN VE is on the system point of view, not the individual parts. For example, in many organizations network overlays (also known as network virtualization) are deployed and managed by the server virtualization team. Why? Simply because the virtual switch is part of the hypervisor and connects directly to the virtual machines (VMs). While a... [Continue Reading]
IBM’s Software Defined Environment (SDE) is strengthening its portfolio with the introduction of Platform Resource Scheduler, a High Performance Computing (HPC) scheduler technology for OpenStack cloud management. As a resource-aware and workload-smart solution, IBM Platform Resource Scheduler is a key element of the IBM SDE portfolio, enabling dynamic allocation of infrastructure using intelligent resource scheduling. “After making a major commitment to OpenStack earlier last year, IBM is now beginning to layer on top of OpenStack a suite of management technologies that promise to make it easier to dynamically manage cloud computing at scale as part of a larger software-defined environment (SDE),” explains Jay Muelhoefer, director of strategy for IBM Platform Computing, in his recent ITBusinessEdge article: IBM Invokes HPC Scheduler Technology to Enhance OpenStack.
IBM Platform Resource Scheduler brings enterprise-class dynamic resource management that automatically optimizes resource utilization, reduces the cost of cloud ownership and delivers higher workload quality through a comprehensive set of intelligent, policy-driven scheduling features. Platform Resource Scheduler is an extension of IBM’s SmartCloud Orchestrator that leverages IBM Platform Computing technology to simplify the management... [Continue Reading]