The Back Of The Napkin Cloud
Welcome! My name is John Pritchard and I'm with IBM's Worldwide Cloud team. I've started this blog as a repository for information on IBM's Private Cloud capabilities. My specific focus is on designing compute solutions for complex Aerospace and Defense systems. Each week I'll take a look at issues surrounding cloud computing in the Federal space and review challenges and lessons learned from some of my experiences in the field.
Cloud, Security & Automation--PULSE Comes to Federal
I had the opportunity to speak at IBM's Pulse Comes to Federal event last week and delivered a session on Emerging Trends and Use Cases in Cloud Computing.
Within my Worldwide Cloud Team at IBM we have been seeing four emerging trends in Private Cloud adoption:
(1) Self Service Provisioning - The most common entry point with Development & Test as the prevailing workload
(2) Analytics - Bringing Business Intelligence tools to the data (rather than the data to the BI tools), or using event-driven decision making with virtualized assets
(3) Application / Platform Service - Shared Services organizations that adopt Cloud as their new delivery model
(4) Cloud Service Delivery Platform - The evolution of the ISP / ASP, with Telcos emerging as strong Cloud Service Providers
Though initially identified in the commercial markets, I have seen these trends emerging in Federal as well. Most Federal agencies investigating Cloud Computing are starting with Dev/Test Self Service Provisioning. In fact, we saw the same pattern when virtualization first appeared. Dev/Test is generally a labor-intensive workload with lots of IT assets, which makes it a great candidate.
The trend I am most excited about is the Cloud Service Provider Model. Here we have vendors standing up Private Clouds on trusted networks and competing to deliver Federal Cloud Services - an example of a Community Cloud by one definition. How long before the trend becomes mainstream? Soon I think. I am already seeing Federal acquisitions using the terms COCO (Contractor-Owned Contractor-Operated), GOCO (Government-Owned Contractor-Operated) and GOGO (Government-Owned Government-Operated).
The Cloud Service Provider Model implements the OSS/BSS Cloud Reference Architecture, generally with a customized front-end, and adds a hybrid Cloud approach for scalability/COOP. The presentation materials can be found here.
NCOIC Lab Interoperability Project
Last week I had the honor of presenting the results of our Cloud Computing findings to members of NATO and the international press. For the past several months I have been working with colleagues from Boeing (US), Finmeccanica (Italy), Thales (France), Raytheon (US) and Lockheed-Martin (US) on a project to demonstrate interoperability best practices. This effort initially involved a demonstration that simulated a joint or "coalition" effort - a scenario where members of different countries act together for a common mission.
For Phase 1 of the project we used the Haiti disaster as an example and showed how systems in globally distributed geographies could share data on a Common Operational Picture. In this case, different companies generated data representing their countries' assets (ships, trucks, etc.), which was combined onto a single GIS map. We demonstrated this capability at the March 2010 NATO Network Enabled Capabilities (NNEC) conference in Rome. Representatives of the European Defence Agency (EDA) and NATO member countries were in attendance at this successful event.
For Phase 2 of the project we chose to investigate the applicability of Cloud Computing to the interoperability demonstration. We expanded the base concept of a single unified Operational Picture to two Operational Pictures that consumed data in different formats.
Common Operational Picture #1 was run from Finmeccanica in Genoa, Italy and consumed data in an HLA format. Common Operational Picture #2 was run from Boeing in Anaheim, California and consumed data in a DIS format. For this phase we introduced messaging middleware to perform the message translation. All in all, a good example of data interoperability.
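To make the translation idea concrete, here is a minimal sketch of what the middleware does conceptually. Real HLA (IEEE 1516) and DIS (IEEE 1278) messages are binary protocols with far richer structure; the dictionary layouts and the `translate_dis_to_hla` function below are simplified stand-ins of my own invention, not the actual NCOIC middleware.

```python
# Illustrative only: simplified stand-ins for DIS and HLA payloads.
# The middleware's job is to accept a message in one simulation
# format and re-emit the same facts in the other.

def translate_dis_to_hla(dis_pdu: dict) -> dict:
    """Map a simplified DIS Entity State PDU to a simplified
    HLA-style object attribute update consumed by COP #1."""
    return {
        "objectClass": "BaseEntity.PhysicalEntity",
        "attributes": {
            "EntityIdentifier": dis_pdu["entity_id"],
            "Spatial": {
                "position": dis_pdu["location"],   # (x, y, z)
                "velocity": dis_pdu["velocity"],
            },
        },
    }

# Example: a ship position report generated by one member lab
dis_message = {
    "entity_id": (1, 42, 7),             # site, application, entity
    "location": (4519.0, -1203.5, 0.0),
    "velocity": (2.1, 0.4, 0.0),
}

hla_update = translate_dis_to_hla(dis_message)
```

Because the translation lives in the middleware rather than in the Common Operational Pictures themselves, each COP keeps consuming its native format and neither side has to change when a new data producer joins.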
Cloud Computing Experiments
For Phase 2 we conducted an experiment to migrate the messaging middleware from physical servers and software running at member labs to virtual machines running in the Cloud. For this effort we connected the IBM Cloud Lab in Dublin, Ireland, which was running an instance of IBM CloudBurst. We took the virtualized middleware and made it a self-service PaaS capability that could be provisioned on demand.
The industry-average time to install and configure physical machines to run the messaging middleware was approximately three days. Once the middleware was migrated to the Cloud, we were able to select the capability from the Service Catalog and have access in less than 30 minutes.
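The catalog flow above can be sketched in a few lines. The catalog entry and the `provision` function here are hypothetical, since IBM CloudBurst exposed this through its own service catalog interface, but the shape of the interaction is the same: the middleware is baked into a pre-built image, and a request against the catalog replaces the multi-day manual install.

```python
# Sketch of the self-service catalog flow (illustrative names only).
from dataclasses import dataclass

@dataclass
class CatalogItem:
    name: str
    image: str   # pre-built VM image with the middleware installed

# The service catalog: capabilities published for on-demand use
CATALOG = {
    "messaging-middleware": CatalogItem(
        name="messaging-middleware",
        image="middleware-v1.0.img",
    ),
}

def provision(item_name: str) -> dict:
    """Request an on-demand instance of a catalog item. In a real
    cloud this triggers automated VM deployment from the stored
    image; no manual install or configure steps remain."""
    item = CATALOG[item_name]
    return {"item": item.name, "image": item.image, "status": "provisioning"}

request = provision("messaging-middleware")
```

The time savings come from where the installation effort is spent: once, when the image is built and published, instead of every time a lab needs the capability.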
We conducted a live demonstration at the Brussels Plenary for members of EDA, NATO and the international press. We showed two Common Operational Pictures running in Italy and the US, consuming data generated by companies across the globe using middleware hosted in the Cloud. If you think about the Haiti scenario or any other disaster requiring a worldwide response I think you can see the value of first responders having access to capabilities in an On-Demand, self-service model.
Clearly there are details to work through with respect to ubiquitous network access, security, etc., but the fundamental concept of Cloud Computing as a delivery model appears to hold great value.