Have you checked out the features in the new release of IBM Smart Business Development and Test on the IBM Cloud? Well, you should. Version 1.1 provides support for Virtual Private Networks and Virtual Local Area Networks, plus new premium support services are now available. I've heard from my tweeps on Twitter that the new release rocks, so I had to share the news with all of you in our very cool developer community.
Okay, so if you want to realize faster application deployment with reduced costs, you have to check out the IBM Cloud. You have virtually no infrastructure to maintain, and you benefit from pay-as-you-go pricing. Plus, you can set up more accurate test environments in minutes instead of weeks using standardized configurations. Sound irresistible?
So you ask, what does this new release really mean for me as a developer? Well here's a quick summary of what Version 1.1 has to offer:
Security is a top priority: you can now use a VPN to access your machine instances on the IBM Cloud, providing virtual network isolation for your instances. Each VPN service consists of a private virtual LAN (VLAN) in an IBM Cloud Center of your choice plus a VPN gateway for accessing that VLAN. Pretty cool!
In addition, the VPN option allows isolation of your development and test environment on the IBM Cloud on a VLAN that only you can access. Plus your instance is not accessible from the Internet or from other instances unless you have provisioned them to use your private VLAN. Very secure.
New premium support services have been added. On top of the existing tech support, you may also purchase premium levels of support that include around-the-clock telephone support and a web-based ticketing system to submit and review service requests. Premium support also includes remote technical assistance covering use of the Cloud web portal, access to services, instance creation, and image management functions within the portal. In addition, you can add Linux operating system support for Linux provisioned through the Cloud web portal, including support for virtual machine instances. This is really awesome.
Driven by trends in the consumer internet, cloud computing is becoming the new way to consume and deliver IT services. As IT professionals, we need to understand the different aspects of cloud to seize this opportunity to grow our careers and help our clients toward a successful adoption of cloud computing.
I’m in the process of learning several aspects of cloud: emerging trends in cloud solutions, workloads, infrastructure, technologies and the modern services industry. So I had the idea of posting my learning as a series of blogs from which any cloud enthusiast can benefit to understand cloud computing. When discussing a topic, instead of reinventing the wheel, let's build the content with links to different articles for further reading.
The articles shall cover the entire lifecycle of a cloud project, covering various aspects right from the business requirements through architecture/design and implementation to operations. The intention of this blog is to provide the reader a step-by-step understanding of one or more of the following broad topics of cloud computing:
Delivery Models - Infrastructure as a Service, Platform as a Service, Software as a Service, Business Process as a Service
Deployment Models - Private Clouds, Public Clouds, Hybrid Clouds, Industry Clouds
Management - Asset Management, Business Resiliency, Service Management, Capacity Planning, Charging Models and Economics, Usage Reporting, Billing & Metering, Provisioning, Monitoring
We will have something to learn every week and will dedicate each week to understanding one of the above topics. So by the end of the 16 weeks we have remaining in the year, we will have learned all the steps to walk on cloud. The comments on these posts from all the members will definitely go a long way in getting our steps right and enriching the content. So c'mon everyone, let's take a walk in the clouds – step by step…
I just wanted to give everybody a quick update on the cloud certification: I took the pre-assessment exam, Test 000-032: Foundations of IBM Cloud Computing Architecture V1. I'm happy to say I received a passing score of 75%. The pre-assessment exam was broken down into three sections:
Cloud Computing Concepts and Benefits
Cloud Computing Design Principles
IBM Software Cloud Computing Architecture
Believe it or not, I had the most difficulty with section 3, identifying IBM software products. Maybe that's something we can discuss during our next study session. Has anybody else taken the pre-assessment exam? I'd like to hear your thoughts about it.
Cloud Security – The Top Concern and Opportunity
First of all, wishing all my readers a
very happy and prosperous year 2012 ahead.
A few things that happened towards the end of the year were significant to me. IBM acquired Q1 Labs to drive greater security intelligence and created a new Security Division. I also joined this newly formed IBM Security Systems team last quarter as a solution architect for cloud security. This is a great time to be looking at cloud security. I'm happy to be in this new role, where I can provide solutions to customers to handle their cloud security concerns and make it easy for them to adopt cloud and innovate faster than before.
In my previous post, we discussed security as the top concern keeping customers and enterprises from adopting cloud. As part of this year's posts, I plan to discuss the various security issues and aspects of cloud computing. We will explore the unique challenges of cloud security and discuss which aspects are important for each customer adoption pattern we have seen.
We will also learn how the IBM Security Framework can be used to address the various security challenges. I look forward to your comments and inputs on this journey of understanding the security requirements for cloud and how we can overcome this major challenge to cloud adoption using the world's most comprehensive security portfolio – IBM Security Systems. I'll try to elaborate on the IBM point of view on cloud security and discuss the architectural model to address the security requirements for cloud. Stay tuned and keep those comments and inputs coming.
While I’m writing this blog, the ministers of Tamil Nadu and Kerala are having a meeting with the Prime Minister to discuss the contentious issue of Mullaperiyar at length. For those who don’t know about this issue, it is about the Mullaperiyar Dam in Kerala.
Mullaperiyar Dam is a masonry gravity dam over River Periyar and is operated by the Government of Tamil Nadu based on a 999-year lease agreement. The catchment areas and river basin of River Periyar downstream include five districts of central Kerala, namely Idukki, Kottayam, Ernakulam, Alappuzha and Thrissur, which together are home to a large population.
This dam is at the centre stage again in the wake of reports that the dam is weakening due to an increase in incidents of tremor in Idukki district in Kerala. Ministers from Kerala are seeking Central Government intervention in ensuring the safety of the dam. At the same time, Tamil Nadu is insisting on increasing the water level in the reservoir to enhance water supply to the state. While Tamil Nadu wants to increase the water level in the reservoir, Kerala has been insisting that it be reduced from the current 136 feet to 120 feet.
Currently I don’t think we have clear metrics on the exact usage of water by each state, what the right level of water to be retained by the dam is, what the risks are, and so on. We have been relying on whatever data we already have.
However you look at it, whether there is too much or not enough, the world needs a smarter way to think about water. We need to look at the subject holistically, with all the other considerations as well. We use water for more than drinking. We need to make an inventory of how much water we get and how it is used – by industry, irrigation, and so on.
This is where I think we need smarter ways to manage the water, in the best possible way that addresses both states' concerns. Smarter Water Management can help us think in a smarter way about water. For instance, IBM is helping the Beacon Institute build a source-to-sea real-time monitoring network for New York’s Hudson and St. Lawrence Rivers that reports on conditions and threats in real time. There are many other case studies across the globe on IBM Smarter Water Management.
Those interested in the problem and the possible solutions should definitely read IBM’s broader outlook on water management as covered in the Global Innovation Outlook. Another interesting partnership between IBM and The Nature Conservancy has IBM providing a state-of-the-art support system for a free, online application that will provide easy access to data and computer models to help watershed managers assess how land use affects water quality.
Though water is a worldwide resource, it is treated as a regional issue. I think we should try putting technology to use to solve our water problems. The solution should be a more instrumented, interconnected and intelligent system that not only takes into account real-time monitoring of the river but also includes early warning systems to notify us of risks such as earthquakes. IBM’s Strategic Water Management Solutions include offerings to help governments, water utilities, and companies monitor and manage water more effectively. The IBM Strategic Water Information Management (SWIM) solutions platform is both an information architecture and an intelligent infrastructure that enables continuous automated sensing, monitoring, and decision support for water management.
Now, you might be wondering what this has to do with cloud and why this post is on Cloud Computing Central. For these solutions and platforms to be successful, it is highly important that we have energy-efficient high-performance computing platforms and complex sensor, metering, and actuator networks. Such platform needs, along with the flexibility to have the solution on-premise or to leverage different delivery models, can only be supported through a cloud.
I think we should leverage these solutions on the cloud to solve this issue and keep all the states and their people happy :-).
As we discussed in the previous post, it is important that all the processes work together to bring successful automation to the cloud management platform. A process workflow automation engine is what makes this possible. In this chapter we will discuss the Tivoli process automation engine, which forms the base for IBM process automation in the cloud space.
The Tivoli process automation engine provides a user interface, configuration services, workflows and the common data system needed for IBM Service Management products and other services. As we already know, IBM Service Management (ISM) is a comprehensive and integrated approach to service management, integrating technology, information, processes, and people to deliver service excellence and operational efficiency and effectiveness for traditional enterprises, service providers, and mid-size companies. The Tivoli process automation engine, previously known as Tivoli base services, provides
the base infrastructure for applications like Tivoli Maximo Asset Management,
Change and Configuration Manager Database (CCMDB), Tivoli Service Request
Manager (SRM), Tivoli Asset Management for IT (TAMIT), Tivoli Provisioning
Manager as well as Tivoli Service Automation Manager. Any product that has the Tivoli process automation engine as its foundation can be
installed with any other product that has the Tivoli process automation engine.
Management that integrates and automates IT management processes
Management that integrates people, processes, information and technology
for real business results
Management to automate tasks to address application or business service
operational management challenges
Through a common process automation engine, we can successfully link operational and business services with infrastructure through a single (J2EE) platform. We can also leverage current investments by linking this engine with existing process automation technologies and products. So by building a unified platform to automate processes, we have taken data integration to the next level, where sharing data between applications has never been easier. This integrated process automation platform can
support the repeatable IT functions like Incident Management, Problem
Management, Change Management, Configuration Management all the way through to
Release Management. All of these processes tie into the CMDB where they share
consistent data via bidirectional integration. The platform supports best
practices such as ITIL and other Industry best practices. This facilitates an automated approach across
the IT management lifecycle. It also forms the basis for automating repetitive tasks that can be handled by the system instead of requiring costly human intervention. TPAE, through its adapters, provides data federation from the multiple sources you already have, translating the information into usable data that can be leveraged by internal processes and workflows.
Figure 1 Tivoli
process automation integrated portfolio
A cloud is not a cloud if it is not elastic. The elastic property of the cloud, to expand and shrink based on demand, is possible only with proper capacity planning. I feel the most difficult exercise while building a cloud solution is capacity planning for your cloud. By this, I mean you have to size the managed environment as well as the management environment.
Most of the engagements that I’ve walked into have some existing capacity or infrastructure that the client wants us to leverage in the cloud. So the comparison becomes difficult if you don’t have a standard measuring unit for your infrastructure – for instance, how do you know how a quad-core on an Intel platform compares to a POWER7 core? I found a good explanation in this interesting article –
The answer to this difficult question was to use something called the cloud CPU unit, which is nothing but the computing power equal to the processing power of a one-gigahertz CPU. When a user requests two CPUs, for example, they will get the processing power of two 1 GHz CPUs. This means that a system with two CPUs, each with four cores, running at 3 GHz will have the equivalent of 24 CPU units (2 CPUs × 4 cores × 3 GHz = 24 CPU units).
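As a quick sanity check, the arithmetic behind the cloud CPU unit can be written as a tiny helper (the function name is my own, purely illustrative):

```python
# Illustrative helper (the name cpu_units is an assumption of mine) for the
# cloud CPU unit arithmetic: one unit equals the power of a 1 GHz CPU.
def cpu_units(cpus, cores_per_cpu, ghz_per_core):
    """Total cloud CPU units for a system."""
    return cpus * cores_per_cpu * ghz_per_core

# The example from the article: two CPUs, four cores each, at 3 GHz.
print(cpu_units(2, 4, 3.0))  # 24.0
```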
The other dimension of the complexity is determining the resource needs and doing the trending and forecasting. I typically collect the projections from the clients and then put down some critical assumptions to determine how big my cloud should be. Some critical questions that I typically ask:
How many concurrent users and peak users, and what percentage of these users needs to be covered?
What type of workloads do they typically run – development, test?
What are the image attributes – memory, CPU, storage, etc.?
An infrastructure planner for cloud made life easy for me: it had a user-friendly interface to take me through these steps and arrive at a sizing for the managed environment. Once we know the managed environment, we can size the management platform. I’ll discuss the details of how to plan the managed environment in my next post.
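Even without a planner tool, the rough sizing arithmetic from those questions can be sketched in a few lines. Everything here (the function name, the concurrency and peak-coverage ratios, and the example image attributes) is an illustrative assumption, not output from any real planning tool:

```python
# Illustrative sizing sketch; the function name, ratios and image values
# below are assumptions for demonstration, not a real planner's output.
def size_managed_environment(total_users, concurrency_ratio, peak_coverage, image):
    """Estimate aggregate resources for the managed environment.

    image: per-instance attributes, e.g. {'cpu_units': .., 'mem_gb': .., 'storage_gb': ..}
    """
    # Concurrent instances we must support at the covered peak.
    instances = round(total_users * concurrency_ratio * peak_coverage)
    return {
        "instances": instances,
        "cpu_units": instances * image["cpu_units"],
        "mem_gb": instances * image["mem_gb"],
        "storage_gb": instances * image["storage_gb"],
    }

# Example: 1,000 projected users, 30% concurrent, covering 80% of the peak,
# with a hypothetical standard development/test image.
dev_test_image = {"cpu_units": 2, "mem_gb": 4, "storage_gb": 40}
print(size_managed_environment(1000, 0.3, 0.8, dev_test_image))
# {'instances': 240, 'cpu_units': 480, 'mem_gb': 960, 'storage_gb': 9600}
```

The point of the sketch is that once the assumptions are explicit, changing a projection (say, peak coverage) immediately shows its effect on the managed environment size.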
I’ll be interested in putting together the top 10 parameters that are critical for sizing the cloud managed and management environments. I look forward to your comments.
One of the important things to decide when you discuss cloud service strategy and design is the consideration of a Reference Architecture. This is something useful to align to, as it represents the blueprint for your cloud and reduces implementation risk. The Cloud Computing Reference
Architecture (RA) is intended to be used as a blueprint / guide for
architecting cloud implementations, driven by functional and non-functional
requirements of the respective cloud implementation. The RA defines the basic
building blocks - architectural elements and their relationships which make up
the cloud. The RA also defines the basic principles which are fundamental for
delivering & managing cloud services.
A reference architecture is more than just a collection of technologies and products. It consists of several architectural models and is much like a city plan. The RA defines how your cloud platform should be constructed so that it can satisfy not only your current demands but also be extensible to support the future needs of a diverse user population. So this blueprint should be responsive to changing business and technology requirements and adaptable to emerging technologies. Existing “legacy” products and technologies as well as new cloud technologies can be mapped on the AOD to show integration points among the new cloud technologies and integration points between the cloud technologies and already existing ones. By delivering best practices in a standardized, methodical way, an RA ensures consistency and quality across development and delivery.
The IBM Cloud Computing RA is structured in a modular fashion around the functional capabilities (architectural elements), the user roles (that we discussed in Chapter 12) and their corresponding interactions. The IBM CCRA was created based on several cloud engagements and incorporates the good practices and methods implemented across these projects. So for an end user adopting these good practices, the risk and cost of implementing their cloud will be lower. The CC RA is built on the ELEG (Efficiency, Lightweightness, Economies-of-scale, Genericity) principles.
One of the principles that I want to highlight here is the Genericity Principle – the capability to define and manage generically along the lifecycle of cloud services: be generic across I/P/S/BPaaS and provide an ‘exploitation’ mechanism to support various cloud services using a shared, common management platform (“Genericity”). As we discussed in the cloud delivery and deployment models (Chapter 3), there can be many models for the deployment and delivery of cloud services. A cloud service can represent any type of (IT) capability which is provided by the cloud service provider to cloud service consumers - infrastructure, platform, software or business process services. The beauty and significance of the IBM Cloud Computing Reference Architecture is that it can cater to any of these service delivery and deployment models. So whether you are building a private cloud or a public cloud, or using cloud to deliver IaaS, PaaS or SaaS, the RA remains the same and handles all of these combinations. We have seen the capabilities that we need (Chapter 6) for implementing a common cloud management platform.
IBM has recently submitted the Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) to the Cloud Architecture Project of the Open Group, a document based on “real-world input from many cloud implementations across IBM” meant to provide guidelines for creating a cloud environment. Check out this link, which has the interview with Heather Kreger, one of the authors of the Cloud Computing Reference Architecture, as well as the details of the components that make it up. On the topic, there is also an article that I found in the SYS-CON cloud computing journal comparing the reference architectures of the big three (IBM, HP and Microsoft), which is an interesting read.
But before we get into the details of the Service Implementation / Transition phase, it is important that we understand the bigger picture. The Word document IBM Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) provides a great description of this bigger picture and goes into the details as required. The architectural principles define the fundamental principles which need to be followed when realizing a cloud across all implementation stages (architecture, design, and implementation). This is a must-read for all - development teams implementing the cloud delivery & management capabilities as well as practitioners implementing private clouds for customers.
Chapter 6 - Multiple Entry Points to Deploy and Manage Cloud-Based Services
Cloud service management capabilities are needed to enable visibility, control and automation of cloud services. IBM provides the following open-standards-based integrated capabilities to implement service management for the cloud – hardware, software and services optimized for cloud.
If you are looking for an à la carte software offering for maximum flexibility, you can start with IBM Tivoli Service Automation Manager. This flexible solution supports user-driven service requests and automated resource deployment. The key capabilities include:
Self-service user interface for service requests, for improved responsiveness and efficiency
Workflow support to manage the process for approval of usage
Provisioning – automates provisioning of resources / IT resource deployment for efficient operations and to address fluctuating business requirements
Integration with existing hardware to leverage available resources and previous investment
IBM Service Delivery Manager (ISDM) is a new offering: a pre-configured management solution optimized for managing virtual environments and cloud deployments. Like Tivoli Service Automation Manager, this is also a “software only” offering. In addition to the IBM Tivoli Service Automation Manager features, ISDM includes these additional capabilities:
Pre-integrated solution, delivered as virtual images for faster installation and time to value
Monitoring to provide visibility of the performance of virtual machines
Usage and accounting tracking
Server ready for high availability
Energy management for tracking and optimizing operational costs
IBM CloudBurst, compared to Tivoli Service Automation Manager and ISDM, not only has the software solution optimized for cloud but also ships the integrated hardware. In addition to what is provided by its sibling offerings, IBM CloudBurst provides the following capabilities:
Self-contained solution (managed from and to environment) to accelerate cloud deployments
Pre-integrated solution bundled with HW, SW, storage, network and QuickStart services for the fastest time to value
Thus the three offerings are designed for specific purposes, and selecting the right solution is based on the requirements. You can pick from the following list, and depending on what you need, it is easy to select the solution that meets those requirements.
Automation and Provisioning
Storage, Network Hardware
Quite often people are interested in knowing about IBM WebSphere CloudBurst and how it is different from the three discussed above. While IBM CloudBurst and WebSphere CloudBurst are both appliances that accelerate time-to-value and reduce costs, they are designed for two distinct purposes.
IBM CloudBurst is a general-purpose cloud solution. It enables users to virtualize, deploy, manage, and monitor highly heterogeneous workloads in their private cloud. IBM CloudBurst is a pre-packaged cloud with integrated blades, storage, network switches, and software management. WebSphere CloudBurst is purpose-built to enable users to create, deploy, and manage private clouds created from IBM Hypervisor Edition images and patterns. IBM WebSphere CloudBurst delivers specialized WebSphere knowledge in the form of pre-configured, optimized WebSphere patterns and images. WebSphere CloudBurst is a cloud management device: a 1U appliance that manages a private or on-premise cloud. It requires supporting infrastructure (hypervisors, storage, and networking) and virtual images.
Their integration augments the value of each offering, with IBM CloudBurst enabling end-to-end service request governance for WebSphere CloudBurst provisioning, while users can still leverage a single portal for cloud service requests for rapid and optimized provisioning of virtualized WebSphere systems.
IT Service Management is the integrated management of the people,
processes, technologies and information required to ensure the cost and quality
of IT services valued by the customer. IT Service Management (ITSM) is the
design, creation, implementation, execution and ongoing management of the IT
environment and services that meet the needs of the business and consumers. It includes:
· Management of IT as a business
· Design, implementation, and deployment of IT services
· Delivery of services to IT customers at agreed-to levels of service and price
· Optimization of services through Service Lifecycle Management & Continual Service Improvement
Service management is at the heart of the cloud. Research shows that, on average, 81% of cloud payback is driven by labor savings enabled by service management. As discussed in the previous chapter, cloud computing provides the IT departments of enterprises an opportunity to move towards a service-driven management model. The same engineering discipline that rationalized factory floors and production can be applied to IT services. Cloud computing provides the technical foundations enabling reengineering of the IT service model. But the goals for service management remain the same as when it is applied to traditional IT.
The key objective of the service management system is to provide the
visibility, control and automation needed for efficient cloud delivery in both
public and private implementations.
Visibility: the ability to see everything that’s going on across the infrastructure. This includes visibility into services and enables end users to request services through a self-enablement portal.
Control: the ability to keep the infrastructure in its desired state by enforcing policies. Control enables the fulfillment of user requests based on best practices for request types and conformance to organizational processes.
Automation: the ability to manage huge and growing infrastructures while controlling cost and quality. Automation of service delivery includes automating user requests and operational tasks to improve efficiency and effectiveness.
ITIL is one of the foundations for service management best practices. A key element of ITIL is the service lifecycle and the need for best-practice processes throughout the life of a service. The ITIL Service Lifecycle modules are: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement (CSI).
Cloud services also have a lifecycle that maps to the ITIL service management lifecycle. In the cloud context, service management controls the efficient implementation of new services, integration with the existing portfolio and lifecycle management of standardized IT services. For instance, cloud computing will become a relevant topic in your Service Strategy. You need to see how to leverage the integration of cloud and traditional IT services during Service Design. For Service Operation you need an automated way to deploy your cloud services – automated provisioning and image management. Continual Service Improvement (CSI) requires the capability of managing, monitoring, securing and metering your cloud services.
When discussing IT service lifecycle management, it is good to discuss the standardization step as well. Standardization helps improve overall operations. The more you can standardize, the more you can reduce operating expenses such as labor and downtime – the fastest-growing portion of IT expenditures. Tivoli Service Automation Manager takes care of standardization and best practices in all the steps of the service lifecycle with the capabilities discussed below.
Design and Transition – a Service Template Definition to build service and management plans for services.
Service Offering Creation & Registration – a way to define a service based on a template and register it in the catalog.
Service Offering Subscription & Instantiation – provides a way for users to select the service and specify parameters and SLAs.
The ability to automatically instantiate the service.
Support for autonomic execution of management plans, leveraging automation.
Destroy services and free up resources based on service instance termination requests.
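The lifecycle steps above (template definition, offering registration, subscription/instantiation, termination) can be illustrated with a toy sketch. The class and method names are my own illustrative assumptions, not the Tivoli Service Automation Manager API:

```python
# Toy sketch of the service lifecycle steps described above. All names here
# are illustrative assumptions, not a real product API.
class ServiceCatalog:
    def __init__(self):
        self.offerings = {}   # registered offerings, by name
        self.instances = {}   # running service instances, by id
        self._next_id = 0

    def register_offering(self, name, template):
        # Service Offering Creation & Registration: template into the catalog.
        self.offerings[name] = template

    def subscribe(self, name, **params):
        # Subscription & Instantiation: merge user parameters over the
        # template and automatically instantiate the service.
        instance = dict(self.offerings[name], **params)
        self._next_id += 1
        self.instances[self._next_id] = instance
        return self._next_id

    def terminate(self, instance_id):
        # Destroy the service and free resources on a termination request.
        return self.instances.pop(instance_id)

catalog = ServiceCatalog()
catalog.register_offering("dev-vm", {"cpu": 2, "mem_gb": 4})
iid = catalog.subscribe("dev-vm", mem_gb=8)   # user overrides one parameter
print(catalog.instances[iid])                 # {'cpu': 2, 'mem_gb': 8}
catalog.terminate(iid)
print(catalog.instances)                      # {}
```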
These capabilities of providing visibility, control and automation across the business and IT infrastructure result in the following key benefits:
Integrated processes across the business
More reliable service delivery
Improved efficiency and staff productivity
Reduced operational risk and exposure
We will discuss in detail how you could use IBM CloudBurst, IBM Service Delivery Manager and Tivoli Service Automation Manager for each of these steps in the lifecycle. If you are a developer, the following chapters will help you understand the technologies and skills needed to do the services design, automation and management.
For enterprises, the most attractive factors of cloud are its flexible sourcing options and its choices of deployment. Moreover, the different deployment and delivery models can co-exist, and it is possible to integrate them with traditional IT systems and with other clouds.
Cloud Deployment Models
The key deployment models for cloud are discussed below.
Private cloud refers to IT capabilities provided “as a service” over an intranet, within the enterprise and behind the firewall. It is privately owned and managed, with access limited to the client and its partner network. The private cloud drives efficiency, standardization and best practices while retaining greater customization and control within the organization. In a private cloud environment, all resources are local and dedicated, and all cloud management is local.
Figure 1 Private Cloud
Public cloud refers to IT activities/functions provided “as a service” over the Internet; the cloud is owned and managed by a service provider, and access is by subscription. The public cloud delivers a select set of standardized business process, application and/or infrastructure services on a flexible price-per-use basis. Multi-tenancy is a key characteristic of public clouds.
Figure 2 Public Cloud
Hybrid cloud is a combination of the characteristics of both public and private clouds, where internal and external service delivery methods are integrated. For example, in the case of an off-premise private cloud, resources are dedicated but off-premise. The enterprise administrator can manage the service catalog and policies, while the cloud provider operates and manages the cloud infrastructure.
Figure 3 Off-Premise Private Cloud
Community cloud – This is the model where the cloud
infrastructure is shared by several organizations and supports a specific
community that has shared concerns (e.g., mission, security requirements,
policy, and compliance considerations). It may be managed by the organizations
or a third party and may exist on premise or off premise.
Public vs. Private
Overall, private clouds receive higher consideration than public clouds among most enterprises, but various other models are emerging.
Figure 4 Cloud Deployment Models
We need to balance the business benefits of increased speed and lower cost of public cloud offerings against the security, ownership-of-infrastructure and service management considerations when choosing between a public and a private cloud offering for a capability. The governance model, resiliency, level and source of support, architectural & management control, compliance, and customization/specialization are other considerations.
Public and private clouds are preferred for different workloads. Many enterprises still prefer to host their traditional applications on their private cloud. The top private workloads include:
Data mining, text mining, or other analytics
Data warehouses or data marts
Business continuity and disaster recovery
As and when a workload becomes more standard and the SLAs are well established, the same service becomes easy to consume over a public cloud. This is similar to how you can access well-defined banking functions through ATMs; only when you need some special services do you go to your bank these days. Similarly, the top public workloads include:
Service help desk
Infrastructure for training and demonstration
WAN capacity, VOIP
Test environment infrastructure
Data Centre network capacity
Cloud Delivery Models
All the computing-related functions that clouds provide are accessed through a service catalog and delivered as integrated services. The different layers of IT-as-a-Service are referred to as the cloud delivery models. More details on these definitions can be found at the NIST website, which is the source for some of the text below.
Figure 5 Cloud Deployment Models
Infrastructure as a Service (IaaS) is the service
delivery model where customers use processing (servers), storage, networks and other
computing resources/data center functionality. IaaS has the ability to rapidly and elastically provision and control resources.
In this model customers can deploy and run software and services without the
need to manage or control the underlying resources. The IBM Research Compute
Cloud (RC2) is an example of this model. Smart
Business Desktop on the IBM Cloud is another example of IaaS; it enables
desktop virtualization through a subscription service with no upfront fees or
capital expense. Consider reading about IBM
Cloudburst if you are building your own IaaS platform.
Platform as a Service (PaaS) is the delivery model
where customers can use programming languages, tools and platforms to develop
and deploy applications on multi-tenant shared infrastructure, with the ability to
control deployed applications and environments. Again, all of this can be done without
the need to manage or control the underlying resources. IBM BPM BlueWorks provides
tools to build your own business processes. WebSphere
Cloudburst is also something to look at if you are building a PaaS.
Software as a Service (SaaS) is the popular model
where customers use applications (e.g., CRM, ERP, e-mail) from multiple client
devices through a Web browser, on multi-tenant and shared infrastructure, without
the need to manage or control the underlying resources. An example of this
model is IBM LotusLive.
Business Process as a Service (BPaaS) is an emerging
model where customers consume business outcomes (e.g., payroll processing,
HR) by accessing business services via Web-centric interfaces on multi-tenant
and shared infrastructures. Smart Business Expense Reporting on the IBM
Cloud is one of the offerings in this category.
In the first two parts of this series we have tried
to define the term “cloud computing”. Having understood what it is, let us now look at how and why cloud
computing is gaining importance now.
As the world is becoming more interconnected, infrastructure
needs to become dynamic to bring together business and IT. The growth of
instrumentation, interconnection and intelligence in the world is driving the
emergence of IT and business services and the requirement for service
management systems. To create such a
dynamic infrastructure, customers (businesses) are looking for the following:
- They do not have to worry about the full IT capacity they need at peak time.
- They pay only for what they actually use; they do not have to buy servers or
capacity for maximum use, i.e., they move to a reduced CapEx (capital expense) model,
leveraging the economies of OpEx (operating expense) for IT.
- Allocation and de-allocation of resources happens automatically or semi-automatically on demand.
If you research how a business can address or acquire
the above capabilities, cloud computing seems to hold the key answers
to the above considerations. An effective cloud computing deployment is built
on a Dynamic Infrastructure and is highly optimized to achieve more with less, leveraging
virtualization, standardization and automation to free up budget for new
initiatives. Cloud Computing is a new IT consumption and delivery model for businesses that makes
the above capabilities a reality.
It comprises:
- A Consumption model (a new user experience and a business model): standardized services offerings and ease of access
- A Computing and Delivery model: Integrated Service Management
Progression toward transformation starts with optimizing
existing assets and processes and leverages best-in-class technology at transitions.
Each step balances improvements in efficiency and effectiveness and can be measured
by business returns. However, an organization can move to cloud systematically,
taking one step at a time, or it can move right to a cloud deployment if that
aligns best with its strategic vision for the business.
Readying the infrastructure requires the implementation of a
Dynamic Infrastructure: consolidate your
servers and storage, implement virtualization technologies to increase
utilization, standardize your processes for operational efficiency, automate
procedures for more flexible delivery, and enable clients for
self-service. Then you identify common
workloads and set up shared resources; finally, to achieve a true
cloud-enabled environment, clients must be able to provision the workloads in an
automated, self-service manner. Moving to a cloud consumption and delivery model is a big transformation effort. So
before taking this long journey, it is important to understand the typical use
cases, the workloads that you can move to cloud, and the associated ROI.
Cloud Business Use Cases
One of the
earliest groups to take a step towards identifying some of these use cases is
the Cloud Computing Use Cases Workgroup on Google Groups. This collaborative
effort of cloud consumers and cloud vendors has put out a white paper that
discusses some of the basic definitions. The paper further discusses the
various use case scenarios from a delivery and deployment model perspective. The
white paper is in its fifth iteration, where the group members are now discussing
the what and how of “moving to the cloud”. The current version of the paper can
be found here.
Another effort on the subject of use cases from a business perspective is the “Strengthening your
Business Case for Using Cloud” white paper from The Open Group. I was also one of the key contributors to this
effort. This white paper incorporates a unique collection of cloud business use
cases, findings, and conclusions that can help executives and business process
owners make the appropriate cloud investment decisions. By describing
real-world granular business problems, requirements, and an analysis of the value
and business implications of cloud computing, this paper will equip you
with the necessary business insights to justify your path for using cloud.
A key consideration is that the adoption of cloud computing will be workload
driven. The delivery model (public, private or hybrid) selection
depends on the workload. Research studies by IBM indicate which
types of workloads could be delivered internally with a private
cloud or on a fully shared environment on a public cloud:
database- and application-oriented workloads emerge as most
appropriate for private clouds, whereas infrastructure workloads emerge as most
appropriate for the public cloud.
Most customers want to start with something under their
control and behind their firewalls. So
the tremendous interest today among businesses is for private clouds, in both
large enterprises and the mid-market. There is also great interest in public cloud
services, especially among smaller clients for infrastructure services. As businesses
become more comfortable moving workloads to public clouds, more domain applications
will become available on the cloud. This will also result in a proliferation of
hybrid clouds as businesses integrate their private cloud environments with
public cloud services.
Benefits of Cloud Computing
The analysis of these use cases, as well as what is discussed
in the Open Group white paper, points to the following benefits of using cloud:
- Ability to dynamically source and consume IT services
(infrastructure, platforms, software, and business services) on a demand-use
basis, with an instantly secure and managed service provisioning process
- Ability to move/abstract the service complexity off-premise to provide more
efficient availability, resilience, and security patching
- Agility: the ability to adjust to business requirements and market
forces on demand
- Risk management through improved business resiliency
- Pay-per-use pricing model, eliminating the cost of excess capacity
- Rapid and flexible service for users, enabling self-service
requests and delivering services more rapidly, with fewer errors, and
based on requested qualities of service or SLAs
- Faster time to market and acceleration of innovation projects
- Reduced costs, both capital and operational expenditures
- Freed-up skilled resources to focus on high-value work and innovation
- Significantly improved energy efficiency and reduced idle time
Cloud Deployment and Delivery Models
There are multiple delivery and deployment models that cloud
computing supports to deliver the promised capabilities. This choice and
flexibility of having different deployment and delivery models is key to the
success of the cloud computing platform. Standard cloud service types are
emerging and guiding IT industry development. The different models are:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Software as a Service (SaaS)
- Business Process as a Service (BPaaS)
These multiple deployment and delivery models can co-exist, and it is possible to integrate
them with traditional IT systems and with other clouds. We will discuss them in detail in the
following sections.
Let’s start the first module by trying to understand and
define the term Cloud Computing in detail. It comprises two words – Cloud and Computing. So, simply put, it is computing that you can
offer on the cloud. What’s the Cloud
referred to here? The term "cloud" is used as a metaphor for the
Internet, based on the cloud drawing used in the past to represent the network. The computing could be any goal-oriented
activity requiring, or benefiting from, the use of Information Technology, which
includes hardware and software systems used for a wide range of purposes:
processing, structuring, and managing various kinds of information.
There are several definitions that you can find on the web
for cloud computing.
The National Institute of Standards and Technology (NIST)
Information Technology Laboratory has been promoting the effective and secure
use of cloud computing technology within government and industry by providing
technical guidance and promoting standards.
NIST definition - Cloud computing is a pay-per-use model for enabling
available, convenient, on-demand network access to a shared pool of
configurable computing resources (e.g., networks, servers, storage,
applications, services) that can be rapidly provisioned and released with
minimal management effort or service provider interaction.
Another definition - Cloud computing is Internet-based computing, whereby shared resources,
software, and information are provided to computers and other devices on
demand, like the electricity grid.
Internet-based computing has always been available, so what’s
different now? The difference is that cloud
computing is a paradigm shift: a new consumption and
delivery model inspired by consumer Internet services. Cloud computing is still
an evolving paradigm, but in general most of the companies involved with cloud
have agreed on certain general characteristics or essentials that qualify any
Internet-based computing to be referred to as a cloud. They are the following:
On-demand self-service - A consumer can unilaterally
provision computing capabilities, such as server time and network storage, as needed
without requiring human interaction with each service’s provider.
Ubiquitous network access - Capabilities are
available over the network and accessed through standard mechanisms that
promote use by heterogeneous thin or thick client platforms (e.g., mobile
phones, laptops, and PDAs).
Location independent resource pooling - The
provider’s computing resources are pooled to serve all consumers using a
multi-tenant model, with different physical and virtual resources dynamically
assigned and reassigned according to consumer demand. The customer generally
has no control or knowledge over the exact location of the provided resources.
Examples of resources include storage, processing, memory, network bandwidth,
and virtual machines.
Rapid elasticity - Capabilities can be rapidly and
elastically provisioned to quickly scale up and rapidly released to quickly
scale down. To the consumer, the capabilities available for rent often appear
to be infinite and can be purchased in any quantity at any time.
Pay per use - Capabilities are charged using a
metered, fee-for-service, or advertising based billing model to promote
optimization of resource use. Examples are measuring the storage, bandwidth,
and computing resources consumed and charging for the number of active user
accounts per month. Clouds within an organization accrue cost between business
units and may or may not use actual currency.
The intent of this blog is not to duplicate the content from
other web sites into this article, but to provide a means to navigate through the
variety of resources that are available and take a structured approach to
understanding the term. Once we have
understood this basic definition, let’s look at other resources for further
questions such as:
·Is Cloud Computing the same as hosted services?
·Where can I learn more about Cloud Computing?
·What types of application can run in the Cloud?
Cloud Computing Primer - Part 1 – This
white paper, recommended as one of the resources for the Cloud Computing
Certification, discusses the definition in detail. Beyond the definition, it
discusses the cloud computing context and how it is different from current
hosted services. Virtualization plays a key role in meeting some of the
characteristics of cloud, like elasticity and scalability, workload migration,
and resiliency. The article discusses virtualization and its effect on cloud
computing. The article further tries to bust some common myths about cloud
computing:
·Cloud computing should satisfy all the requirements specified: scalability, on
demand, pay per use, resilience, multitenancy, and workload migration.
·Cloud computing is useful only if you are outsourcing your IT functions to
an external service provider.
·Cloud computing requires virtualization.
·Cloud computing requires you to expose your data to the outside world.
·Networks are essential to cloud computing.
To get an overview, it is best to start with these excellent 3-to-4-minute videos introducing the basics of cloud computing
from Common Craft and rPath – Cloud Computing in
Plain English and Cloud
Computing Plain and Simple. Cloud
Computing Explained is another simple video that explains cloud computing
in a way that everyone can understand! You can find many videos on YouTube if you search for cloud
computing, but the one I liked best is this one, where a dad is explaining Cow
Computing – I mean Cloud Computing – to his daughter. Check it out. SlideShare
is another good place where I found some very interesting
presentations on cloud.
We had our first meeting of the IBM Cloud Certification Study Group yesterday. The objective of the study group is to pass the IBM Certified Solution Advisor Cloud Computing Architecture V1 certification exam. I wanted to thank all the group members who attended and shared their ideas on how to study for the certification exam. We had group members participate from all over the globe: Sweden, India, North America, and Australia. If you couldn't make it, have no worries; we'll arrange another meeting in a couple of weeks. Please feel free to join us.
During our meeting we decided on a "divide and conquer" strategy in our approach to studying for the exam. By this I mean taking advantage of each individual's strengths and sharing them with the group. One group member might be well versed in cloud security and another might be proficient in SaaS. The idea is to get together and share our knowledge.
During our meeting we covered the following:
Key areas of competency for the Cloud Solution Advisor certification
We've recorded our first session and if you'd like to watch the replay, it can be viewed here. PDF presentation files of the meeting are located here. We've also posted a couple of activities to complete prior to our second meeting. Those are located under the activities section of the group. If you'd like to be notified when we add additional activities let me know and I'll add you to the list.
I'm really looking forward to working with the study group and ultimately becoming an IBM Certified Solution Advisor too.
Implementation details of the microservice can be studied in the source code by loading the project into your preferred Java IDE, such as Eclipse.
Before the Microservice can be run inside Docker, the Docker technology must be installed on your local machine. You can follow step-by-step Docker installation procedure at: Docker Installation
Once Docker is installed correctly, you can test your installation using the following command:
docker run hello-world
Create a Microservice Docker Image
In the Docker ecosystem, there are two main concepts to understand.
Docker container: a lightweight instance of a Linux-based OS running on top of your host operating system
Docker image: your application software plus the entire environment it needs, packaged to run inside a container
For the above microservice, the container loads the microservice image; as part of this image it loads not only the application code for the microservice, but also the Java 8 environment needed to run it.
But, before you can load the microservice into Docker, you need to create a Docker image for that software. The steps to create the image are as follows:
1. Create a build directory (hello-microservice-build) next to your microservice project
2. Copy the microservice artifacts (the jar and the yaml configuration) to the build directory
3. Create a Dockerfile in the build directory whose final instruction runs the service: CMD java -jar hello-microservice-1.0-SNAPSHOT.jar server hello-microservice.yaml
From the Docker session, go to the hello-microservice-build directory and issue the command
docker build -t hello-microservice-local .
The Docker build process uses a file named Dockerfile to get its instructions about what to do when building an image. In this particular microservice, the Dockerfile instructs the Docker system to download an image called 'java:8'. This is the core infrastructure needed to run the microservice. Next it adds the microservice jar and configuration to the image. And later, it exposes the ports 9000 and 9001 to service the requests.
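Putting those instructions together, a minimal Dockerfile for this build might look like the sketch below. The base image, exposed ports, and CMD line come from the description above; the /opt/hello-microservice paths and the WORKDIR choice are illustrative assumptions, so adjust them to match your build directory and artifact names.

```dockerfile
# Base image providing the Java 8 runtime the microservice needs
FROM java:8

# Add the microservice jar and its configuration to the image
# (destination directory is a hypothetical choice)
ADD hello-microservice-1.0-SNAPSHOT.jar /opt/hello-microservice/
ADD hello-microservice.yaml /opt/hello-microservice/
WORKDIR /opt/hello-microservice

# Expose the service and admin ports used to serve requests
EXPOSE 9000
EXPOSE 9001

# Start the microservice when the container launches
CMD java -jar hello-microservice-1.0-SNAPSHOT.jar server hello-microservice.yaml
```

Running docker build -t hello-microservice-local . in the directory containing this file produces the image described above.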
The command docker build -t hello-microservice-local . processes the Dockerfile and produces the hello-microservice-local image.
Note: make sure this command is issued from the Docker session and not just any command line session.
Once this Java Microservice Docker image is created, it must be run inside a Docker container using the following command:
docker run -p 9000:9000 --name hello-microservice-local -t hello-microservice-local
With the recent growth of cloud computing technologies, organizations are using cloud service models like infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS), along with cloud deployment models (public, private and hybrid), to deploy their applications.
There is a concept in the cloud world that is based on application characteristics: the concept of cloud-enabled and cloud-centric applications. In this blog post, Dan Boulia provides a concise explanation about the concept.
You can say that a cloud-enabled application is an application that was moved to cloud, but it was originally developed for deployment in a traditional data center. Some characteristics of the application had to be changed or customized for the cloud. On the other hand, a cloud-centric application (also known as cloud-native and cloud-ready) is an application that was developed with the cloud principles of multi-tenancy, elastic scaling and easy integration and administration in its design.
When developing an application that will be deployed in the cloud, you must keep the cloud principles in mind. They should be taken into account as part of the application. So we come to the first point: Is it better to work within an existing application or to completely redesign it? There is no exact answer because it depends. You have to evaluate the level of effort (labor, time and cost) to transform the application into cloud-enabled versus the effort to completely redesign it to a cloud-centric application.
The second point is: Will my cloud-enabled application work better than a new cloud-centric application? Here I would say no. It’s rare to find an existing traditional application that was developed with any of the cloud principles in mind. It may be possible to construct the same feel (for the user) as a cloud-centric application, but it will not function the same way internally.
Changing an existing application could be easier since you already have the skills and tools in the organization and you won’t need to learn any new technology. However, while it may be easier to change the application, in the long term it will be harder to maintain. New technologies (social media, mobile, sensors) continue to appear and it is becoming more important to integrate them. Doing this will require additional and continuous effort and may exponentially increase development and supporting costs.
Now comes the third point: What can you use to help expedite the move or redevelopment of an existing application to a cloud-centric model? Many cloud companies have development tools that can help an organization on this path. For instance, IBM has recently announced IBM Bluemix, a development platform to create cloud-centric applications. Shamim Hossain explains the capabilities in more detail in his blog post. Another option is to use IBM PureApplication System to expedite the development.
I discussed some points here that I hope provide a better understanding of an important concept in cloud computing and how to address it. Let me know your thoughts! Follow me on Twitter @varga_sergio to talk more about it.
Come to the first Cloud Foundry Meetup in the Waltham area this coming Wednesday, December 11th!
This meetup is your opportunity to learn more about Cloud Foundry and meet people excited about the technology.
On the agenda is an Introduction to Cloud Foundry: the technology and the community by Chris Ferris of IBM.
This will be followed by a talk by Renat Khasanshyn of Altoros on Implementing Cloud Foundry 2.0.
More information at: //bit.ly/1azS5PX
Managing software and product lifecycle integration has always been a challenge and with the rate of the new demands on the enterprise the challenges are increasing. Leaders from different standards organizations and industry will lead interactive discussions on the importance of open technologies to help enterprises manage the lifecycle activities within their environments. Learn about the direction lifecycle integration is taking as a result of the inclusion of open standards and the importance of this work to you. You will also hear how you can bring forward your requirements and influence the supporting work activities.
The Open Lifecycle Summit will feature short lightning talks and panel discussions with industry leaders such as OASIS CEO Laurent Liscia, Tasktop CEO Mik Kersten, Opscode VP of Solutions George Moberly, IBM Fellows Michael Kaczmarski and Kevin Stoodley, and IBM VP of Standards and IBM Cloud Labs, Dr. Angel Diaz.
The Summit is free to attend for all those attending IBM Innovate. Join us for an exciting session and refreshments to start your attendance at Innovate 2013. For more information and to RSVP visit http://ibm.co/16jTusU
The challenges of
virtualized environments are driving the shift to greater integration of
service management capabilities such as image and patch management, high-scale
provisioning, monitoring, storage and security. Join us for this webcast to learn how
organizations can realize the full benefits of virtualization to reduce
management costs, decrease deployment time, increase visibility into
performance and maximize utilization.
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes complex to accurately assess or manage. Cost management is crucial to integrate into overall service management, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency: the ability to allocate IT costs, usage, and value.
As a result of feedback from SmartCloud Enterprise customers
and business partners, IBM is rolling out new enhancements this week.*
In addition to the availability of IBM SmartCloud
Application Services, IBM’s platform-as-a-service offering, new and enhanced
capabilities for IBM SmartCloud Enterprise include:
Platinum M2 VM sizes, now generally available
Alternate Windows Instance Capture, now generally available
Windows Import/Copy pre-release, available by request
Windows 2012 pre-release, available to all users
Cloud Services Framework enhancements
APIs for guest messaging, new and available for all users
ISO 27001 Certification for all IBM SCE data centers
Object storage with enhanced portal integration with SCE
All the details of each new capability/enhancement can be
found on the SCE portal in the “What’s
New in SmartCloud Enterprise 2.2” document (SCE account sign-in is required
to review the document), but here are a few highlights:
IBM SmartCloud Application Services (SCAS)
IBM’s platform as a service -- IBM SmartCloud Application
Services -- runs on top of and deploys virtual resources to IBM SmartCloud
Enterprise. SmartCloud Application Services delivers a secure, automated,
cloud-based environment that supports the full lifecycle of accelerated
application development, deployment and delivery. SCAS provides an
enterprise-class infrastructure, enhanced security and pay-per-use pricing, and allows
clients to differentiate themselves with built-in flexible options to
configure the cloud their way – leading to a competitive advantage.
You can find the SmartCloud Application Services offering on
the “Service Instance” tab within your SmartCloud Enterprise account.
As a direct result of client requests, we are offering
additional flexibility and choice in Windows instance capture. Clients can now use
the “Save private image” function with or without the use of Sysprep, the
Microsoft System Preparation tool.
We invite you to learn more about all of these enhancements
via the documentation library in the SCE portal and welcome your feedback.
Thank you for your continued support!
* IBM will roll out these new
capabilities in waves beginning mid-December 2012. IBM’s platform as a service offering, IBM
SmartCloud Application Services, can be found in the “Service Instance” tab
within your SmartCloud Enterprise account.
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
Here’s how IBM is addressing DevOps, with the launch of SmartCloud Continuous Delivery--an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. Learn more: http://ibm.co/UeAl0B
The challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity. A critical piece to solving these challenges, as many organizations have already discovered, is image management. Read more: http://ibm.co/SpHTlV