One of the important things to decide when you discuss Cloud Service Strategy and Design is the consideration for a Reference Architecture. This is something that is useful to align to, as it represents the blueprint for your cloud and reduces implementation risk. The Cloud Computing Reference Architecture (RA) is intended to be used as a blueprint / guide for architecting cloud implementations, driven by the functional and non-functional requirements of the respective cloud implementation. The RA defines the basic building blocks - the architectural elements and their relationships which make up the cloud. The RA also defines the basic principles which are fundamental for delivering and managing cloud services.
A reference architecture is more than just a collection of technologies and products. It consists of several architectural models and is much like a city plan. The RA defines how your cloud platform should be constructed so that it can satisfy not only your current demands but also be extensible to support the future needs of a diverse user population. This blueprint should be responsive to changing business and technology requirements and adaptable to emerging technologies. Existing "legacy" products and technologies, as well as new cloud technologies, can be mapped onto the Architecture Overview Diagram (AOD) to show the integration points among the new cloud technologies and between the cloud technologies and the existing ones. By delivering best practices in a standardized, methodical way, an RA ensures consistency and quality across development and delivery.
The IBM Cloud Computing RA is structured in a modular fashion around the functional capabilities (architectural elements), the user roles (that we discussed in Chapter 12) and their corresponding interactions. The IBM CCRA is based on several cloud engagements and incorporates the good practices and methods implemented across these projects, so for an end user adopting these good practices, the risk and cost of implementing their cloud will be low. The CCRA is built on the ELEG (Efficiency, Lightweightness, Economies-of-scale, Genericity) principles.
One of the principles that I want to highlight here is the Genericity Principle - the capability to define and manage services generically along the lifecycle of cloud services: be generic across I/P/S/BPaaS and provide an 'exploitation' mechanism to support various cloud services using a shared, common management platform ("Genericity").
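To make the Genericity principle concrete, here is a minimal illustrative sketch in Python. It is not CCRA code; the class and method names are my own, and the service types are reduced to toy print statements. The point is only that one shared management platform can drive the same lifecycle contract across service types:

```python
from abc import ABC, abstractmethod

class CloudService(ABC):
    """Generic lifecycle contract shared across I/P/S/BPaaS services."""

    def __init__(self, name: str):
        self.name = name

    @abstractmethod
    def provision(self) -> None:
        ...

    @abstractmethod
    def deprovision(self) -> None:
        ...

class IaaSService(CloudService):
    def provision(self) -> None:
        print(f"allocating VMs and storage for {self.name}")

    def deprovision(self) -> None:
        print(f"releasing infrastructure for {self.name}")

class SaaSService(CloudService):
    def provision(self) -> None:
        print(f"creating application tenant for {self.name}")

    def deprovision(self) -> None:
        print(f"offboarding tenant for {self.name}")

class ManagementPlatform:
    """One shared platform drives the lifecycle of any service type."""

    def onboard(self, service: CloudService) -> None:
        service.provision()      # same call path for every service type

    def retire(self, service: CloudService) -> None:
        service.deprovision()

platform = ManagementPlatform()
for svc in (IaaSService("dev-vms"), SaaSService("crm")):
    platform.onboard(svc)
```

The design point is that the platform layer never branches on the service type; the genericity lives in the shared interface.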
As discussed in the cloud delivery and deployment models (Chapter 3), there can be many models for the deployment and delivery of cloud services. As we know, a Cloud
Service can represent any type of (IT) capability which is provided by the Cloud
Service Provider to Cloud Service Consumers - Infrastructure, Platform,
Software or Business Process Services. The beauty and significance of the IBM
Cloud Computing Reference Architecture is that it can cater to any of these
service delivery and deployment models. So whether you are building your private cloud or a public cloud, or using cloud to deliver IaaS, PaaS or SaaS, the RA remains the same and handles all of these combinations. We have seen the capabilities that we need (Chapter 6) for implementing a common cloud management platform. IBM has recently submitted the IBM
Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) to the Cloud
Architecture Project of the Open Group,
a document based on “real-world input from many cloud implementations across
IBM" meant to provide guidelines for creating a cloud environment. Check out this link, which has an interview with Heather Kreger, one of the authors of the Cloud Computing Reference Architecture, as well as the details of the components that make up the RA. On the topic, there is also an article that I found in the SYS-CON Cloud Computing Journal comparing the Reference Architectures of the Big Three (IBM, HP and Microsoft), which is an interesting read. Now, before we get into the details of the Service Implementation / Transition phase, it is important that we understand the bigger picture. The Word document IBM Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) provides a great description of this bigger picture, going into the details as required. The
architectural principles define the fundamental principles which need to be
followed when realizing a cloud across all implementation stages (architecture,
design, and implementation). This is a must read for all - development teams
implementing the cloud delivery & management capabilities as well as
practitioners implementing private clouds for customers.
In my previous post, we looked at understanding the different adoption patterns - i.e. how customers are turning towards cloud. Some of the key reasons behind the "why" are listed below:
- Ease of deployment
- More flexibility in supporting evolving business needs (both from a technical and a business perspective)
- Lower cost of ownership
- Easier way to scale and ensure availability and performance
- Overall ease of use
While all of these are good, there are still many yet to get on to this cloud computing train. Let's explore their key concerns and the challenges that make them reluctant to jump in. The following are inputs that I've gathered from various analyst studies and resources on the internet.
Security and Privacy - The topmost concern that everybody seems to agree on as a challenge with cloud is security. Data security and privacy concerns rank at the top of almost all of the surveys. Cloud computing introduces another level of risk because essential services are often outsourced to a third party, making it harder to maintain data integrity and privacy, support data and service availability, and demonstrate compliance.
Real Benefits / Business Outcome - Though we have several case studies showcasing the benefits arising from implementing cloud technologies, some customers are still not convinced of the possible benefits. Their main concern is how to realize the investment to its full potential and make cloud part of their mainstream IT portfolio. Enterprises need a good view into the real benefits of cloud computing rather than just the potential of cloud computing to add value. The return on investment (ROI) of cloud needs to be substantiated by comparing specific metrics of traditional IT with cloud computing solutions that can show savings demonstrating cost, time, quality, compliance, revenue and profitability improvements. The cloud ROI model should include things such as indicators comparing availability and performance versus recovery SLAs, workload-wise assessments, Capex versus Opex cost benefits, etc. (a toy cost comparison follows after this list of concerns).
Service Quality - Service quality is one of the biggest factors that enterprises cite as a reason for not moving their business applications to cloud. They feel that the SLAs provided by cloud providers today are not sufficient to guarantee the requirements for running production applications on cloud, especially those related to availability, performance and scalability. In most cases, enterprises get refunded for the amount of time the service was down, but most current SLAs do not cover business loss (a back-of-the-envelope sketch of this gap also follows the list). Without proper service quality guarantees, enterprises are not going to host their business critical infrastructure in the cloud.
Performance / Insufficient responsiveness over the network - Delivery of complex services through the network is clearly impossible if the network bandwidth is not adequate. Many businesses are waiting for improved bandwidth and lower costs before they consider moving into the cloud. Many cloud applications are still too bandwidth intensive.
Integration - Many applications have complex integration needs, to connect to other cloud applications as well as to on-premise applications. These include integrating cloud applications with existing enterprise applications and data structures. There is a need to connect the cloud application with the rest of the enterprise in a simple, quick and cost effective way.
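To make the Capex-versus-Opex comparison from the ROI point above concrete, here is a toy sketch in Python. Every number in it is invented purely for illustration; it is not from any study or offering, and a real ROI model would add the other indicators mentioned above:

```python
# All figures are hypothetical placeholders, purely for illustration.
YEARS = 3
CAPEX_SERVERS = 120_000        # upfront hardware purchase
ONPREM_OPS_PER_YEAR = 30_000   # power, admin, maintenance
CLOUD_COST_PER_YEAR = 55_000   # metered pay-per-use estimate

traditional_tco = CAPEX_SERVERS + ONPREM_OPS_PER_YEAR * YEARS
cloud_tco = CLOUD_COST_PER_YEAR * YEARS

print(f"{YEARS}-year traditional TCO: ${traditional_tco:,}")
print(f"{YEARS}-year cloud TCO:       ${cloud_tco:,}")
print(f"Difference:                   ${traditional_tco - cloud_tco:,}")
```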
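And for the service-quality point, a back-of-the-envelope sketch of why a pro-rated downtime credit rarely matches the business loss. All figures are hypothetical assumptions, not any provider's actual SLA terms:

```python
HOURS_PER_MONTH = 730.0

def downtime_hours(availability_pct: float) -> float:
    """Monthly downtime permitted by an availability SLA."""
    return HOURS_PER_MONTH * (1 - availability_pct / 100)

monthly_fee = 10_000.0        # hypothetical subscription fee
revenue_per_hour = 5_000.0    # hypothetical revenue at risk during an outage

down = downtime_hours(99.5)   # about 3.65 hours per month
credit = monthly_fee * (down / HOURS_PER_MONTH)   # typical pro-rated refund
loss = revenue_per_hour * down                    # uncovered business loss

print(f"Allowed downtime: {down:.2f} h/month")
print(f"Service credit:   ${credit:,.2f}")
print(f"Business loss:    ${loss:,.2f}")
```

With these assumptions the credit comes to about $50 while the business loss exceeds $18,000, which is exactly the gap enterprises worry about.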
I plan to discuss the perceived and real threats related to Security and Privacy in my subsequent posts. In my new role as an Architect for IBM Security Solutions, I'd like to discuss the details of what IBM tools and technologies you could use to overcome these issues. Meanwhile, keep those comments coming; I look forward to them to understand what other areas you think are key concerns to be addressed to accelerate the adoption of cloud.
For enterprises, the most attractive factor of cloud is its flexible sourcing options and the choice of deployment. Again, the different deployment and delivery models can co-exist, and it is possible to integrate with traditional IT systems and with other clouds.
Cloud Deployment Models
The key deployment models for cloud are discussed below.
Private Cloud refers to IT capabilities that are provided "as a service" over an intranet, within the enterprise and behind the firewall. It is privately owned and managed, with access limited to the client and its partner network. The private cloud drives efficiency, standardization and best practices while retaining greater customization and control within the organization. In a private cloud environment, all resources are local and dedicated, and all cloud management is local.
Figure 1 Private Cloud
Public Cloud refers to IT activities / functions that are provided "as a service" over the Internet, owned and managed by a service provider. In a public cloud, access is by subscription. The public cloud delivers a select set of standardized business process, application and/or infrastructure services on a flexible price-per-use basis. Multi-tenancy is a key characteristic of public clouds.
Figure 2 Public Cloud
Hybrid cloud is a combination of the characteristics of both public and private clouds, where internal and external service delivery methods are integrated. For example, in the case of an Off-Premise Private Cloud, resources are dedicated, but off-premise. The enterprise administrator can manage the service catalog and policies, while the cloud provider operates and manages the cloud infrastructure and services.
Figure 3 Off-Premise Private Cloud
Community cloud – This is the model where the cloud
infrastructure is shared by several organizations and supports a specific
community that has shared concerns (e.g., mission, security requirements,
policy, and compliance considerations). It may be managed by the organizations
or a third party and may exist on premise or off premise.
Public vs. Private
Overall, private clouds receive higher levels of consideration than public clouds among most enterprises, but various other models are emerging.
Figure 4 Cloud Deployment Models
We need to balance the business benefits of the increased speed and lower cost of public cloud offerings against the security, ownership of infrastructure and service management considerations when choosing between a public and a private cloud offering for a capability. The governance model, resiliency, level and source of support, architectural and management control, compliance, customization / specialization, etc. are other considerations.
Public and Private Clouds are preferred for different workloads. Many enterprises still prefer to host their traditional applications in their private cloud. The top private workloads include:
- Data mining, text mining, or other analytics
- Data warehouses or data marts
- Business continuity and disaster recovery
As and when a workload becomes more standard and the SLAs are well established, the same service becomes easy to consume over a public cloud. This is similar to how you can access well defined banking functions through ATMs; only when you need some special services do you go to your bank these days. Similarly, the top public workloads include:
- Service help desk
- Infrastructure for training and demonstration
- WAN capacity, VoIP
- Test environment infrastructure
- Data centre network capacity
Cloud Delivery Models
All the computing related functions that clouds provide are accessed through a service catalog and delivered as integrated services. The different layers of IT-as-a-Service are referred to as the Cloud Delivery Models (often called service models). More details on these definitions can be found on the NIST website, which is the source for some of the text below.
Figure 5 Cloud Delivery Models
Infrastructure as a Service (IaaS) is the service delivery model where customers use processing (servers), storage, networks and other computing resources / data center functionality. IaaS has the ability to rapidly and elastically provision and control resources. In this model customers can deploy and run software and services without the need to manage or control the underlying resources. The IBM Research Compute Cloud (RC2) is an example of this model. Smart Business Desktop on the IBM Cloud is another example of IaaS; it enables desktop virtualization as a subscription service with no upfront fees or capital expense. Consider reading about IBM CloudBurst if you are building your own IaaS platform.
Platform as a Service (PaaS) is the delivery model where customers use programming languages, tools and platforms to develop and deploy applications on multi-tenant shared infrastructure, with the ability to control the deployed applications and environments. All of this again can be done without the need to manage or control the underlying resources. IBM BPM BlueWorks provides tools to build your own business processes. WebSphere CloudBurst is also something to look at if you are building a PaaS platform.
Software as a Service (SaaS) is the popular model where customers use applications (e.g., CRM, ERP, e-mail) from multiple client devices through a Web browser, on multi-tenant and shared infrastructure, without the need to manage or control the underlying resources. An example of this model is IBM LotusLive.
Business Process as a Service (BPaaS) is an emerging model where customers consume business outcomes (e.g., payroll processing, HR) by accessing business services via Web-centric interfaces on multi-tenant and shared infrastructures. Smart Business Expense Reporting on the IBM Cloud is one of the offerings in this category.
Let's start the first module by trying to understand and define the term Cloud Computing in detail. It comprises two words - Cloud and Computing. So, simply put, it is computing that you can offer on the cloud. What's the Cloud referred to here? The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the network. The computing could be any goal-oriented activity requiring, or benefiting from, the use of Information Technology, which includes hardware and software systems used for a wide range of purposes: processing, structuring, and managing various kinds of information.
There are several definitions that you can find on the web
for cloud computing.
National Institute of Standards and Technology (NIST),
Information Technology Laboratory has been promoting the effective and secure
use of cloud computing technology within government and industry by providing
technical guidance and promoting standards.
Definition - Cloud computing is a pay-per-use model for enabling
available, convenient, on-demand network access to a shared pool of
configurable computing resources (e.g., networks, servers, storage,
applications, services) that can be rapidly provisioned and released with
minimal management effort or service provider interaction.
- Cloud computing is Internet-based computing, whereby shared resources,
software, and information are provided to computers and other devices on
demand, like the electricity grid.
Internet-based computing was always available. So what's different now? The difference is that cloud computing is a paradigm shift. Cloud computing is a new consumption and delivery model inspired by consumer internet services. Cloud computing is still an evolving paradigm, but in general most of the companies involved with cloud have agreed on certain general characteristics or essentials that qualify any internet-based computing to be referred to as a cloud. They are the following:
On-demand self-service - A consumer can unilaterally
provision computing capabilities, such as server time and network storage, as needed
without requiring human interaction with each service’s provider.
Ubiquitous network access - Capabilities are
available over the network and accessed through standard mechanisms that
promote use by heterogeneous thin or thick client platforms (e.g., mobile
phones, laptops, and PDAs).
Location independent resource pooling - The
provider’s computing resources are pooled to serve all consumers using a
multi-tenant model, with different physical and virtual resources dynamically
assigned and reassigned according to consumer demand. The customer generally
has no control or knowledge over the exact location of the provided resources.
Examples of resources include storage, processing, memory, network bandwidth,
and virtual machines.
Rapid elasticity - Capabilities can be rapidly and
elastically provisioned to quickly scale up and rapidly released to quickly
scale down. To the consumer, the capabilities available for rent often appear
to be infinite and can be purchased in any quantity at any time.
Pay per use - Capabilities are charged using a
metered, fee-for-service, or advertising based billing model to promote
optimization of resource use. Examples are measuring the storage, bandwidth,
and computing resources consumed and charging for the number of active user
accounts per month. Clouds within an organization accrue cost between business
units and may or may not use actual currency.
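A minimal sketch of this metered, pay-per-use billing idea, using the kinds of meters just mentioned (storage, bandwidth, compute, active users). The unit rates and meter names below are invented for illustration, not any provider's price list:

```python
# Hypothetical unit rates for metered billing.
RATES = {
    "storage_gb_months": 0.10,
    "bandwidth_gb": 0.08,
    "cpu_hours": 0.05,
    "active_users": 2.00,
}

def monthly_charge(usage: dict) -> float:
    """Charge only for what was actually consumed in the period."""
    return sum(RATES[meter] * amount for meter, amount in usage.items())

bill = monthly_charge({
    "storage_gb_months": 500,
    "bandwidth_gb": 1200,
    "cpu_hours": 2000,
    "active_users": 40,
})
print(f"Monthly charge: ${bill:,.2f}")   # 50 + 96 + 100 + 80 = $326.00
```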
The intent of this blog is not to duplicate the content from other web sites into this article, but to provide a means to navigate through the variety of resources that are available and take a structured approach to understanding the term. Once we have understood this basic definition, let's look at other resources for further reading:
· Is Cloud Computing the same as virtualization?
· Where can I learn more about Cloud Computing?
· What types of application can run in the Cloud?
Cloud Computing Primer - Part 1 - This white paper, recommended as one of the resources for the Cloud Computing Certification, discusses the definition in detail. Beyond the definition, it discusses the cloud computing context and how it is different from current hosted services. Virtualization plays a key role in meeting some of the characteristics of cloud, like elasticity and scalability, workload migration, and resiliency. The article discusses virtualization and its effect on cloud computing. The article further tries to bust some common myths about cloud computing:
- Cloud computing should satisfy all the requirements specified: scalability, on demand, pay per use, resilience, multitenancy, and workload migration.
- Cloud computing is useful only if you are outsourcing your IT functions to an external service provider.
- Cloud computing requires virtualization.
- Cloud computing requires you to expose your data to the outside world.
- Networks are essential to cloud computing.
To get an overview, the best way is to start with these excellent 3 to 4 minute videos introducing the basics of cloud computing from Common Craft and rPath - Cloud Computing in Plain English and Cloud Computing Plain and Simple. Cloud Computing Explained is another simple video that explains Cloud Computing in a way that everyone can understand! You can find many videos on YouTube if you search for cloud computing, but the one I liked best is this one where a Dad is explaining Cow Computing - I mean Cloud Computing - to his daughter. Check it out. SlideShare is another good place where I found some very interesting presentations on cloud.
If you haven't signed up yet, be sure to check out the October cloud computing for developers virtual event. Participants in this two-day event will learn how to leverage the power of the cloud to tackle the toughest business and technical challenges! This two-day event will be packed with real-world examples and live demos of techniques and products - and you'll see it all without leaving your desk. It's going to be exciting to have you all there with us, getting smarter and learning new technical skills to prepare us all for a smarter planet.
Here's some of what's planned for the event. Remember that you can ask as many questions as you wish of our team of experts about any of our sessions.
IBM technical experts will kick off the event on day 1 with a session on the IBM development and test cloud and you'll see the cloud in action in a live demo. Our experts will discuss use cases and scenarios that will help you as you develop and test in the cloud.
Next we'll discuss a roadmap on how you and IBM can move your application to pattern-based middleware and why infrastructure-as-a-service alone is not enough to reduce implementation challenges when making the move to software-as-a-service.
Then you will learn how IBM's new Cast Iron Cloud Integration Platform has helped hundreds of customers just like you connect their cloud and on-premise applications in just days with its 'configuration, not coding' approach. You will see an engaging live ERP to cloud CRM demo.
The final day 1 session will demonstrate how to efficiently package middleware and/or applications so that they can be easily deployed into dynamic "cloudified" IT infrastructure. Techniques addressed in this session will include Anatomy of an Open Virtual Appliance, OVA repository and lifecycle, single and multi-image OVAs, best practices and examples of OVF.
That's not all folks; remember we have a full set of sessions on the 2nd day too. Remember, you'll have to register separately for day 2.
We'll start the day off showing you how solutions such as eXtreme Scale can scale the database layer. And you'll learn how eXtreme Scale and the XC10 help with solution-wide HTTP session management, and about the WebSphere Application Server dynamic cache service for page fragments.
Ever wondered why iSeries may be an ideal platform for cloud computing? The next session will show you how iSeries has been architected for applications that can be delivered in a hosted or SaaS environment, drilling down into the capabilities that make IBM iSeries well suited for SaaS.
I'm sure you will not want to leave before you hear best practices for designing databases for multitenancy and resiliency which is the topic of the next session. Learn about use cases of AWS and DB2 instances, database schemas as well as a demonstration of setting up HADR in the cloud.
We'll wrap up with a final session examining some technical considerations associated with building a secure application in a cloud environment and then discuss how they can be addressed with IBM products including DataPower, TFIM, TSIEM and TSPM.
We are giving you a choice. Choose the 2-day event best suited to you depending on where you are in the world. Both events will have very similar sessions. Register for the event that is best timed for North American (October 12-13) or European (October 26-27) time zones.
We had our first meeting of the IBM Cloud Certification Study Group yesterday. The objective of the study group is to pass the IBM Certified Solution Advisor - Cloud Computing Architecture V1 certification exam. I want to thank all the group members who attended and shared their ideas on how to study for the certification exam. We had group members participate from all over the globe - from Sweden, India, North America and Australia. If you couldn't make it, have no worries; we'll arrange another meeting in a couple of weeks' time. Please feel free to join us.
During our meeting we decided on a "divide and conquer" strategy in our approach to studying for the exam. By this I mean taking advantage of each individual's strengths and sharing them with the group. One group member might be well versed in Cloud Security and another might be proficient in SaaS. The idea is to get together and share our knowledge.
During our meeting we covered the following:
Key areas of competency for the Cloud Solution Advisor certification
We've recorded our first session and if you'd like to watch the replay, it can be viewed here. PDF presentation files of the meeting are located here. We've also posted a couple of activities to complete prior to our second meeting. Those are located under the activities section of the group. If you'd like to be notified when we add additional activities let me know and I'll add you to the list.
I'm really looking forward to working with the study group and ultimately becoming an IBM Certified Solution Advisor too.
Once you have installed and set up your management platform, you are ready to start designing and delivering cloud services.
SOA & Cloud
We use the same principles of Service Oriented Modeling and Architecture (SOMA), which links business intent with its realization through IT, for cloud services modeling as well. In SOA, we use business process models to understand a series of sequentially organized business activities, the events that trigger them, the roles that perform them, inputs, outputs, control points, etc. As discussed in the Service Strategy section, we look to design Cloud Services that are better aligned to business requirements.
As in SOA, for service identification and design one could take any of the following approaches:
- Top-down
- Bottom-up
- Meet in the Middle
In a top-down approach, development generally starts with high-level business and structural modeling of the service. Then you also define the management processes that are required for the service to be in operation. The top-down approach is further characterized in that no, or only a few, automation or fulfillment assets exist when starting the solution design. The design and implementation of those assets, including their interfaces and granularity, will be driven primarily by the high-level automation model. The advantage of the top-down approach is a clear design of the service to be automated, including the structural and operational model.
The bottom-up approach is usually characterized by a large number of automation assets that already exist, perhaps in the form of many scripts or workflows. In the bottom-up approach, we take these low level assets and abstract them as a cloud service.
Practically, we might go with a combination of both approaches mentioned above, as the meet-in-the-middle approach.
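As a rough sketch of the bottom-up path described above (the script path and parameters are hypothetical, purely for illustration), an existing automation asset can be abstracted behind a service facade that a catalog entry can then point to:

```python
import subprocess

class ProvisionVMService:
    """Bottom-up: wrap an existing low-level script as a cloud service."""

    SCRIPT = "/opt/legacy/provision_vm.sh"   # hypothetical existing asset

    def execute(self, cpu: int, mem_gb: int) -> bool:
        # Consumers see a service interface; the script stays hidden.
        result = subprocess.run([self.SCRIPT, str(cpu), str(mem_gb)])
        return result.returncode == 0

# A service catalog entry would now reference the service, not the script:
# ok = ProvisionVMService().execute(cpu=2, mem_gb=4)
```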
We model the service so we can learn, capture, and abstract details about "things," their structures, the relationships between them and, often, their behaviors (collaborations, states). All the factors that we consider while modeling a service in SOA are very much applicable to a cloud service too. These include, but are not limited to, the Portfolio (in the case of cloud, often referred to as the service catalog).
Now let's discuss the same from the Service Management / ITIL perspective. Cloud services have a lifecycle that maps to the ITIL service lifecycle. The Service Design phase includes the service definition, creation of the service and registering it in a catalog. We will look at how these can be done using Tivoli Service Automation Manager in the next chapter.
Service Design is a critical step that ensures service delivery with agreed and well understood qualities:
- expenses follow the level of value creation
- investments follow business demand and revenue generation
As discussed in Chapter 5, IBM Integrated Service
Management provides the software, systems, best practices and expertise
needed to manage infrastructure, people and processes—across the entire service
chain—in the data center, across design and delivery, and tailored for specific
industry requirements. The Service Management goals are the following:
- Visibility - The ability to see everything that's going on across the infrastructure.
- Control - The ability to keep the infrastructure in its desired state by enforcing policies.
- Automation - The ability to manage huge and growing infrastructures while controlling cost and quality.
These principles and goals are the same for Cloud Service Management as well. End-to-end Service Management includes all of these capabilities.
Cloud Maturity and Readiness
Cloud Service Strategy is mainly about deciding what services we want to deliver and how we ensure the competitiveness of providing them through cloud. Today's clients are seeking to utilize their assets to enable business innovation. The service strategy is all about choosing from across multiple compute / deployment models. We need to assess the current IT infrastructure and identify and evaluate the set of capabilities for their readiness to move to cloud.
Selecting between the Cloud Deployment Models
For mission critical workloads that drive business innovation, a private cloud is preferred. For secondary workloads and supporting business functions, a public cloud is suitable. While the public cloud delivers a select set of standardized business process, application and/or infrastructure services on a flexible price-per-use basis focused on utility, the private cloud drives efficiency, standardization and best practices while retaining greater customization and control, with a focus on innovation.
When doing Service Strategy, you need to consider expertise across industries and standards. At this Service Strategy phase, we normally consider reusing / leveraging solutions based on industry best practices, including ITIL.
Calculating the ROI
Cloud Computing ROI is an important consideration / step during the Service Strategy phase. This includes verifying the following fundamental aspects related to making a service available on the cloud:
- Service Level Agreements (SLAs)
- Regulatory / Compliance Requirements
There are several ROI frameworks and methods available that allow you to validate the approach / strategy against these fundamental aspects. Most service companies have their own frameworks, which are typically the intellectual capital of their service teams.
Choosing the right Delivery Models and Workloads
Based on the Enterprise Architecture approach, we need to choose from the many available options of delivery models and workloads. This includes the services and consulting engagement to obtain clarity on the business drivers (business vision, strategy, timeline, business model, and business operating model) and how they can leverage technology and value enablers from cloud computing. Then, in this cycle, you also need to identify the right set of workloads to move to cloud - those that fetch the maximum benefits from cloud computing. The flexibility that the business operating model gains to innovate on the business model is another key consideration. This could be an iterative effort of identifying candidates and then slowly moving them to production.
- Innovation - Dramatically improve business value and IT's effect on time-to-market by enabling business workloads to be rapidly and accurately deployed on multiple platforms when and where they are needed.
- Reduced operational expenses - Gain productivity increases in IT labor costs through automation of rapid provisioning.
Have you checked out the features in the new release of the IBM Smart Business Development and Test on the IBM Cloud? Well you should. Version 1.1 provides support for Virtual Private Networks and Virtual Local Area Networks plus new premium support services are now available. I've heard from my tweeps on Twitter that the new release rocks so had to share the news with all of you in our very cool developer community.
Okay so if you want to realize faster application deployment with reduced costs, you have to check out the IBM Cloud. You virtually have no infrastructure to maintain and benefit from pay-as-you-go pricing. And, you can set up more accurate test environments in minutes versus weeks using standardized configurations. Sound irresistible?
So you ask, what does this new release really mean for me as a developer? Well here's a quick summary of what Version 1.1 has to offer:
Because security is a top priority, you can now use a VPN to access your machine instances on the IBM Cloud, providing virtual network isolation of your instances. Each VPN service consists of a private virtual LAN (VLAN) in an IBM Cloud Center of your choice, plus a VPN gateway for accessing that VLAN. Pretty cool!
In addition, the VPN option allows isolation of your development and test environment on the IBM Cloud on a VLAN that only you can access. Plus your instance is not accessible from the Internet or from other instances unless you have provisioned them to use your private VLAN. Very secure.
New premium support services have been added. On top of the existing tech support, you may also purchase premium levels of support that include around-the-clock telephone support and a web-based ticketing system to submit and review service requests plus remote technical support to assist you in the use of the Cloud web portal, access to services, instance creation, and image management functions within the portal. And you have the ability to add Linux operating support for Linux OS provisioned through the Cloud web portal, including support for virtual machine instances. This is really awesome.
A cloud is not a cloud if it is not elastic. The elastic property of the cloud - to expand and shrink based on demand - is possible only with proper capacity planning. I feel the most difficult exercise while putting together a cloud solution is capacity planning for your cloud. By this, I mean you have to size the managed environment as well as the management platform. Most of the engagements that I've walked into have some capacity or infrastructure that the client wants us to leverage and use in the cloud. So the comparison becomes difficult if you don't have a standard measuring unit for your infrastructure - for instance, how do you know how a quad-core on an Intel platform compares to a POWER7 core? I found a good explanation in this guide, in this interesting article.
The answer to the difficult question was to use something called the cloud CPU unit, which is nothing but the computing power equal to the processing power of a one gigahertz CPU. When a user requests two CPUs, for example, they will get the processing power of two 1 GHz CPUs. This means that a system with two CPUs, each with four cores, running at 3 GHz, will have the equivalent of 24 CPU units (2 CPUs x 4 cores x 3 GHz = 24 CPU units).
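The rule generalizes to a one-line helper; this is simply my own restatement of the arithmetic quoted above:

```python
def cloud_cpu_units(cpus: int, cores_per_cpu: int, ghz_per_core: float) -> float:
    """CPU units = total cores x clock speed, normalized to a 1 GHz CPU."""
    return cpus * cores_per_cpu * ghz_per_core

# The example from the article: 2 CPUs x 4 cores x 3 GHz = 24 CPU units.
print(cloud_cpu_units(2, 4, 3.0))   # 24.0
```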
The other dimension of the complexity is determining the resource needs and doing the trending and forecasting. I typically collect the projections from the clients and then put down some critical assumptions to determine how big my cloud should be. Some critical questions that I typically ask include:
- How many concurrent users and peak users, and what percentage of these users needs to be covered?
- What type of workloads do they typically run - development, test?
- What are the image attributes - memory, CPU, storage, etc.?
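Answers to those questions feed a first-cut sizing of the managed environment. A hedged sketch follows, where every projection is a placeholder to be replaced with the client's own numbers:

```python
import math

# Placeholder projections gathered from the client.
total_users = 500
peak_concurrency = 0.30     # 30% of users active at peak
coverage = 0.95             # fraction of peak demand we agree to cover
per_image = {"cpu_units": 2, "mem_gb": 4, "storage_gb": 60}

images_at_peak = math.ceil(total_users * peak_concurrency * coverage)
capacity = {k: v * images_at_peak for k, v in per_image.items()}

print(f"Images to host at peak: {images_at_peak}")
print(f"Managed environment capacity: {capacity}")
```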
An infrastructure planner for cloud made life easy for me: it had a user friendly interface to take me through these steps and arrive at a sizing for the managed environment. Once we know the managed environment, we can size the management platform. I'll discuss the details of how to plan the managed environment in my next post.
I'm interested in putting together the top 10 parameters that are critical for sizing the cloud managed and management environments. I look forward to your comments.
The IBM Tech Trends report is out! We asked, you answered. Check out the results of IBM developerWorks' 2011 Tech Trends survey and find out what more than 4,000 IT professionals -- your peers -- have to say about the future of technology, including their opinions on cloud computing, business analytics, mobile computing, and social business.
The report provides insight from the worldwide IT development community into the adoption, preferences and challenges of key enterprise technology trends including cloud, business analytics, mobile computing, and social business. The results also provide guidance on areas where IT professionals like you say they need help with skills to develop new technologies and platforms that will be in demand in the coming years.
As we focus in on cloud, there is absolutely a growing trend in cloud computing to view it as more than just cheap infrastructure. Companies are now exploring the possibility of developing applications in the cloud (you guys are already doing that) many of them related to mobile development.
Currently the biggest challenge is integrating the cloud into application development, with the reduction of operating expenses being the driver of this move. We still have a way to go, however, with 40% of the survey responders saying their company is not yet involved in cloud. Hmm, interesting, right?
The cool news is that those same responders expect this to change: 75% of the IT professionals responded that, over the next two years, theirs and other enterprises will take to building cloud infrastructure.
Chapter 6 - Multiple Entry Points to Deploy and Manage Cloud Based Services
Cloud Service Management capabilities are needed to enable visibility, control and automation of cloud services. IBM provides the following open-standards-based integrated capabilities - hardware, software and services optimized for cloud - to implement service management for the cloud.
If you are looking for an a la carte software offering / solution for maximum flexibility, you can start with IBM Tivoli Service Automation Manager. This flexible solution supports user driven service requests and automated resource deployment. The key capabilities include:
- Self service user interface for service requests, for improved responsiveness and efficiency
- Workflow support to manage the process for approval of usage
- Provisioning - automates provisioning of resources / IT resource deployment for efficient operations and to address fluctuating business requirements
- Integration with existing hardware, to leverage available resources and previous investment
IBM Service Delivery Manager (ISDM) is a new offering: a pre-configured management solution optimized for managing virtual environments and cloud deployments. Like Tivoli Service Automation Manager, this again is a "software only" offering. In addition to the IBM Tivoli Service Automation Manager features, ISDM includes the following capabilities:
- Pre-integrated solution, delivered as virtual images, for faster installation and time to value
- Monitoring, to provide visibility of the performance of virtual machines
- Usage and accounting tracking
- Servers ready for high availability
- Energy management, for tracking and optimizing operational costs
IBM CloudBurst, compared to Tivoli Service Automation Manager and ISDM, not only has the software solution optimized for cloud but also ships the integrated hardware. In addition to what is provided by its sibling offerings, IBM CloudBurst provides the following capabilities:
- Self-contained solution (managed from and to the environment) to accelerate cloud deployments
- Pre-integrated solution bundled with hardware, software, storage, network and QuickStart services, for the fastest time to value
Thus the three offerings are designed for specific purposes, and selecting the right solution is based on your requirements. You can pick from the following list and, depending on what you need, it is easy to select the solution that meets those requirements:
- Automation and provisioning
- Storage and network hardware
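Purely as a reading aid (not an official selection tool), the distinctions between the three offerings can be boiled down to a toy decision helper; the requirement flags are my own simplification:

```python
def suggest_offering(needs_integrated_hardware: bool,
                     wants_preintegrated_images: bool) -> str:
    """Toy mapping of requirements onto the three offerings above."""
    if needs_integrated_hardware:
        return "IBM CloudBurst (integrated HW + SW + QuickStart services)"
    if wants_preintegrated_images:
        return "IBM Service Delivery Manager (pre-integrated virtual images)"
    return "Tivoli Service Automation Manager (a la carte software)"

print(suggest_offering(needs_integrated_hardware=False,
                       wants_preintegrated_images=True))
```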
Quite often people are interested to know about IBM WebSphere CloudBurst and how it is different from the three offerings discussed above. While IBM CloudBurst and WebSphere CloudBurst are both appliances that accelerate time-to-value and reduce costs, they are designed for two distinct purposes.
IBM CloudBurst is a general-purpose cloud solution. It enables users to virtualize, deploy, manage, and monitor highly heterogeneous workloads in their private cloud. IBM CloudBurst is a pre-packaged cloud with integrated blades, storage, network switches, and software management. WebSphere CloudBurst, by contrast, is purpose-built to enable users to create, deploy, and manage private clouds created from IBM Hypervisor Edition images and patterns. IBM WebSphere CloudBurst delivers specialized WebSphere knowledge in the form of pre-configured, optimized WebSphere patterns and images. WebSphere CloudBurst is a cloud management device: a 1U appliance that manages a private or on-premise cloud. It requires supporting infrastructure (hypervisors, storage, and networking) and virtual images.
Their integration augments the value of each offering, with IBM CloudBurst enabling end-to-end service request governance for WebSphere CloudBurst provisioning, and users still able to leverage a single portal for cloud service requests for rapid, optimized provisioning of virtualized WebSphere systems.
Driven by trends in the consumer internet, cloud computing is becoming the new way to consume and deliver IT services. As IT professionals, we need to understand the different aspects of cloud to seize this opportunity to grow our careers and serve our clients towards a successful adoption of cloud computing.
I'm in the process of learning several aspects of cloud - emerging trends in cloud solutions, workloads, infrastructure, technologies and the modern services industry. So I thought of this idea to post my learning as a series of blogs from which any cloud enthusiast can benefit to understand cloud computing. When discussing a topic, instead of reinventing the wheel, let's build the content with links to different articles that provide for further reading. The articles shall cover the entire lifecycle of a cloud project, covering various aspects right from business requirements, architecture / design and implementation to operations. The intention of this blog is to provide the reader a step-by-step understanding of any one or more of the following broad range of topics:
- Definition of Cloud Computing
- Delivery Models - Infrastructure as a Service, Platform as a Service, Software as a Service, Business Process as a Service
- Deployment Models - Private Clouds, Public Clouds, Hybrid Clouds, Industry Clouds
- Management - Asset Management, Business Resiliency, Service Management, Capacity Planning, Charging models and economics, Usage Reporting, Billing & Metering, Provisioning, Monitoring
We will have something to learn every week and will dedicate each week to understanding one of the above topics. So by the end of the 16 weeks that we have remaining in the year, we will have learned all the steps to walk on cloud. The comments on these posts from all of the members will definitely go a long way in getting our steps right and enriching the content. So c'mon everyone, let's take a walk in the clouds - step by step…
Chapter 7 - IBM Tivoli Service Automation Manager – Architecture Overview
Each of the integrated capabilities required to implement service management for the cloud is provided by IBM Tivoli Service Automation Manager (referred to as TSAM in this chapter). TSAM supports the cloud through all the phases of the entire service lifecycle. For supporting these phases, it provides the following capabilities:
- Provisioning and scheduling
- On-boarding through automation
- Complete lifecycle service management
Each of these capabilities is delivered by discrete components within TSAM. TSAM provides a Web 2.0 interface which exposes the service offerings / catalog (the external UI).
A quick view of the architecture will help you understand how these capabilities are provided seamlessly by multiple components underneath TSAM.
Figure 1 Tivoli Service Automation
Manager - Architecture Overview
Below are the key components and their responsibilities:
- Self-service UI: interacts with the end user; provides access to the Service Catalog; collects parameters for service requests
- Tivoli Service Request Manager: exposes end-user services through offerings; provides approvals and notifications at the business level; handles reservation of resources
- Tivoli Service Automation Manager (Service Design): drives automation through management plans; provides governance, including error handling by the admin; fulfills plans by executing TPM workflows/LDOs
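To visualize how a request travels through these layers, here is an illustrative toy pipeline in plain Python. It mimics the layering only; the class and method names are mine, not the TSAM API:

```python
class ProvisioningWorkflow:
    """Stands in for a TPM workflow/LDO at the bottom of the stack."""
    def run(self, spec: dict) -> str:
        return f"provisioned virtual server with {spec}"

class ServiceAutomation:
    """Stands in for TSAM: fulfills management plans via workflows."""
    def __init__(self) -> None:
        self.workflow = ProvisioningWorkflow()

    def fulfill(self, spec: dict) -> str:
        return self.workflow.run(spec)

class ServiceRequestManager:
    """Stands in for TSRM: offerings, approvals, notifications."""
    def __init__(self) -> None:
        self.automation = ServiceAutomation()

    def submit(self, spec: dict) -> str:
        # Approval and reservation logic would sit at this layer.
        return self.automation.fulfill(spec)

# The self-service UI collects parameters and calls down the stack.
print(ServiceRequestManager().submit({"cpu": 2, "mem_gb": 4}))
```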
Even though I would like to go into the details of each component as part of this post, I'm not going to do so because, as discussed in the initial post, the objective of this blog is to provide the reader with pointers to the content they need and not to repeat what is already available elsewhere. So you can read more about the TSAM architecture on the TSAM wiki on developerWorks.
I'm including the list of software bundles for TSAM 7.2.1 to give a better understanding of the components involved:
- Tivoli Service Automation Manager 7.2.1 (Base)
- Tivoli Service Request Manager® 7.2.0 with Fix Pack 1 (184.108.40.206)
- Tivoli Provisioning Manager (the package includes the base services and middleware)
- IBM Tivoli Monitoring (optional) 6.2.1 or 6.2.2
- Base services (Maximo®), included with Tivoli Provisioning Manager
- Directory Server (LDAP)
- WebSphere Application Server Network Deployment 220.127.116.11, 18.104.22.168 on SUSE Linux® 11
- IBM HTTP Server 22.214.171.124, 126.96.36.199 on SUSE Linux® 11
Again, the TSAM infocenter provides more details on the typical hardware and software requirements and related topics.
In the first two parts of this series we tried to define the term "cloud computing". Having understood what it is, let us now look at why cloud computing is gaining importance now.
As the world is becoming more interconnected, infrastructure needs to become dynamic to bring together business and IT. The growth of instrumentation, interconnection and intelligence in the world is driving the emergence of IT and business services and the requirement for service management systems. To create such a dynamic infrastructure, customers (businesses) are looking for the following capabilities:
- They do not have to worry about the full IT capacity they need at peak time.
- They pay only for what they actually use. They do not have to buy servers or capacity for maximum use, i.e. they move to a reduced Capex (capital expense) model, leveraging the economies of Opex (operating expense) for IT.
- Automatic or semi-automatic allocation and de-allocation of resources on demand.
If you research how a business can address or acquire the above capabilities, cloud computing seems to hold the key answers to the above considerations. An effective cloud computing deployment is built on a Dynamic Infrastructure and is highly optimized to achieve more with less, leveraging virtualization, standardization and automation to free up budget for new initiatives. Cloud Computing is a new IT consumption and delivery model for businesses that makes the above capabilities a reality.
- A Consumption model: a new user experience and a business model
- Standardized services offerings
- Ease of access
- A Computing and Delivery model: Integrated Service Management
Progression toward transformation starts with optimizing
existing assets/processes and leverages best in class technology at transitions.
Each step balances improvements in efficiency and effectiveness and can be measured
by business returns. However, an organization can move to cloud systematically
taking one step at a time, or they can move right to a cloud deployment if it
aligns best with their strategic vision for the business.
Readying the infrastructure requires the implementation of a Dynamic Infrastructure: consolidate your servers and storage, implement virtualization technologies to increase utilization, standardize your processes for operational efficiency, automate procedures for more flexible delivery, and enable clients for self-service. Then you identify common workloads and set up shared resources; finally, to achieve a true cloud-enabled environment, clients must be able to provision the workloads in an automated fashion. Moving to a cloud consumption and delivery model is a big transformation effort. So before taking this long journey, it is important to understand the typical use cases, the workloads that you can move to cloud, and the associated ROI.
Cloud Business Use Cases
One of the
earliest groups to take a step towards identifying some of these use cases is
Computing Use Cases Workgroup on google groups. This collaborative
effort of cloud consumers and cloud vendors has put out a white paper that
discusses some of the basic definitions. The paper further discusses the
various Use Case Scenarios from a Delivery and Deployment model perspective. The
white paper is in its fifth iteration were the group members are now discussing
what and how about “moving to the cloud”. The current version of the paper can
be found here.
Another effort on the subject of use cases, from a business perspective, is the "Strengthening your Business Case for Using Cloud" white paper from The Open Group. I was one of the key contributors to this effort. This white paper incorporates a unique collection of cloud business use cases, findings, and conclusions that can help executives and business process owners make the appropriate cloud investment decisions. By describing real-world granular business problems, requirements, and analysis of the value and business implications of cloud computing, reading this paper will equip you with the necessary business insights to justify your path for using cloud.
A key consideration is that the adoption of cloud computing will be workload driven. The delivery model (public, private or hybrid) selection depends on the workload. Research studies by IBM indicate the different types of workloads that could be delivered internally with a private cloud or on a fully shared environment on a public cloud. Database- and application-oriented workloads emerge as most appropriate for private clouds, whereas infrastructure workloads emerge as most appropriate for the public cloud.
Most customers want to start with something under their control and behind their firewalls. So the tremendous interest today among businesses is in private clouds - in both large enterprises and the mid-market. There is also great interest in public cloud services, especially among smaller clients for infrastructure services. As businesses become more comfortable moving workloads to public clouds, more domain applications will become available on the cloud. This will also result in a proliferation of hybrid clouds as businesses integrate their private cloud environments with public cloud services.
Benefits of Cloud Computing
The analysis of these use cases, as well as what is discussed in The Open Group white paper, points to the following benefits of using cloud:
- Ability to dynamically source and consume IT services (infrastructure, platforms, software, and business services) on a demand / use basis - an instantly secure and managed service provisioning process
- Ability to move / abstract the service complexity off-premise, providing more efficient availability, resilience, and security patching
- Increased agility - the ability to adjust to business requirements and market forces on demand
- Better risk management through improved business resiliency
- Pay-per-use pricing model, eliminating the cost of excess capacity
- Easy and flexible service for users, enabling self-service requests and delivering services more rapidly, with fewer errors, and based on requested qualities of service or SLAs
- Reduced time to market and acceleration of innovation projects
- Reduced costs, both capital and operational expenditures
- Freeing up skilled resources to focus on high value work and innovation
- Significantly improved energy efficiency and reduced idle time
Cloud Deployment and Delivery Models
There are multiple delivery and deployment models that cloud computing supports to deliver the promised capabilities. This choice and flexibility of deployment and delivery models is key to the success of the cloud computing platform. Standard cloud service types are emerging and guiding the IT industry's development. The different service delivery models are:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Software as a Service (SaaS)
- Business Process as a Service (BPaaS)
The multiple deployment and delivery models can co-exist, and it is possible to integrate with traditional IT systems and with other clouds. We will discuss them in detail in the coming chapters.
Chapter 11 – Self Service Portal & Service Catalog
One of the key aspects of cloud service management is automation, to ensure that you can manage huge and growing infrastructures while controlling cost and quality. To attain this goal, we need a Self Service Portal and a Service Catalog. Results show that with these components in place, the wait time for services has decreased by an average of 98%.
Traditional processes would require you to fill out a paper form and put it through the approval processes; finally, the capex is approved and the order is placed for the hardware and software. You would also be required to constantly follow up with the IT provider teams to know the status of the hardware / software availability, their installation and provisioning, etc. Most often, even if all the details are provided correctly upfront, there are chances of errors in the hardware and software provisioning, as the process is manual.
With the Self-Service Portal, these requests and their tracking are automated. You can track the status of the workflow online, ask for services when you need them, and have most of it provisioned automatically through the implemented workflows. There is less chance for error and faster provisioning with the Self-Service Portal and the Service Catalog. Thus the Self-Service GUI allows end users to request IT resources and optionally have those requests fulfilled automatically.
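A minimal sketch of the automated request tracking just described; the states and transitions below are generic illustrations of such a workflow, not the product's exact states:

```python
# Allowed transitions for a self-service request.
TRANSITIONS = {
    "submitted": {"approved", "rejected"},
    "approved": {"provisioning"},
    "provisioning": {"active", "failed"},
}

class ServiceRequest:
    def __init__(self, name: str) -> None:
        self.name, self.state = name, "submitted"

    def advance(self, new_state: str) -> None:
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"{self.state} -> {new_state} is not allowed")
        self.state = new_state   # this status is what users poll online

req = ServiceRequest("dev-test-vm")
for step in ("approved", "provisioning", "active"):
    req.advance(step)
print(req.name, req.state)   # dev-test-vm active
```

Encoding the allowed transitions explicitly is what removes the manual follow-up: a request can never silently skip approval or land in an undefined state.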
Tivoli Service Automation Manager provides a set of pre-defined services for Virtual Server Management. These are available as part of a service catalog that is accessible to the end user through the Self-Service Portal.
The Virtual Server Management functionality addresses a long-standing need by
data centers to efficiently manage the self-service deployment of virtual
servers and associated software. Using a set of simple, point-and-click tools,
an end user can select a software stack and have the software automatically
installed or uninstalled in a virtual host that is automatically provisioned.
These tools integrate with IBM Tivoli Service Request Manager to provide a self-service portal for reserving, provisioning, recycling, and modifying virtual servers, and for working with server images, in a number of platform environments in a virtualized non-production lab (VNPL). This functionality ensures the integrity of fulfillment operations that involve a wide range of resource actions.
These capabilities enable you to achieve incremental value
by adopting a self-service virtual server provisioning process, growing and
adapting the process at your own pace, and adding task automation to further
reduce labor costs around defined provisioning needs.
Before users in the data center can create and provision virtual servers, administrators perform a set of setup tasks, including configuring the integration, setting up the virtualization environments managed by the various hypervisors, and running a Tivoli Provisioning Manager discovery to discover servers and images across the data center.
After this initial setup has been completed, the
administrator associates the virtual server offerings with Tivoli Provisioning
Manager virtual server templates. In addition, the Image Library is used as the
source for software images to be used in provisioning the virtual servers.
Data center users who have Cloud Admin rights can use the
Service Automation Manager Offering Catalog application to create and provision
virtual server deployments.
The Offering Catalog application contains all the offerings that are available to the end user. There are steps that you need to perform on the catalog to make specific offerings visible to specific end user groups. The end user interface is a Web 2.0 interface which can be edited to expose it via a Service Catalog. The Web 2.0 UI is designed in an extensible, modular way that allows for extending it programmatically.
Tivoli Service Automation Manager defines security groups
that are used to provide role-based functions that can be performed via the
administrative user interface or the self-service user interface. We will
discuss the User access management for the Self-Service Virtual Server
Provisioning component in the next chapter.
Load Balancers Are Dead: Time to Focus on Application Delivery
2 February 2009 | Mark Fabbi | Gartner RAS Core Research Note G00164098
When looking at feature requirements in front of and between server tiers, too many organizations think only about load balancing. However, the era of load balancing is long past, and organizations will be better served to focus their attention on improving the delivery of applications.
Overview This research shifts the attention from basic load-balancing features to application delivery features to aid in the deployment and delivery of applications. Networking organizations are missing significant opportunities to increase application performance and user experience by ignoring this fundamental market shift.
Enterprises are still focused on load balancing.
There is little cooperation between networking and application teams on a holistic approach for application deployment.
Properly deployed application delivery controllers can improve application performance and security, increase the efficiency of data center infrastructure, and assist the deployment of the virtualized data center.
Network architects must shift attention and resources away from Layer 3 packet delivery networks and basic load balancing to application delivery networks.
Enterprises must start building specialized expertise around application delivery.
What You Need to Know
IT organizations that shift to application delivery will improve internal application performance, which will noticeably improve business processes and productivity for key applications. For external-facing applications, end-user experience and satisfaction will improve, positively affecting the ease of doing business with supply chain partners and customers. Despite application delivery technologies being well proven, they have not yet reached a level of deployment that reflects their value to the enterprise, and too many clients do not have the right business and technology requirements on their radar.
Analysis
What's the Issue?
Many organizations are missing out on big opportunities to improve the performance of internal processes and external service interactions by not understanding application delivery technologies. This is very obvious when considering the types of client inquiries we receive on a regular basis. In the majority of cases, clients phrase their questions to ask specifically about load balancing. In some cases, they are replacing aged server load balancers (SLBs), purchased before the advent of the advanced features now available in leading application delivery controllers (ADCs). In other cases, we get calls about application performance challenges, and, after exploring the current infrastructure, we find that these clients have modern, advanced ADCs already installed, but they haven't turned on any of the advanced features and are using new equipment as if it were a circa-1998 SLB. In both cases, there is a striking lack of understanding of what ADCs can and should bring to the enterprise infrastructure. Organizations that still think of this critically important position in the data center as one that only requires load balancing are missing out on years of valuable innovation and are not taking advantage of the growing list of services that are available to increase application performance and security and to play an active role in the increasing virtualization and automation of server resources. Modern ADCs are the only devices in the data center capable of providing a real-time, pan-application view of application data flows and resource requirements. This insight will continue to drive innovation of new capabilities for distributed and virtualized applications.
Why Did This Happen?
The "blame" for this misunderstanding can be distributed in many ways, though it is largely history that is at fault. SLBs were created to better solve the networking problem of how to distribute requests across a group of servers responsible for delivering a specific Web application. Initially, this was done with simple round-robin DNS, but because of the limitations of this approach, function-specific load-balancing appliances appeared on the market to examine inbound application requests and to map these requests dynamically to available servers. Because this was a networking function, the responsibility landed solely in network operations and, while there were always smaller innovative players, the bulk of the early market ended up in the hands of networking vendors (largely Cisco, Nortel and Foundry [now part of Brocade]). So, a decade ago, the situation basically consisted of networking vendors selling network solutions to network staff. However, innovation continued, and the ADC market became one of the most innovative areas of enterprise networking over the past decade. Initially, this innovation focused on the inbound problem, such as the dynamic recognition of server load or failure and session persistence to ensure that online "shopping carts" weren't lost. Soon, the market started to evolve to look at other problems, such as application and server efficiency. The best example would be the adoption of SSL termination and offload. Finally, the attention turned to outbound traffic, and a series of techniques and features started appearing in the market to improve the performance of the applications being delivered across the network. Innovations migrated from a pure networking focus to infrastructure efficiencies to application performance optimization and security; the ADC went from a networking product to one that touched networking, server, applications and security staff. The networking vendors that were big players when SLB was the focus quickly became laggards in this newly emerging ADC market.
Current Obstacles
As the market shifts toward modern ADCs, some of the blame must rest on the shoulders of the new leaders (vendors such as F5 and Citrix NetScaler). While their products have many advanced capabilities, these vendors often undersell their products and don't do enough to clearly demonstrate their leadership and vision to sway more of the market to adopting the new features. The other challenge for vendors (and users) is that modern ADCs impact many parts of the IT organization. Finally, some blame rests with the IT organization. By maintaining siloed operational functions, it has been difficult to recognize and define requirements that fall between functional areas.
Why Do We Need More, and Why Should Enterprises Care?
Not all new technologies deserve consideration for mainstream deployment. However, in this case, advanced ADCs provide capabilities to help mitigate the challenges of deploying and delivering the complex application environments of today. The past decade saw a mass migration to browser-based enterprise applications targeting business processes and user productivity, as well as increasing adoption of service-oriented architectures (SOAs), Web 2.0 and now cloud computing models. These approaches tend to place increased demand on the infrastructure because of "chatty" and complex protocols. Without features to mitigate latency, to reduce round trips and bandwidth, and to strengthen security, these approaches almost always lead to disappointing performance for enterprise and external users. The modern ADC provides a range of features (see Note 1) to deal with these complex environments. Beyond application performance and security, application delivery controllers can reduce the number of required servers, provide real-time control mechanisms to assist in data center virtualization, and reduce data center power and cooling requirements. ADCs also provide simplified deployment and extensibility and are now being deployed between the Web server tier and the application or services tier (for SOA) servers. Most ADCs incorporate rule-based extensibility that enables customization of the behavior of the ADC. For example, a rule might enable the ADC to examine the response portion of an e-commerce transaction to strip off all but the last four digits of credit card numbers (a minimal sketch of such a rule appears after the attribute list below). Organizations can use these capabilities as a simple, quick alternative to modifying Web applications. Most ADCs incorporate a programmatic interface (open APIs) that allows them to be controlled by external systems, including application servers, data center management and provisioning applications, and network/system management applications. This capability may be used for regular periodic reconfigurations (end-of-month closing) or may even be driven by external events (taking an instance of an application offline for maintenance). In some cases, the application programming interfaces link the ADC to server virtualization systems and data center provisioning frameworks in order to deliver the promise of real-time infrastructure.
What Vendors Provide ADC Solutions Today?
During the past five years, the innovations have largely segmented the market into vendors that understand complex application environments and the subtleties in implementations (examples would be F5, Citrix NetScaler and Radware) and those with more of a focus on static feature sets and networking. "Magic Quadrant for Application Delivery Controllers" provides a more complete analysis and view of the vendors in the market. Vendors that have more-attractive offerings will have most or all of these attributes:
A strong set of advanced platform capabilities
Customizable, extensible platforms and solutions
A vision focused on application delivery networking
Affinity to applications:
Needs to be application-fluent (that is, they need to "speak the language")
Support organizations need to "talk applications"
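As promised above, here is a minimal sketch of the credit-card-masking rule described in the analysis. Real ADCs express such rules in their own rule languages (F5 iRules, for instance), so this Python version only demonstrates the logic of the transformation, not any vendor's syntax.

```python
# Illustrative sketch of an ADC response-rewriting rule: mask all but the
# last four digits of any credit card number found in a response body.
import re

# Matches 13- to 16-digit card numbers, optionally separated by spaces/dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")


def mask_card_numbers(response_body: str) -> str:
    """Replace all but the last four digits of each card number."""
    def _mask(match):
        digits = re.sub(r"[ -]", "", match.group(0))
        return "*" * (len(digits) - 4) + digits[-4:]
    return CARD_RE.sub(_mask, response_body)


print(mask_card_numbers("Charged card 4111 1111 1111 1111 for $42"))
# -> Charged card ************1111 for $42
```

On a real ADC this logic would run inline on every response, which is exactly why it is a quick alternative to changing the Web application itself.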
What Should Enterprises Do About This?
Enterprises must start to move beyond refreshing their load-balancing footprint. The features of advanced ADCs are so compelling, for those that make the effort to shift their thinking and organizational boundaries, that continuing to invest in SLBs wastes time and resources. In most cases, the incremental investment in advanced ADC platforms is easily compensated by reduced requirements for servers and bandwidth and by clear improvements in end-user experience and productivity. In addition, enterprises should:
Use the approach documented in "Five Dimensions of Network Design to Improve Performance and Save Money" to understand user demographics and productivity tools and applications. Also, part of this requirements phase should entail gaining an understanding of any shifts in application architectures and strategies. This approach provides the networking team with much greater insight into broader IT requirements.
Understand what they already have in their installed base. We find, in at least 25% of our interactions, enterprises have already purchased and installed an advanced ADC platform, but are not using it to its potential. In some cases, they already have the software installed, so two to three days of training and some internal discussions can lead to massive improvements.
Start building application delivery expertise. This skill set will be one that bridges the gaps between networking, applications, security and possibly the server. Organizations can use this function to help extend the career path and interest for high-performance individuals from groups like application performance monitoring or networking operations. Networking staff aspiring to this role must have strong application and personal communication skills to achieve the correct balance. Some organizations will find they have the genesis of these skills scattered across multiple groups. Building a cohesive home will provide immediate benefits, because the organization's barriers will be quickly eliminated.
Start thinking about ADCs as strategic platforms, and move beyond tactical deployments of SLBs. Once organizations think about application delivery as a basic infrastructure asset, the use of these tools and services (and associated benefits) will be more readily achieved.
Note: We have defined a category of advanced ADCs to distinguish their capabilities from basic, more-static function load balancers. These advanced ADCs operate on a per-transaction basis and achieve application fluency. These devices become actively involved in the delivery of the application and provide sophisticated capabilities, including:
Application layer proxy, which is often bidirectional
As we discussed in my previous post, transparency and greater control are the need of the hour with regard to security on the cloud. Let's examine how this is done by the popular cloud providers and understand the methods and the technologies. We need to secure the infrastructure, network, endpoints, applications, processes, data and information, and overall have governance in place to mitigate risk and meet compliance requirements. Let us begin with the infrastructure.
The key areas for a security team to design for with regard to infrastructure security include:
logs on all resources – VMs and hypervisors
Let us start looking at the public cloud implementations to
understand how they are managing these aspects.
Almost all the vendors – IBM, Amazon and others – provide a means to SSH into the guest OS with keys. The session is encrypted and authenticated with a public/private key pair, which can be generated by the customer.
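For readers who have not used key-based access before, here is a minimal Python sketch of connecting to a provisioned guest OS with the Paramiko SSH library. The host name, user name and key path are placeholders for whatever your provider's portal reports; only the customer-generated key pair is essential to the scheme.

```python
# Minimal sketch: key-based SSH into a provisioned guest OS using Paramiko.
import os
import paramiko

client = paramiko.SSHClient()
# In production, pre-load known host keys rather than auto-accepting them.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    hostname="vm.example-cloud.com",                      # placeholder address
    username="idcuser",                                   # placeholder account
    key_filename=os.path.expanduser("~/.ssh/cloud_key"),  # customer's private key
)
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```

The provider only ever holds the public half of the key, which is what makes this model attractive from a transparency standpoint: the customer controls the credential.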
SmartCloud is designed with enterprise security as a top priority. Access
to the infrastructure self-service portal and application programming interface
(API) is restricted to users with an IBM Web Identity. The infrastructure
complies with IBM security policies, including regular security scans and controlled
administrative actions and operations. Within our delivery centres, customer
data and virtual machines are kept in the data centre where provisioned, and
the physical security is the same as that for IBM's own internal data centres. With the virtual private network (VPN) option, customers can isolate their servers in the IBM SmartCloud on a virtual local area network (VLAN) that can act as an extension of their internal network. This VPN capability can also be used to create security zones in an Internet-facing configuration to better protect their servers against attacks.
Roles across LotusLive and their access authorizations are recorded in a Separation of Duty matrix. LotusLive security measures also include:
Security-rich infrastructure: security configuration reviews and periodic vulnerability scanning of all systems and infrastructure.
Multi-layered enforcement points providing application security.
Compliance with periodic programs that address all elements of the service.
We will see how the infrastructure security aspects are dealt with for private clouds in my next post. Stay tuned and keep those comments coming. I had some of my readers tell me that the blog entries are not showing up properly in Internet Explorer. While I will make the effort to fix the issue, please use Firefox or any other browser in the meantime.
And if you find these posts interesting, don't forget to rate the post (click on the stars), and if you have an extra minute, do put in a comment on what aspects you find interesting or would like discussed.
Organizations looking to optimize across the application lifecycle recognize the need for enhanced innovation and speed to market. Yet most IT resources are focused on covering the basics, leaving fewer resources to support business agility. The solution: Platform as a Service (PaaS).
IBM’s PaaS solution, IBM SmartCloud Application Services, or SCAS, allows clients to differentiate themselves with built-in flexible services that let them build and customize cloud solutions their way, leading to a competitive advantage. Companies are using enterprise-class IBM Application Services to measure and respond to market demands, capture new markets, and reduce application delivery and management costs.
What are the benefits of a PaaS solution?
First, with the IBM Collaborative Lifecycle Management Service included within SCAS, development teams can establish shared team development environments in minutes, where it used to take weeks. Within hours they can define their development team and begin working collaboratively to respond to business needs.
Another significant benefit of a PaaS approach is the time it takes to get an application deployed and to market. Application deployment can take weeks on a traditional environment but with IBM SmartCloud Application Services, applications can be deployed to the cloud in minutes.
SCAS also allows clients to respond rapidly to changing market conditions by deploying or modifying cloud-centric (“born on the cloud”) or cloud-enabled (legacy) applications quickly and easily. In fact, developers can move from the dev/test environment directly into production with SCAS, taking advantage of proven repeatable patterns contained within the SmartCloud Application Workload Service. These repeatable patterns allow clients to eradicate errors by avoiding manual processes, which drives consistent results, increases productivity, and reduces risk.
IBM SmartCloud Application Services are compatible with the newly announced IBM PureSystems family. For example, through SmartCloud Application Services clients can rapidly design, develop, and test their dynamic applications on IBM's public cloud and deploy those same application patterns on a private cloud built with PureApplication Systems, or vice versa.
Want to try IBM’s PaaS . . . for free? IBM SmartCloud Application Services is now in pilot and accepting new clients who want to accelerate their cloud initiatives. Clients won’t pay for SCAS services during the pilot; they will only be charged for the underlying SmartCloud Enterprise infrastructure used by the services (that’s because SCAS runs on top of IBM’s Infrastructure as a Service offering, SmartCloud Enterprise, or SCE). Existing SCE customers can get up and running on the pilot quickly and start realizing the benefits of PaaS right away.
To be considered for the program, new or existing SCE customers should visit the IBM SmartCloud Application Services web site and click the button on the right titled “Get a jump on the competition with the SmartCloud Application Services pilot program.”
Who is using IBM SmartCloud Application Services? CLD Partners, a leading provider of IT consulting services with a particular focus on cloud computing, began using SCAS during the beta which launched in 2011 and has now transitioned into the pilot program.
“We share IBM’s vision for how enterprise customers can achieve huge productivity gains by embracing cloud technologies. SCAS allowed us to utilize world class software in a managed environment that greatly reduced the complexity of the deployment while also providing for future scalability that our customers only pay for when they need it,” said Steve Clune, Founder and CEO of CLD Partners. “Ultimately, traditional infrastructure planning and configuration that would have required weeks was literally reduced to hours. And future flexibility as infrastructure needs change is virtually limitless.”
Who would be interested in the SmartCloud Application Services pilot program? IT Operations, Independent Software Vendors (ISVs), Line of Business, and Application Developers would benefit from the SCAS pilot program. And it doesn’t matter the company size, enterprise or mid-market; all types of businesses can realize value from getting their applications to market faster.
I discussed The Next Big Thing – Cloud-enabled Business Model Innovation – in my previous post. But you may be asking: where do I start? That’s where the Cloud Adoption Patterns work that IBM has pioneered is going to help. This is great analysis that IBM has done based on the thousands of cloud engagements we have delivered so far. It is a good abstraction of the ways organizations are consuming cloud, and a good starting/entry point for discussions on cloud. The four most common entry points to cloud solutions are discussed in the picture above. I love these videos on YouTube – Cloud Adoption Patterns – which tell you the essence of these patterns in less than two minutes.
Cloud Enabled Data Center – to achieve better return on investment and manage complexity by extending virtualization well beyond just hardware consolidation.
Cloud Platform Services – to accelerate the deployment and management of workloads on a shared, integrated middleware stack (discussed further below).
Solutions on Cloud – to access enterprise-level capabilities through a provider’s applications running on a cloud infrastructure; to improve innovation and flexibility while minimizing risk and capital expense.
Cloud Service Provider – to innovate with new business models by building, extending, enabling and marketing cloud services.
For each of these patterns of cloud adoption, we have defined a set of proven projects, supported with software, services and solutions, to help businesses streamline the implementation of their chosen cloud capabilities.
The Cloud Enabled Data Center pattern is the case for most private cloud implementations: most customers start by providing infrastructure as a service on the cloud. This pattern also discusses how we can share infrastructure across multiple projects and drive benefits. It also covers the considerable automation of operations and business processes that makes possible a responsive IT department, one that can help the business be agile.
The next level of gain, or reuse, is to run your workloads on a shared stack of middleware. The Platform as a Service pattern is an integrated stack of middleware that is optimized to execute and manage different workloads, for example batch, business process management and analytics. This middleware stack standardizes and automates a common set of topologies and workloads, providing businesses with elasticity, efficiency and automated workload management. A cloud platform dynamically adjusts workload and infrastructure characteristics to meet business priorities and service level agreements. When the layers below understand what workloads are running on top of them and optimize themselves accordingly, those workloads run more efficiently and at a lower cost. The Cloud Platform Services adoption pattern can also improve developer productivity by eliminating the need to work at the image level, so that developers can concentrate on application development.
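To make the idea of SLA-driven elasticity a little more concrete, here is a toy Python sketch of the kind of decision a platform's workload manager makes. The thresholds, names and the scaling policy itself are purely illustrative assumptions, not the logic of any IBM product.

```python
# Toy illustration of SLA-driven elasticity: grow or shrink the instance
# count for a workload based on observed response time versus the SLA.
SLA_TARGET_MS = 500     # assumed service-level target for average response time
SCALE_IN_MARGIN = 0.5   # scale in only when comfortably under the target


def desired_instances(current: int, avg_response_ms: float) -> int:
    """Decide how many instances the workload should run on."""
    if avg_response_ms > SLA_TARGET_MS:
        return current + 1          # SLA at risk: scale out
    if avg_response_ms < SLA_TARGET_MS * SCALE_IN_MARGIN and current > 1:
        return current - 1          # well under target: release capacity
    return current


# Example: 3 instances averaging 620 ms breaches the 500 ms target.
print(desired_instances(3, 620.0))  # -> 4
```

The real value of the pattern is that this loop lives in the platform rather than in each application team's scripts.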
The Solutions on Cloud pattern maps to the SaaS model, where you leverage cloud to innovate with speed and efficiency to drive sales and profitability. Here we look at creating and consuming business solutions on the cloud. Some of the key offerings in this space are business process design, social and collaboration tools, supply chain and inventory, digital marketing optimization, B2B integration services and the like. These generic services consumed from the cloud relieve you of the pain of setting things up from scratch and enable you to scale based on your demands.
The Cloud Service Provider (CSP) pattern is the one most telcos adopt when they have to serve multiple consumers with a single cloud solution. We provide tools and technologies to design and deploy a highly secure, multi-tenant cloud services infrastructure that can integrate nicely with plenty of third-party applications.
As we have seen, the IaaS pattern is the easiest to implement, and there is more work to do when we implement the SaaS or CSP patterns. But the gain is greater when we share at the software or application level. Depending on where you are in your current IT environment, you can pick and implement whichever of these patterns suits you. The work we have done to analyse these patterns and provide a consistent set of technologies and tools to build them out should make life easier for you. Leverage it – less pain and more to gain.