Chapter 6 - Multiple Entry Points to Deploy and Manage Cloud-Based Services
Cloud Service Management capabilities are needed to enable visibility, control and automation of cloud services. IBM provides the following open-standards-based, integrated capabilities to implement service management for the cloud.
If you are looking for an a la carte software offering for maximum flexibility, you start with IBM Tivoli Service Automation Manager. This flexible solution supports user-driven service requests and automated resource deployment. Its key capabilities are:
- A self-service user interface for service requests, for improved responsiveness and efficiency
- Workflow support to manage the usage-approval process
- Automated provisioning of IT resources for efficient operations and fluctuating business requirements, working with existing hardware to leverage available resources and previous investment
IBM Service Delivery Manager (ISDM) is a newer offering: a pre-configured management solution optimized for managing virtual environments and cloud deployments. Like Tivoli Service Automation Manager, it is a "software only" offering. In addition to the IBM Tivoli Service Automation Manager features, ISDM includes the following capabilities:
- A pre-integrated solution, delivered as virtual images, for faster installation and time to value
- Monitoring, to provide visibility into the performance of virtual machines
- Usage and accounting tracking
- A server ready for high availability
- Energy management for tracking and optimizing operational costs
IBM CloudBurst, compared to Tivoli Service Automation Manager and ISDM, not only includes the cloud-optimized software solution but also ships the integrated hardware. In addition to what is provided by its sibling offerings, IBM CloudBurst provides the following capabilities:
- A self-contained solution (both managed-from and managed-to environments) to accelerate cloud deployments
- A pre-integrated solution bundled with hardware, software, storage, network and QuickStart services for the fastest time to value
Thus, the three offerings are designed for specific purposes, and selecting the right solution depends on your requirements. Depending on which of the following capabilities you need, it is easy to select the solution that meets those requirements:
- Automation and provisioning
- Usage and accounting
- Storage and network hardware
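The selection logic described above can be sketched as a small function. The capability names and the mapping below are paraphrased from this chapter's summaries and are purely illustrative, not an official IBM sizing tool.

```python
# Illustrative selection logic for the three offerings discussed above.
# The capability keywords are paraphrased from this chapter, not an
# official IBM decision tool.

def pick_offering(needs):
    """needs: a set of capability keywords the customer requires."""
    if "integrated hardware" in needs:
        # only CloudBurst ships software plus integrated hardware
        return "IBM CloudBurst"
    if needs & {"monitoring", "usage and accounting", "energy management"}:
        # ISDM adds monitoring, usage/accounting and energy management
        return "IBM Service Delivery Manager"
    # a la carte software baseline: automation and provisioning
    return "Tivoli Service Automation Manager"

print(pick_offering({"automation", "provisioning"}))
print(pick_offering({"automation", "monitoring"}))
print(pick_offering({"monitoring", "integrated hardware"}))
```

Each call returns the least-heavyweight offering that covers the requested capabilities, mirroring the "pick based on what you need" guidance above.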
Quite often people want to know about IBM WebSphere CloudBurst and how it differs from the three offerings discussed above. While IBM CloudBurst and WebSphere CloudBurst are both appliances that accelerate time-to-value and reduce costs, they are designed for two distinct purposes.
- IBM CloudBurst is a general-purpose cloud solution. It enables users to virtualize, deploy, manage, and monitor highly heterogeneous workloads in their private cloud. IBM CloudBurst is a pre-packaged cloud with integrated blades, storage, network switches, and software management.
- IBM WebSphere CloudBurst is purpose-built to enable users to create, deploy, and manage private clouds created from IBM Hypervisor Edition images and patterns. IBM WebSphere CloudBurst delivers specialized WebSphere knowledge in the form of pre-configured, optimized WebSphere patterns and images. WebSphere CloudBurst is a cloud management device: a 1U appliance that manages a private or on-premise cloud. It requires supporting infrastructure (hypervisors, storage, and networking) and virtual images.
Their integration augments the value of each offering: IBM CloudBurst enables end-to-end service request governance for WebSphere CloudBurst provisioning, while users can still leverage a single portal for cloud service requests and for rapid, optimized provisioning of virtualized WebSphere systems.
Chapter 7 - IBM Tivoli Service Automation Manager – Architecture Overview
Each of the integrated capabilities required to implement service management for the cloud is provided by IBM Tivoli Service Automation Manager (referred to as TSAM in this chapter). TSAM supports the cloud through all phases of the entire service lifecycle. To support these phases, it provides capabilities such as:
- Provisioning and scheduling
- On-boarding through automation
- Complete lifecycle service management
Each of these capabilities is delivered by discrete components within TSAM:
- TSAM provides a Web 2.0 interface that presents the service offerings/catalog (the external UI)
- Service request management is taken care of by Tivoli Service Request Manager
- Tivoli Process Automation Engine (TPAE) provides the workflow automation engine
- For service provisioning and fulfillment, TSAM uses Tivoli Provisioning Manager (TPM)
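A rough sketch of how a request might flow through these layers follows. Every class and method name here is a hypothetical stand-in for illustration, not an actual TSAM, TPAE, or TPM API.

```python
# Hypothetical sketch of a service request flowing through TSAM's layers.
# The class and method names are illustrative stand-ins, not real
# TSAM / TPAE / TPM interfaces.

class ServiceCatalogUI:                 # stands in for the Web 2.0 catalog UI
    def submit(self, offering, params):
        return {"offering": offering, "params": params, "state": "requested"}

class RequestManager:                   # stands in for Tivoli Service Request Manager
    def approve(self, request):
        request["state"] = "approved"
        return request

class WorkflowEngine:                   # stands in for Tivoli Process Automation Engine
    def run(self, request, fulfill):
        if request["state"] != "approved":
            raise RuntimeError("workflow requires an approved request")
        return fulfill(request)

class ProvisioningManager:              # stands in for Tivoli Provisioning Manager
    def deploy(self, request):
        request["state"] = "provisioned"
        return request

ui, srm, tpae, tpm = ServiceCatalogUI(), RequestManager(), WorkflowEngine(), ProvisioningManager()
req = ui.submit("Linux VM", {"cpus": 2})
req = tpae.run(srm.approve(req), tpm.deploy)
print(req["state"])  # provisioned
```

The point of the sketch is the layering: the catalog UI only captures the request, request management governs approval, the workflow engine enforces process, and provisioning does the actual deployment.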
A quick view of the architecture will help you understand how these capabilities are provided seamlessly by multiple components underneath TSAM.
Figure 1 Tivoli Service Automation
Manager - Architecture Overview
Below are the key components and their responsibilities:
- Self-service user interface
  - Interaction with the end user
  - Access to the Service Catalog
  - Parameters for service requests
- Tivoli Service Request Manager
  - End-user services through offerings
  - Notifications on the business level
  - Reservation of resources
- Tivoli Service Automation Manager (Service Design)
  - Automation by management plans
  - Governance, including error handling by the admin
  - Plan fulfillment by executing TPM workflows/LDOs
Even though I would like to go into details on each component as part of this post, I won't, because as discussed in the initial post, the objective of this blog is to provide the reader with pointers to the content they need, not to repeat what is already available elsewhere. You can read more about the TSAM architecture on the TSAM wiki on developerWorks.
I’m including the list of software bundles for TSAM 7.2.1 to give a better understanding of the components involved:
- Tivoli Service Automation Manager (base)
- Tivoli Service Request Manager® 7.2.0 with Fix Pack 1 (126.96.36.199)
- Tivoli Provisioning Manager (the package includes the base services and middleware)
- IBM Tivoli Monitoring (optional) 6.2.1 or 6.2.2
- Base services (Maximo®) (included with Tivoli Provisioning Manager)
- Directory Server (LDAP)
- WebSphere Application Server Network Deployment 188.8.131.52, 184.108.40.206 on SUSE Linux® 11
- IBM HTTP Server 220.127.116.11, 18.104.22.168 on SUSE Linux® 11
Again, the TSAM infocenter provides more details on the typical hardware and software requirements and related topics.
Chapter 8 – Cloud Service Strategy
As discussed in Chapter 5, IBM Integrated Service
Management provides the software, systems, best practices and expertise
needed to manage infrastructure, people and processes—across the entire service
chain—in the data center, across design and delivery, and tailored for specific
industry requirements. The Service Management Goals are the following
- The ability to see everything that’s going on
across the infrastructure.
- The ability to keep the infrastructure in its
desired state by enforcing policies.
- The ability to manage huge and growing
infrastructures while controlling cost and quality.
These principles and goals are the same for Cloud Service Management as well. End-to-end service management includes assessing cloud maturity and readiness.
Cloud Service Strategy is mainly about deciding what services we want to deliver and how we ensure the competitiveness of providing them through the cloud. Today’s clients are seeking to utilize their assets to enable business innovation. The service strategy is all about choosing from across multiple compute/deployment models. We need to assess the current IT infrastructure and identify and evaluate the set of capabilities for their readiness to move to the cloud.
Selecting between the Cloud Deployment Models
For mission-critical workloads that drive business innovation, a private cloud is preferred. For secondary workloads and supporting business functions, a public cloud is suitable. While a public cloud delivers a select set of standardized business process, application and/or infrastructure services on a flexible price-per-use basis focused on utility, a private cloud drives efficiency, standardization and best practices while retaining greater customization and control, with a focus on innovation.
When doing service strategy, you need to consider expertise across industries and standards. At this Service Strategy phase, we normally consider reusing and leveraging solutions based on industry best practices, including ITIL.
Calculating the ROI
Cloud computing ROI is an important consideration during the Service Strategy phase. It includes verifying the following fundamental aspects of making a service available on the cloud:
- Service Level Agreements (SLAs)
- Compliance requirements
There are several ROI frameworks and methods available that allow you to validate your approach and strategy against these fundamental aspects. Most service companies have their own frameworks, which are typically the intellectual capital of their service teams.
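As a back-of-the-envelope illustration of the kind of calculation such ROI frameworks formalize, the sketch below compares an annualized on-premise cost against a pay-per-use cloud cost. All figures, rates, and formulas are illustrative assumptions, not any IBM or industry framework.

```python
# Illustrative cloud ROI sketch: compares the cost of running a workload
# on owned infrastructure vs. a pay-per-use cloud model. All numbers and
# formulas are made-up assumptions for illustration only.

def on_premise_annual_cost(hardware_capex, years_amortized, annual_opex):
    """Amortized hardware cost plus yearly operations (power, admin, licenses)."""
    return hardware_capex / years_amortized + annual_opex

def cloud_annual_cost(hourly_rate, avg_instances, hours_per_year=8760):
    """Pay-per-use cost for an average fleet size running year-round."""
    return hourly_rate * avg_instances * hours_per_year

def simple_roi(savings, migration_cost):
    """ROI as net benefit over the investment required to migrate."""
    return (savings - migration_cost) / migration_cost

on_prem = on_premise_annual_cost(hardware_capex=300_000, years_amortized=3, annual_opex=120_000)
cloud = cloud_annual_cost(hourly_rate=0.50, avg_instances=40)
print(f"on-premise: ${on_prem:,.0f}/yr, cloud: ${cloud:,.0f}/yr")
print(f"first-year ROI: {simple_roi(on_prem - cloud, migration_cost=20_000):.1%}")
```

A real framework would layer the SLA and compliance aspects on top of this raw cost comparison, since a cheaper deployment that misses an SLA or a regulatory requirement has negative value.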
Choosing the right Delivery Models and Workloads
Based on the Enterprise Architecture approach, we need to choose from the many available options of delivery models and workloads. This includes services and consulting engagements to obtain clarity on business drivers (business vision, strategy, timeline, business model, and business operating model) and how they can leverage technology and value enablers from cloud computing. In this cycle you also need to identify the right set of workloads to move to the cloud that fetch the maximum benefits from cloud computing. The flexibility that the business operating model gains to innovate on the business model is another key consideration. This can be an iterative effort of identifying candidates and then gradually moving them to production.
One of the biggest challenges in utilizing cloud computing in your organization is deciding where to start and how to focus your efforts. IBM provides a Cloud Adoption Advisor to get started on the topic. The Open Group has also published a whitepaper on building return on investment from cloud computing.
Key Benefits from Service Strategy
- Innovation – Dramatically improve business value and IT’s effect on time-to-market by enabling business workloads to be deployed rapidly and accurately on multiple platforms, when and where they are needed.
- Reduced operational expenses – Gain productivity increases in IT labor costs through automation of rapid provisioning.
Chapter 9 – Cloud Service Design
Once you have installed and set up your management platform, you are ready to start designing and delivering cloud services using it.
SOA & Cloud
We use the same principles of Service-Oriented Modeling and Architecture (SOMA), which links business intent with its realization through IT, for cloud services modeling as well.
In SOA, we use business process models to understand a series of sequentially organized business activities: the events that trigger them, the roles that perform them, inputs, outputs, control points, and so on. As discussed in the Service Strategy section, we look to design cloud services that are better aligned to business requirements.
As in SOA, for service identification and design one could take any of the following approaches:
- Top-down
- Bottom-up
- Meet-in-the-middle
In a top-down approach, development usually starts with high-level business and structural modeling of the service. Then you also define the management processes that are required for the service to be in operation. The top-down approach is further characterized in that no, or only a few, automation or fulfillment assets exist when starting the solution design. The design and implementation of those assets, including their interfaces and granularity, will be driven primarily by the high-level automation model. The advantage of the top-down approach is a clear design of the service to be automated, including the structural and operational model.
The bottom-up approach is usually characterized by a large number of automation assets that already exist, perhaps in the form of many existing scripts or workflows. In the bottom-up approach, we take these low-level assets and abstract them as a cloud service.
In practice, we might go with a combination of the two, the meet-in-the-middle approach.
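As a toy illustration of the bottom-up direction, the sketch below wraps a set of pre-existing scripts behind a single service operation. The script names and the wrapper class are hypothetical, not TSAM artifacts; real legacy assets would be actual shell scripts or workflows.

```python
# Bottom-up sketch: wrapping existing low-level automation scripts
# behind one cloud-service interface. The "scripts" here are echo
# commands standing in for real legacy assets.
import subprocess

class ProvisionVMService:
    """Abstracts a pile of existing scripts as one service operation."""

    STEPS = [
        ["echo", "allocate-storage"],   # stand-in for a legacy storage script
        ["echo", "create-vm"],          # stand-in for a legacy VM script
        ["echo", "configure-network"],  # stand-in for a legacy network script
    ]

    def provision(self):
        results = []
        for step in self.STEPS:
            # each legacy script becomes one step of the abstracted service
            out = subprocess.run(step, capture_output=True, text=True, check=True)
            results.append(out.stdout.strip())
        return results

print(ProvisionVMService().provision())
```

The value of the abstraction is that consumers call one well-defined service operation while the low-level assets stay reusable underneath, which is exactly the bottom-up pattern described above.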
We model the service so we can learn, capture, and abstract details about “things,” their structures, the relationships between them and, often, their behaviors (collaborations, states). All the factors that we consider when modeling a service in SOA are very much applicable to a cloud service too. These include, but are not limited to, the service portfolio (in the case of cloud, often referred to as the service catalog).
ABCs of Service Design for Clouds by David Linthicum is a good article that discusses where SOA meets cloud.
Service Management & Cloud
Now let's discuss the same from the Service Management/ITIL perspective. Cloud services have a lifecycle that maps to the ITIL service lifecycle. The Service Design phase includes the service definition, creation of the service, and registering it in a catalog. We will look at how these can be done using Tivoli Service Automation Manager in the next chapter.
Service Design is a critical step that delivers the service with agreed and well-understood qualities:
- Expenses follow the level of value creation
- Investments follow business demand and revenue generation
Cloud Computing is a term that is often bandied about the web these days and often attributed to different things that -- on the surface -- don't seem to have that much in common. So just what is Cloud Computing? I've heard it called a service, a platform, and even an operating system. Some even link it to such concepts as grid computing -- which is a way of taking many different computers and linking them together to form one very big computer.
A basic definition of cloud computing is the use of the Internet for the tasks you perform on your computer. The "cloud" represents the Internet.
Cloud Computing is a Service
The simplest thing that a computer does is allow us to store and
retrieve information. We can store our family photographs, our favorite
songs, or even save movies on it. This is also the most basic service
offered by cloud computing.
Flickr is a great example of cloud computing as a service. While Flickr started with an emphasis on sharing photos and images, it has emerged as a great place to store those images. In many ways, it is superior to storing the images on your computer.
First, Flickr allows you to easily access your images no matter where you are or what type of device you are using. While you might upload the photos of your vacation to Greece from your home computer, you can easily access them from your laptop while on the road or even from your iPhone while sitting in your local coffee house.
Second, Flickr lets you share the images. There's no need to burn them to a compact disc or save them on a flash drive. You can just send someone your Flickr address.
Third, Flickr provides data security. If you keep your photos on your local computer, what happens if your hard drive crashes? You'd better hope you backed them up to a CD or a flash drive! By uploading the images to Flickr, you are providing yourself with data security by creating a backup on the web. And while it is always best to keep a local copy -- either on your computer, a compact disc or a flash drive -- the truth is that you are far more likely to lose the images you store locally than Flickr is to lose your images.
This is also where grid computing comes
into play. Beyond just being used as a place to store and share
information, cloud computing can be used to manipulate information. For
example, instead of using a local database, businesses could rent CPU
time on a web-based database.
The downside? It is not all clear skies and violin music. The major drawback to using cloud computing as a service is that it requires an Internet connection. So, while there are many benefits, you'll lose them if you are cut off from the Web.
Cloud Computing is a Platform
"The web is the operating system of the future." While that's not exactly true -- we'll always need a local operating system -- this popular saying really means that the web is the next great platform.
What is a platform? It is the basic structure on which applications stand. In other words, it is what runs our apps. Windows is a platform. The Mac OS is a platform. But a platform doesn't have to be an operating system. Java is a platform even though it is not an operating system.
Through cloud computing, the web is becoming a platform. With trends such as Office 2.0, we are seeing more and more applications that were once the province of desktop computers being converted into web applications. Word processors like Buzzword and office suites like Google Docs are slowly becoming as functional as their desktop counterparts and could easily replace software such as Microsoft Office in many homes or small businesses. But cloud computing transcends Office 2.0 to deliver applications of all shapes and sizes, from web mashups to Facebook applications to web-based massively multiplayer online role-playing games.
With new technologies that help web applications store some information
locally -- which allows an online word processor to be used offline as
well -- and a new browser called Chrome to push the envelope, Google is a major player in turning cloud computing into a platform.
Cloud Computing and Interoperability
A major barrier to cloud computing is the interoperability of
applications. While it is possible to insert an Adobe Acrobat file into a
Microsoft Word document, things get a little bit stickier when we talk
about web-based applications.
This is where some of the most attractive elements of cloud computing -- storing the information on the web and allowing the web to do most of the 'computing' -- become a barrier to getting things done. While we might one day be able to insert our Google Docs word processor document into our Google Docs spreadsheet, things are a little stickier when it comes to inserting a Buzzword document into our Google Docs spreadsheet.
Setting aside for a moment that Google probably doesn't want you to have the ability to insert a competitor's document into their spreadsheet, this creates a ton of data security issues. So not only would we need a standard for web 'documents' to become web 'objects' capable of being generically inserted into any other web document, we'll also need a system to maintain a certain level of security when it comes to this type of data. Possible? Certainly, but it isn't anything that will happen overnight.
What is Cloud Computing?
This brings us back to the initial question. What is cloud computing? It is the process of taking the services and tasks performed by our computers and bringing them to the web.
What does this mean to us?
With the "cloud" doing most of the work, this frees us up to access the
"cloud" however we choose. It could be a super-charged desktop PC
designed for high-end gaming, or a "thin client" laptop running the
Linux operating system with an 8 gig flash drive instead of a
conventional hard drive, or even an iPhone or a Blackberry.
We can also get at the same information and perform the same tasks whether we are at work, at home, or even at a friend's house. Not that you would want to take a break between rounds of Texas Hold'em to do some work for the office -- but the prospect of being able to do it is pretty cool.
Cisco’s apparently going to try to simplify its sales, services and engineering organizations in the next 120 days
By Maureen O'Gara
Faced with a nasty loss of credibility, a string of poor financial results, shrinking market share in its core business, an unwieldy and alienating bureaucracy blamed for the top executive exodus it's been experiencing, and a stock price that's plunged into the toilet, Cisco, once an economic bellwether, is promising to do more than simply kill off its once-popular Flip video camcorder business and lay 550 people off, an admission that its foray into the consumer segment had largely failed.
It said in a press release issued Thursday morning that it's moving to a "streamlined operating model" focused on five areas, not apparently the literally 30 different directions it's been going in, although it did say, come to think of it, something about "greater focus" so maybe it's not really cutting back.
These focus areas are, it said, "routing, switching, and services;
collaboration; data center virtualization and cloud; video; and
architectures for business transformation."
Nobody seems to know what that last one is and the Wall Street
Journal criticized Cisco for not being able to explain in plain English
what it's doing and Barron's complained that it needed a Kremlinologist
to decrypt the jargon in the press release.
Anyway, Cisco's apparently going to try to simplify its sales, services and engineering organizations in the next 120 days, or by July 31 when its next fiscal year begins. Well, maybe not everything, it warned, but sales ought to be reorganized by then.
This streamlining seems to mean that:
- Field operations will be organized into three geographic regions for faster decision making and greater accountability: the Americas; EMEA; and Asia Pacific, Japan and Greater China, still under its sales chief;
- Services will follow key customer segments and delivery models, still under its multi-tasking COO Gary Moore;
- Engineering, still reporting to Moore, will now be led by
two-in-a-box Pankaj Patel and Padmasree Warrior and aside from the
company's five focus areas there will be a dedicated Emerging Business
Group under Marthin De Beer focused on "select early-phase businesses"
"with continued focus on integrating the Medianet architecture for video
across the company."
- Lastly, it's going to "refine" - but apparently not dismantle - its hydra-headed, decision-inhibiting Council structure blamed for frustrating and running off key talent - down to three "that reinforce
consistent and globally aligned customer focus and speed to market
across major areas of the business: Enterprise, Service Provider and
Emerging Countries. These councils will serve to further strengthen the
connection between strategy and execution across functional groups.
Resource allocation and profitability targets will move to the sales and
engineering leadership teams which will have accountability and direct
responsibility for business results."
It's unclear whether any of this means layoffs.
Cisco piped in a quote credited to Moore, saying: "Cisco is focused on
making a series of changes throughout the next quarter and as we enter
the new fiscal year that will make it easier to work for and with Cisco,
as we focus our portfolio, simplify operations and manage expenses. Our
five company priorities are for a reason - they are the five drivers of
the future of the network, and they define what our customers know
Cisco is uniquely able to provide for their business success. The new operating model will enable Cisco to execute on the significant market opportunities of the network and empower our sales, service and engineering organizations."
If you haven’t signed up yet, be sure to check out the October cloud computing for developers virtual event. Participants in this two-day event will learn how to leverage the power of the cloud to tackle the toughest business and technical challenges! The event will be packed with real-world examples and live demos of techniques and products -- and you’ll see it all without leaving your desk. It's going to be exciting to have you all there with us, getting smarter and learning new technical skills to prepare us all for a smarter planet.
Here's some of what's planned for the event. Remember that you can ask as many questions as you wish of our team of experts about any of our sessions.
- IBM technical experts will kick off the event on day 1 with a session on the IBM development and test cloud and you'll see the cloud in action in a live demo. Our experts will discuss use cases and scenarios that will help you as you develop and test in the cloud.
- Next we'll discuss a roadmap on how you and IBM can move your application to pattern-based middleware and why infrastructure-as-a-service alone is not enough to reduce implementation challenges when making the move to software-as-a-service.
- Then you will learn how IBM's new Cast Iron Cloud Integration Platform has helped hundreds of customers just like you connect their cloud and on-premise applications in just days with its 'configuration, not coding' approach. You will see an engaging live ERP to cloud CRM demo.
- The final day 1 session will demonstrate how to efficiently package middleware and/or applications so that they can be easily deployed into dynamic "cloudified" IT infrastructure. Techniques addressed in this session will include Anatomy of an Open Virtual Appliance, OVA repository and lifecycle, single and multi-image OVAs, best practices and examples of OVF.
That's not all, folks; remember we have a full set of sessions on the 2nd day too. Note that you'll have to register separately for day 2.
- We'll start the day off by showing you how solutions such as eXtreme Scale can scale the database layer. You'll also learn how eXtreme Scale and the XC10 help with solution-wide HTTP session management and with the WebSphere Application Server dynamic cache service for page fragments.
- Ever wondered why iSeries may be an ideal platform for cloud computing? The next session will show you how iSeries has been architected for applications that can be delivered in a hosted or SaaS environment, drilling down into the capabilities that make IBM iSeries well suited for SaaS.
- I'm sure you will not want to leave before you hear best practices for designing databases for multitenancy and resiliency which is the topic of the next session. Learn about use cases of AWS and DB2 instances, database schemas as well as a demonstration of setting up HADR in the cloud.
- We'll wrap up with a final session examining some technical considerations associated with building a secure application in a cloud environment and then discuss how they can be addressed with IBM products including DataPower, TFIM, TSIEM and TSPM.
We are giving you a choice. Choose the 2-day event best suited to you depending on where you are in the world. Both events will have very similar sessions. Register for the event that is best timed for North American (October 12-13) or European (October 26-27) time zones.
Visit the IBM Cloud for developers group
to view the agenda and session descriptions, or register here
We are looking forward to learning with you so join us this month to get a little smarter.
Cloud Security – The Topmost Concern and Opportunity
First of all, wishing all my readers a
very happy and prosperous year 2012 ahead.
A few things happened towards the end of the year that were significant to me. IBM acquired Q1 Labs to drive greater security intelligence and created a new Security Division. I also joined this newly formed IBM Security Systems team last quarter as a solution architect for cloud security. This is a great time to be looking at cloud security. I'm happy to be in this new role, where I can provide solutions that help customers handle their cloud security concerns and make it easier for them to adopt cloud and innovate at a faster rate than before.
In my previous post, we discussed security as the topmost concern keeping customers and enterprises from adopting cloud. As part of this year’s posts, I plan to discuss the various security issues and aspects of cloud computing.
We will explore the unique challenges of cloud security and discuss which aspects are important for each customer adoption pattern we have seen. We will also learn how the IBM Security Framework can be used to address the various security challenges, namely governance, risk management and compliance, as well as server and endpoint security.
I look forward to your comments and inputs in this journey of understanding the security requirements for cloud and how we can overcome this major challenge to cloud adoption using the world’s most comprehensive security portfolio – IBM Security Systems. I’ll try to elaborate the IBM point of view on cloud security and discuss the architectural model for addressing the security requirements of cloud. Stay tuned, and keep those comments and inputs coming.
Cloud Service Provider Platform (CSP2)
Until now, we have seen through the earlier posts what the essentials are for creating a cloud environment, which consists of the management platform as well as the managed environment. We have seen the critical roles and organizations involved, as well as the importance of Cloud Service Strategy and Cloud Service Design. We also saw the criticality of a Cloud Computing Reference Architecture (CCRA) to tie all the solution elements together. And we saw how IBM Service Delivery Manager (ISDM), an enterprise cloud solution based on Tivoli Service Automation Manager (TSAM), can be deployed as a set of virtual images that automate IT service deployment and provide resource monitoring, cost management, and provisioning of services in the cloud.
Cloud Service Provider Platform (CSP2) is a carrier-grade cloud offering that contains enhancements over the base ISDM solution to provide a multi-tenant environment that allows both internal and external users to exist on the same cloud and management platforms. IBM's new CSP2 platform provides cloud services, such as desktop management, to support the cloud-based business strategy of communications service providers.
Cloud Service Provider Platform is specifically tailored to the needs of CSPs
and is designed to help them successfully:
- Create cloud services that
harness the strengths of a diverse partner ecosystem and rapidly enable
applications and solutions to extend their market reach.
- Manage cloud services quickly
and easily with an open, carrier-grade, secure, scalable, automated and
integrated service management solution.
- Monetize cloud services by
leveraging business intelligence and analytics to achieve differentiation,
maximize revenue and enhance the customer experience.
Figure 1 IBM Integrated Service
Management Solution for Cloud Service Providers
IBM Cloud Service Provider Platform, an integrated service management solution for cloud service providers, is built around a core service automation and management component provided by ISDM. Beyond the core, IBM’s Integrated Service Management for Cloud Service Providers makes available extensions, such as network management and advanced monitoring and service level management, that enable a comprehensive solution.
Communications service providers (CSPs) around the world are looking for smarter ways of doing business. They are being challenged to transform the way services are created, managed, and delivered. CSP2 neatly integrates and extends the Service Provider Delivery Environment (SPDE) for communication service providers to build the ecosystem to become a cloud service provider. For a cloud-based business strategy, check out the video from Scott on the value of CSP2 for CSPs.
With the recent exploration of cloud computing technologies, organizations are using cloud service models like infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) along with cloud deployment models (public, private and hybrid) to deploy their applications.
There is a concept in the cloud world that is based on application characteristics: the concept of cloud-enabled and cloud-centric applications. In this blog post, Dan Boulia provides a concise explanation about the concept.
You can say that a cloud-enabled application is an application that was moved to cloud, but it was originally developed for deployment in a traditional data center. Some characteristics of the application had to be changed or customized for the cloud. On the other hand, a cloud-centric application (also known as cloud-native and cloud-ready) is an application that was developed with the cloud principles of multi-tenancy, elastic scaling and easy integration and administration in its design.
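To make one of those cloud principles concrete, here is a minimal sketch of multi-tenancy, assuming a simple in-memory store. The names are illustrative, not from Bluemix or any specific product: the idea is that every record and every query is scoped by a tenant identifier, so many customers can safely share one application instance.

```python
# Minimal multi-tenancy sketch: every record and query is scoped by a
# tenant_id, so many customers share one application instance without
# seeing each other's data. Purely illustrative, not a real product API.

class TenantStore:
    def __init__(self):
        self._rows = []  # shared storage for all tenants

    def insert(self, tenant_id, record):
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # the tenant filter is applied on every read, never optional
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = TenantStore()
store.insert("acme", {"doc": "invoice-1"})
store.insert("globex", {"doc": "invoice-9"})
print(store.query("acme"))  # only acme's rows come back
```

A traditional single-tenant application usually has no such scoping baked into its data access layer, which is one reason retrofitting an existing application into a cloud-enabled one rarely behaves like a cloud-centric design internally.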
When developing an application that will be deployed in the cloud, you must keep the cloud principles in mind. They should be taken into account as part of the application. So we come to the first point: Is it better to work within an existing application or to completely redesign it? There is no exact answer because it depends. You have to evaluate the level of effort (labor, time and cost) to transform the application into cloud-enabled versus the effort to completely redesign it to a cloud-centric application.
The second point is: Will my cloud-enabled application work better than a new cloud-centric application? Here I would say no. It’s rare to find an existing traditional application that was developed with any of the cloud principles in mind. It may be possible to construct the same feel (for the user) as a cloud-centric application, but it will not function the same way internally.
Changing an existing application could be easier, since you already have the skills and tools in the organization and you won't need to learn any new technology. However, while it may be easier to change the application, in the long term it will be harder to maintain. New technologies (social media, mobile, sensors) continue to appear, and integrating them is becoming more important. Doing so will require additional, continuous effort and may sharply increase development and support costs.
Now comes the third point: What can you use to help expedite the move or redevelopment of an existing application to a cloud-centric model? Many cloud companies have development tools that can help an organization on this path. For instance, IBM has recently announced IBM Bluemix, a development platform to create cloud-centric applications. Shamim Hossain explains the capabilities in more detail in his blog post. Another option is to use IBM PureApplication System to expedite the development.
I discussed some points here that I hope provide a better understanding of an important concept in cloud computing and how to address it. Let me know your thoughts! Follow me on Twitter @varga_sergio to talk more about it.
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple: you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations that are still trying to capture the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
· Rapidly scalable deployment designed to meet business growth
· Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
· Reduced complexity through ease of use and improved time to value
· Reduced IT labor resources with self-service requesting and highly automated operations
· Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we’ve seen customers get a cloud up and running in just hours—realizing immediate time to value. It’s fast—administrators have been able to go from bare metal to ready-for-work in under five minutes, start a single VM and load an OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
With the new release of IBM SmartCloud Provisioning this week, you can try and see firsthand the potential of this breakthrough technology to accelerate your journey to cloud.
And if you want a preview of what’s in development, you can join our Open Beta program for access to beta-level code.
The use of business intelligence and analytics to make decisions is on the rise in corporate America. According to a Gartner report, the BI and analytics market is expected to grow to $20 billion by 2019. We are already at a stage where over 50% of analysts and users in organizations have access to self-service business intelligence tools. This is not surprising, given recent reports suggesting that such tools make businesses five times more likely to reach decisions faster.
But it’s not just faster decision making that is driving the adoption of business intelligence tools. BI and analytics rely heavily on data to arrive at conclusions. Decisions based on data are likely to be far more predictable and trustworthy than those based on gut feelings or consumer surveys.
Deploying business intelligence at work is, however, much more than simply installing vendor software. BI tools are only as successful as you make them. The most successful businesses are those that see BI as one component of a larger process and culture change within the company. This change requires managers to establish processes that aggregate more data, process it, and actively pursue insights from it to arrive at decisions.
Identify your objectives
The first step towards a successful deployment of BI is understanding your business objectives. A cost reduction project, for instance, would require a system where capital and cash outflow data from all your various warehouses and distribution centers are available at a granular level. On the other hand, if your objective is revenue maximization, then your system will not only need data pertaining to the various SKUs in the market along with their sales and distribution numbers, but also similar data of your competitors. In other words, knowing your business objectives will tell you the kind of data you will need. This will help you establish a system that gathers this data. Without a working system, deploying a BI platform is meaningless.
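As a small illustration of what granular cost data looks like once a gathering system is in place, here is a sketch (the sites, categories, and amounts are invented for illustration, not from any BI product) of rolling per-record outflows up into the per-site totals a cost-reduction view would start from:

```python
from collections import defaultdict

# Granular outflow records: one row per (site, cost category, amount).
records = [
    ("warehouse_a",   "utilities", 1200.0),
    ("warehouse_a",   "labor",     5400.0),
    ("warehouse_b",   "utilities",  900.0),
    ("dist_center_1", "freight",   3100.0),
]

def totals_by_site(rows):
    """Aggregate granular outflows into one total per site -- the kind
    of roll-up a cost-reduction dashboard is built on top of."""
    totals = defaultdict(float)
    for site, _category, amount in rows:
        totals[site] += amount
    return dict(totals)

print(totals_by_site(records))
# warehouse_a: 6600.0, warehouse_b: 900.0, dist_center_1: 3100.0
```

A real deployment would feed thousands of such records from warehouses and distribution centers into the BI platform, but the principle is the same: the objective (cost reduction) dictates the grain of the data you must collect.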
Pick the right tool
The success of a BI deployment depends to a great extent on the software tools that you use. However, the best BI tool in the market may not be the right one for your business. Your evaluation should include various parameters like the cost of the tool, the size of business it is targeted at and the nature of deployment. According to this list of BI software tools, there are over 339 products available in the market today. While tools like Slemma BI and Grow BI are targeted at the price-sensitive customer, others like Rapid Insight focus on businesses that prefer Windows local installation. Pick a tool that meets not only all your feature needs, but also works within your budget and deployment requirements.
Effect a data-driven culture
Business intelligence is one of the best examples of the popular computing phrase ‘garbage in, garbage out’. In other words, incorrect or insufficient input produces incorrect or insufficient output. The only way to break this cycle is to drive a cultural change within the organization that focuses on gathering data at every level and bringing it together for the decision-making process. This is not easy to achieve, especially if you are an enterprise business with hundreds or thousands of employees. The lead, however, needs to come from the top, and this push for data-driven decision making is crucial to the success of a BI deployment.
Do you make use of business intelligence software at work? Share your experience with us in the comments.
Dubuque, Iowa and IBM Combine Analytics, Cloud Computing and Community Engagement to Conserve Water
DUBUQUE, Iowa, - 20 May 2011: The City of Dubuque and IBM (NYSE: IBM) today announced that the IBM analytics and cloud computing technology deployed in 2010 by Dubuque as part of its Smarter Sustainable Dubuque research helped reduce water utilization by 6.6 percent and increased leak detection and response eightfold.
The Smarter Sustainable Dubuque Water Pilot Study empowered 151 Dubuque households with information, analysis, insights and social computing around their water consumption for nine weeks. By providing citizens and city officials with an integrated view of water consumption, the Water Pilot resulted in water conservation, an increased leak-reporting rate, and behavior changes.
Water savings were measured by comparing the consumption of the 151 pilot households with another 152 control group households with identical smart meters but without the access to the analysis and insights provided by the Water Pilot Study for the nine-week duration.
The smart meter system monitored water consumption every 15 minutes and communicated the readings to the IBM Research Cloud. Additional data was collected, including weather, demographics, and household characteristics. Using cloud computing, the data was analyzed to trigger notification of potential leaks and anomalies and to help volunteers understand their consumption in greater detail. Volunteers could view only their own consumption habits, while city management could see the aggregate data. All participating homes were volunteers, and the data collected was anonymous and contained no confidential information.
Participating households were alerted about potential anomalies and leaks and were able to gain a better understanding of their consumption patterns and to compare and contrast them anonymously with others in the community. Pilot study participants accessed their personal water usage information through a website portal and took part in online games and competitions aimed at promoting sustainable behavior, enabling them to become fully engaged and informed about their consumption and the impact of the changes they made to it. Participants were able to see their data expressed in dollar savings, gallon savings and carbon reduction.
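A leak check of the kind the pilot describes can be sketched very simply; the heuristic below is illustrative only, not IBM's actual analytics. With readings every 15 minutes, a household whose flow never drops to an idle level across a window of readings probably has a leak, since most homes have idle periods:

```python
def probable_leak(readings_gal, min_idle=0.0):
    """Flag a probable leak if flow never drops to the idle level across
    a window of 15-minute readings (96 per day): continuous nonzero flow
    suggests water is running somewhere unattended."""
    return len(readings_gal) > 0 and min(readings_gal) > min_idle

normal_day = [0.0, 3.2, 0.0, 7.5, 0.0]   # idle periods present
leaky_day  = [0.4, 3.6, 0.4, 7.9, 0.4]   # constant baseline flow

print(probable_leak(normal_day))  # False
print(probable_leak(leaky_day))   # True
```

In a deployment like Dubuque's, a rule of this shape would run centrally over each household's stream and trigger the notifications the article describes.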
More specific results include:
Ecosystem development cloud France will receive IBM business partners on March 5th 2015.
All information is available on the following site: http://www-01.ibm.com/software/fr/channel/KO_BP2015/
The agenda is described here: http://www-01.ibm.com/software/fr/channel/KO_BP2015/agenda.html and the workshops here: http://www-01.ibm.com/software/fr/channel/KO_BP2015/ateliers.html
We will be happy to welcome you for face to face discussions.
Alain Airom (cloud solution architect).
Sweating is an involuntary process our body performs through the skin to regulate body temperature. Technically known as perspiration, it is the release of salt-based fluids from the sweat glands located under the skin.
While sweating is an essential natural process, excessive sweating leads to social problems and sometimes embarrassment in public settings. Hence arose the need for an artificial process that could control excessive sweat, which gave birth to Iontophoresis.
The process of Iontophoresis was brought into active treatment in the early 1940s. Since then, the process has evolved considerably. Iontophoresis is an advanced treatment for patients suffering from excessive sweating, scientifically known as hyperhidrosis. With an impressive success rate, typically ranging from 65% to 98.5% in eliminating excessive sweat from the targeted parts of the body, Iontophoresis is considered safe and reliable.
Working Of Iontophoresis
Iontophoresis uses purified water as a medium to conduct a mild electrical current to the sweat glands under the skin. The process works by passing charged particles through to the sweat glands, temporarily disrupting their function and thereby reducing excessive sweat.
Areas That Can Be Treated With Iontophoresis
Several body parts can be treated with desired results using Iontophoresis. Typically, Iontophoresis enjoys a success rate of 98.5% for the feet and around 65-75% for the armpit and underarm area. Other body parts that can undergo Iontophoresis include the face, forehead and scalp.
How Is Iontophoresis Performed?
Major factors on which the success of Iontophoresis depends include:
- Area of sweating
- Type of machine used
- Degree of sweating
- Previous results, if applicable.
The following procedure is usually followed while undergoing Iontophoresis
- The area to be treated is immersed directly in water or a mild solution containing ions and chemicals. Generally, tap water is used because of its high level of mineral content.
- A mild electrical current is applied to the solution for a duration of about 15-30 minutes.
- A similar procedure is repeated for every session that the patient is advised to undergo.
Utmost care is taken to conduct Iontophoresis in an environment where the likelihood of sweating is at its minimum.
Side Effects Of Iontophoresis
As mentioned earlier, Iontophoresis, by the nature of the treatment, is one of the safest options. But there are exceptions to all good things. Some of the side effects of Iontophoresis include:
- Open cuts, if any, in the treatment area may cause discomfort when the current is applied; necessary precautions should be taken.
- In some cases, redness forms along the water line, which fades away within a couple of days.
- Some patients may develop dry skin, leading to skin irritation.
Patients Who Should Avoid Iontophoresis
Though Iontophoresis is a safe and reliable treatment, certain categories of patients must avoid undergoing it. They include pregnant women, patients fitted with a pacemaker, patients with metal implants, and patients suffering from epilepsy.
FDA-Approved Devices For Iontophoresis
For patients’ convenience, FDA-approved devices are available on the market that can be used for Iontophoresis at home.
- PSP-1000 Iontophoresis Device Package
A powerful package, this device has the highest success rate. More than 20,000 units have been sold to date, attesting to its popularity. An FDA-approved device, it is also the most expensive home-use device for Iontophoresis.
- Iontophoresis Unit MD-1a
Though the design of this device is plain, the unit is powerful enough to provide the desired results. The device is expensive, yet it is one of the more popular FDA-approved devices in use.
Alternative To Iontophoresis
What if Iontophoresis fails to treat your concern? There are other ways to reduce excessive sweat. These approaches use additional substances to enhance or strengthen the process of Iontophoresis. Some of these enhancements include:
- Adding Sodium Bicarbonate to the Iontophoresis mixture.
- Adding additional chemicals to tap water to enhance conductivity.
- Taking additional sweat-control tablets alongside Iontophoresis sessions.
The passages above give a broad overview of the process of Iontophoresis. Consider this only a general guide, and be sure to seek the opinion of a specialist before undergoing Iontophoresis.