With December's release of IBM SmartCloud Monitoring, Tivoli's venerable IBM Tivoli Monitoring product family, proven in data centers at the world's largest corporations, begins to adopt a "Cloud" posture. Sure, "Cloud" is a term bereft of a clear operational definition that we can apply at any given moment, and customers, analysts, and vendors tend to bandy it about pretty freely these days. However, if we don't get too hung up on what Cloud is or isn't, we can probably agree that it represents a migration from our traditional server-delivered infrastructure to one composed of pooled computing resources shared by virtual workloads. Whether or not our customers are calling their virtualized environments "private clouds" today, and whether or not they've got a "cloud budget" that they're using for such initiatives, the fact that they're moving along the cloud maturity continuum at some pace seems inescapable, given IDC's assertion that we crossed the magical "50%" boundary last year, when half of all corporate workloads were running on virtual machines instead of physical ones.
If we're beginning to think in terms of clouds of pooled computing resources, it makes sense that we begin to deliver management solutions in the same way, right? If the server administrators, storage administrators, and network administrators now report to a cloud administrator, we should begin to package solutions for those cloud administrators, combining multiple pieces of management technology into a single part number that customers can purchase and deploy. That's exactly what we've done with SmartCloud Monitoring. The discrete monitoring agents at the heart of IBM Tivoli Monitoring (OS monitors, application monitors, storage monitors, and so on) are as important as they ever were. Even though we're pooling those resources across virtual machines, we still have to monitor things like processes, CPU activity, and I/O throughput. We just need to add a layer on top of all that granular detail, so the cloud administrator can see, at a glance, what's healthy or unhealthy about his cloud environment, before drilling down into the nuts and bolts.
SmartCloud Monitoring combines the VMware virtualization management features of ITM for Virtual Environments with virtual machine instance monitoring from ITM's operating system agents, to monitor a cloud infrastructure and the workloads running on it.
Our roadmap looks like an analyst's cloud maturity ladder, adding features such as automated provisioning, usage and accounting integration, and more detailed network monitoring, so our solution will "mature" along with the market and customers' needs. See if the challenges along this ladder look like things that you or your customer have faced on the cloud journey, or are grappling with now. It's important to note that Tivoli has solutions that can be applied to each step, and to each problem. What SmartCloud promises is a way to bring those solutions together into more consumable bundles, tightly integrated, to make cloud management simple to purchase and simple to deploy.
SmartCloud Monitoring delivers key capabilities for optimizing and maintaining a private cloud, including:
- Health dashboards, to provide an instant, consolidated glimpse into cloud health
- Topology views of the key interrelated components of the cloud
- Reports on the health trends of cloud components and workloads, powered by Cognos
- What-if capacity planning scenarios
- Policy-based optimization to put workloads where they’ll perform best, not just where they’ll fit
- Performance Analytics for right-sizing of virtual machines
- Integration with industry-leading Tivoli service management portfolio
I'm a big fan of standardization. I'm a big fan of using non-persistent images as well. They just make my life so much easier.
The only issue I see with them is that you still need to give the end user some configuration and customization possibilities.
It could be something trivial, like having your own screen saver or a special keyboard and language configuration, or it could be
something like connecting the software inside the image to specific devices, disks, or additional external software.
I don't even want to think about keeping a master image for each possible situation. This would simply make my image catalog
so uselessly big that it would quickly become unmanageable. Not to mention all the possible issues I could have when I
need to upgrade a master image: I would have to do it for every customized master image derived from that one.
I would lose all the advantages of dealing with a cloud of non-persistent images, merely shifting the issue from the virtual machine
instances to the master images themselves.
A possible solution would be to have the user reconfigure his VM every time he starts it: unbearable! ...especially if you think about
complex software stacks.
I found the solution included in IBM SmartCloud Provisioning interesting. What you can do with it is allow the end user to specify
a set of configuration parameters at image deployment time, so that the image is automatically configured accordingly at boot time.
The idea under the covers is pretty simple: the image builder inserts into the master image a script that is run at system boot.
The script parses the information passed by the end user at VM deployment time and takes the needed actions, such as
reconfiguring the operating system or a specific piece of software.
All the information entered by the end user in the web user interface is saved on the compute node and then injected back into the deployed instance.
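As a sketch of how such a boot-time hook might work, here is a hypothetical first-boot script. To be clear, this is not the actual SmartCloud Provisioning script: the file path, parameter names, and commands below are all assumptions made for illustration.

```python
#!/usr/bin/env python
# Hypothetical first-boot script: read the parameters that the
# provisioning engine injected into the instance and apply them.
import os
import subprocess

PARAMS_FILE = "/etc/cloud-params.conf"  # assumed injection path

def read_params(path):
    """Parse a simple key=value file, skipping blanks and # comments."""
    params = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            params[key.strip()] = value.strip()
    return params

def apply_params(params):
    # Illustrative actions only; a real script would handle whatever
    # the image builder chose to expose (hostname, keymap, mounts, ...).
    if "hostname" in params:
        subprocess.call(["hostnamectl", "set-hostname", params["hostname"]])
    if "keymap" in params:
        subprocess.call(["localectl", "set-keymap", params["keymap"]])

if __name__ == "__main__" and os.path.exists(PARAMS_FILE):
    apply_params(read_params(PARAMS_FILE))
```

The image builder would bake a script like this into the master image and register it to run at boot; the compute node writes the user's deployment parameters to the agreed file before the instance starts.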
If you are worried that end users might be reluctant to type information in a specific format (one option is to let them
deal with free text, but then you'll go mad parsing it) and that the process could be error prone, consider that if you use the Image
Construction and Composition Tool (an optionally installable component of IBM SmartCloud
Provisioning), the web UI is automatically modified to show the end user the parameters you want him to enter.
Of course, if you are a lazy end user and do not want to type in that information or remember it (especially if you deploy frequently),
you can put your input parameters in a file and use the command line to deploy the image, passing the file as one of the deployment parameters.
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple—you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations that are still trying to leverage the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
- Rapidly scalable deployment designed to meet business growth
- Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
- Reduced complexity through ease of use and improved time to value
- Reduced IT labor resources with self-service requesting and highly automated operations
- Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we’ve seen customers get a cloud up and running in just hours—realizing immediate time to value. It’s fast—administrators have been able to go from bare metal to ready-for-work in under five minutes, start a single VM and load its OS in under 10 seconds, or scale up to 50,000 VMs in an hour (on 50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
With the new release of IBM SmartCloud Provisioning this week, you can try and see firsthand the potential of this breakthrough technology to accelerate your journey to cloud. And if you want a preview of what’s in development, you can join our Open Beta program for access to beta-level code.
As of December 9th, 2011, IBM SmartCloud Provisioning 1.2 is available for download.
The key features introduced in this release are:
Full product install through an interactive tool:
IBM SmartCloud Provisioning can now be installed using a graphical wizard. The installer comes in two flavours: minimal and custom. The custom installation lets you specify the number of HBase and ZooKeeper instances to deploy, and can automatically configure ESXi servers as compute nodes. The creation of the management virtual image on VMware is automated.
Support for multiple networks:
You can now deploy images with more than one NIC, and different users can deploy images in segregated networks.
Integration of the Image Construction and Composition Tool:
The Image Construction and Composition Tool helps you build and customize master images. It is designed to facilitate a separation of concerns and tasks, where experts build software bundles for reuse by others. This design approach greatly reduces the complexity of virtual image creation and reduces errors.
Support for the Open Virtualization Format (OVF):
- OVF images can be created or modified by the Image Construction and Composition Tool
- OVF metadata can be displayed and modified in the Self Service UI
Integration of the Virtual Image Library component:
The Virtual Image Library helps you manage the life cycle of virtual images:
- Search images for specific software products
- Compare two images and determine the differences in files and products
- Find similar images
- Track image versions and provenance
The cloud administrator can use a brand new UI to perform tasks like registering images, registering networks, managing quotas, assigning roles, and managing elastic IPs.
The IBM® Image Construction and Composition Tool is a web application that simplifies and automates virtual image creation for public and private cloud environments, shielding the differences in cloud implementations from its users.
This white paper provides Software Specialists and other product experts with helpful tips and techniques to plan, design, and create software bundles in the Image Construction and Composition Tool.
The placement of the DBMS in Cloud solutions based on Tivoli Provisioning Manager (TPM), Tivoli Service Automation Manager (TSAM), or IBM Service Delivery Manager (ISDM) plays a significant role in overall product function and performance, and in how these evolve with the workload.
A typical setup approach is to install TPM/TSAM with the DBMS co-located.
This is the default setup option in the TSAM installation and TSAM-VM-image which is included in the ISDM solution.
Over time, based on increasing workload, capacity planning, or production requirements, it may be desirable to move the local database to a remote node, with the goal to achieve greater scale and to exploit additional resources.
A white paper
is available for this purpose in the Integrated Service Management Library.
The referenced paper has been recently updated to version 2.4, and describes how to relocate the DBMS in existing TPM / TSAM / ISDM solutions.
A very interesting Cloud Computing case study of the Capgemini Infrastructure as a Service delivery platform project has been recently published on the Web:
The case study shows how one of the world’s leading infrastructure outsourcing providers has seen the business opportunity of offering to its clients a cloud-based solution that combines the benefits of a high-value infrastructure service provider with the cost advantages of Cloud computing. Capgemini focused the new cloud based services on delivering to their clients Infrastructure as a Service capabilities with much higher flexibility and substantial cost-efficiency.
In partnership with IBM, Capgemini built a fully integrated cloud delivery platform for clients in the UK and USA, leveraging the Tivoli Service Delivery Manager solution, which includes the IBM Tivoli Service Automation Manager, Tivoli Monitoring, and Tivoli Usage Accounting Manager products, on top of IBM BladeCenter HS22V hardware and XIV Storage System technologies.
The key aspects of the solution built by Capgemini have been:
- Implementation of a resilient and scalable global infrastructure with capability of managing resource pools in different regions and with a modular design for quick scale out
- A single solution that can manage a wide range of platforms and architectures without being tied to any specific hardware technology or vendor, with the ability to choose the right hypervisor and guest OS platforms for the right workload
- Multi-Customer shared infrastructure providing secure network separation between customer environments
- Automation of network management and configuration that supports multiple network domains per customer and linkage to the customers' private networks
- Extensible service catalog to fit the needs of the Capgemini customers
- Ability to quickly on-board existing Capgemini customer workloads.
IBM® Tivoli® Service Automation Manager (TSAM) has delivered yet another cloud extension, providing service offerings that automate the provisioning of network attached storage (NAS) with an NFS export name. The file systems can then be mounted into virtual machines provisioned within TSAM Virtual Servers Projects. The extension introduces the concept of a Storage-only Project, which allows you to manage the entire life cycle of the file systems (create, expand, set access, and destroy) in a secure multi-tenant environment. It integrates with IBM N series and NetApp FAS series storage systems, as sketched in the picture below.
Once you download the installation package from the Integrated Service Management Library (http://www.ibm.com/software/ismlibrary?NavCode=1TW10TS0F) and install it on top of the TSAM 7.2.2 platform, your cloud administrator can easily configure the Extension for Network Attached Storage to provision NFS-mountable file systems. The extension provides a plug-in to the Cloud Storage Pool Administration application in TSAM, where she can enter the hostname of the workstation running the NetApp OnCommand management software, and the credentials to access it. The extension then automatically discovers all the storage resources (NetApp Datasets) from the underlying storage systems and makes them visible as TSAM Storage Pools. At that point the cloud administrator can regulate access to the storage resources in the usual TSAM way, by associating storage pools and quotas with customers, and that's it: the extension is configured. Now you can delegate the management of storage, up to the assigned quota, to your customers: the customer administrators can start requesting storage for their virtual servers by creating storage projects and adding, expanding, and deleting file systems. The entry point for this is the Tivoli Self Service Station – Storage Management folder (shown in the picture below).
The Create Storage Project offering brings a simple user interface for requesting file systems and assigning them to teams of users (see the example pictures below).
The customer administrator has to enter a prefix for the NFS export name, a TSAM Storage Pool from which to carve the storage, and the size of the file system; that's it. She can decide to create many file systems with the same characteristics by increasing the value of the “Number” spin control. She can decide to make the file systems available to all the teams of the customer by checking the “Access to All Teams” box: by default the storage is only visible to the team of users that owns the storage.
Note that once the storage project has been created, the file systems cannot yet be mounted into virtual servers, because no ACL has been set for them on the IBM N series boxes. To do so, the customer administrator creates TSAM Projects with Virtual Servers and associates file systems with the virtual machines belonging to the project: the extension automatically updates the access control list (ACL) of the NFS export name, adding the IP addresses of the virtual machines. When the user logs in, she can mount the file systems and use them (she gets the NFS export name in a notification e-mail).
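Conceptually, the ACL update amounts to appending each new VM's IP address to the export's client list. The real extension drives the N series / NetApp management interface; the toy version below, which edits an /etc/exports-style entry, is purely illustrative (the function name and mount options are assumptions):

```python
def grant_access(export_line, vm_ip):
    """Append vm_ip to the client list of a single exports-style entry.

    export_line looks like "/vol/proj1 10.0.0.5(rw,sync)"; the call is
    idempotent, so re-adding an already-authorized IP is a no-op.
    """
    path, _, clients = export_line.partition(" ")
    entries = clients.split() if clients else []
    if not any(e.startswith(vm_ip + "(") for e in entries):
        entries.append("%s(rw,sync)" % vm_ip)
    return "%s %s" % (path, " ".join(entries))
```

For example, `grant_access("/vol/proj1 10.0.0.5(rw,sync)", "10.0.0.9")` yields `"/vol/proj1 10.0.0.5(rw,sync) 10.0.0.9(rw,sync)"`, which mirrors what the extension does when a file system is associated with a new virtual machine.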
In summary, the predefined functions that you get with the TSAM Extension for NAS storage are:
- Service offerings for managing the entire life cycle (create, expand, destroy, set access) of shared file systems accessible over the NFS protocol
- A service offering for authorizing virtual servers to mount storage
- An administrative graphical user interface for discovering NetApp Datasets into TSAM Storage Pools and restricting usage by customer
There are no predefined features to create and manage NetApp Datasets, nor vFilers to create customer silos. For example, what if you want to automate the creation of a vFiler and of a couple of storage pools, gold and silver, upon on-boarding of a new customer? There are no predefined features to authorize the shared file systems to anything but a virtual server within a virtual servers project. What if you want to automatically attach a file system to a VMware cluster as a backend datastore for VM images upon creation in a storage project?
Well, the TSAM Extension for NAS storage provides low-level Tivoli Provisioning Manager (TPM) Workflows and Tivoli Platform Automation engine (TPAe) Runbooks that can be used to implement such automations in custom extensions, which you can write based on the best practices described in the TSAM platform extensibility guide.
Most generally accepted definitions of Cloud Computing imply the notion of pay per use. For a Service Provider this means defining how they intend to bill for Cloud Services, while for a cloud-enabled data center in the enterprise it implies some form of showback/chargeback model. As for the consumers actually using the Cloud, they want to understand the financial implications (what will it cost?) before committing their workloads to it.
As a Cloud User
- Do you want to see what your project will cost before you provision it?
- See a price list for all the services you can provision - comparing prices for different options?
- Use a calculator to help you predict what a project will cost per month (or day or year)?
- See what the effect of changing the resources used by a project will do to the cost?
As a Cloud Provider
- Do you want to define different prices for a Service depending on the options that the user chooses?
- Set different prices for each service for different customer groups?
The following screenshots illustrate how the new cloud cost management capability delivers solutions to these problems. The new TSAM Extension for Usage and Accounting is available to download now via the ISM Library.
See the prices for the different Cloud offerings and compare different options
The first dropdown in the view shown below shows the offerings that are available to the customer.
Offerings can be anything the Cloud provider chooses to make available, for example virtual servers, storage, or even PaaS or SaaS offerings. The consumer can see up front what the different rates are for each component, and compare these across different offering types.
See what it would cost per month to run a new project in the Cloud
In this example, we want one machine to run an application server and one machine to run a database, and we need additional Tier 1 storage to store the database data. The calculator shows how much this will cost per month, both overall and in terms of the two Service Offerings that this particular Cloud provides.
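The arithmetic behind such a calculator is straightforward: multiply each resource quantity by its per-unit monthly rate and sum. A minimal sketch, with made-up rate names and prices (not the extension's real rate cards):

```python
# Illustrative per-unit monthly rates; names and prices are assumptions
# invented for this sketch, not actual IBM pricing.
RATES = {
    "vm.medium": 90.00,     # price per VM per month
    "storage.tier1": 0.50,  # price per GB per month
}

def monthly_cost(resources):
    """resources maps a rate key to a quantity (VM count, GB, ...)."""
    return sum(RATES[item] * qty for item, qty in resources.items())

# Two medium VMs (application server + database) and 200 GB of Tier 1
# storage: 2 x 90.00 + 200 x 0.50 = 280.00 per month.
project = {"vm.medium": 2, "storage.tier1": 200}
```

The same sum can be broken down per offering, which is exactly the overall-versus-per-offering view the calculator screenshot shows.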
Different customers can be assigned to different subscriptions
A subscription is a means to segment your customers into different groups, such as by geography or customer type (direct, business partners, etc.).
In this example, the RATIONAL and TIVOLI customers are assigned to the US (United States) subscription. Customers with this subscription share the same set of available offerings and pay the same price for those offerings.
Offerings are defined once and then added to Subscriptions
Once they are part of a subscription, the actual rate values (price per unit) can be defined for each element of the offering template.
If you wish to join the TUAM group
to get more involved in reviewing new features and testing beta capability, then let me know and I can send you an invite.
If you are interested in attending our daily demo sessions (see https://www.ibm.com/developerworks/mydeveloperworks/blogs/9e696bfa-94af-4f5a-ab50-c955cca76fd0/entry/new_schedule_and_agenda_for_daily_demo_sessions_of_ibm_smartcloud_provisioning2?lang=en) but:
- you do not feel comfortable with our schedule
- you would like to discuss with us functionality that is not covered by the current agenda
- you would like to join an exclusive usability session fully dedicated to you
please post your request on the open beta forum.
I recently found this article which discusses the rationale for cloud adoption: http://www.tmcnet.com/usubmit/2011/10/31/5893685.htm. One factor listed is capacity management - "Users are considering cloud for capacity
management issues including periodic demand peaks and better management
of data center growth, power, and cooling issues." This statement speaks to the maturity of the clients surveyed who are considering cloud, in that they have the visibility into their virtual environment to understand workload usage trends, such as predicting peaks and projecting the growth of data center resource consumption. In other words, before clients can leverage cloud to add capacity for periodic demand peaks, they first must have capabilities in place for visibility into their existing infrastructure. I am curious to understand whether any of those surveyed have optimized their virtual environment; meaning, have they rightsized their workloads and placed workloads in a way that maximizes available capacity.
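Rightsizing and placement are, at bottom, packing problems: fit measured workload demands onto as few hosts as possible without exceeding capacity. A deliberately simple first-fit-decreasing sketch (illustrative only; real placement engines also weigh memory, I/O, affinity, and policy):

```python
def place(vms, host_capacity):
    """Greedy first-fit-decreasing placement.

    vms is a list of CPU demands (arbitrary units); returns a list of
    hosts, each a list of the demands placed on it, with no host's
    total exceeding host_capacity.
    """
    hosts = []
    for demand in sorted(vms, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)  # fits on an existing host
                break
        else:
            hosts.append([demand])   # open a new host
    return hosts
```

With made-up demands [3, 5, 2, 7, 4, 1] and a host capacity of 8, this packs the workloads onto three hosts instead of six, which is the kind of consolidation the "maximize available capacity" question is really about.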
As part of the transparent development initiative, IBM SmartCloud Provisioning (formerly known as IBM Service Agility Accelerator for Cloud) launches a series of daily demos, starting from November 7th. Every session will take about one hour.
This way you can get an almost real-time look at what is happening in IBM SmartCloud Provisioning development, and learn about new and enhanced capabilities.
If you are interested in joining the sessions, here is the schedule in Central European Time (CET):
- Monday at 4:00 PM
- Tuesday at 11:00 AM
- Wednesday at 4:00 PM
- Thursday at 5:00 PM
- Friday at 11:00 AM
The sessions will be focused on image management.
If you would like to join, using your web browser, connect to
No password is required
Service Management Extensions for Hybrid Cloud is now available!
The IBM hybrid cloud offering is now enhanced with Service Management Extensions from Tivoli to monitor and secure the management of resources on both public and private clouds.
Service Management Extensions for Hybrid Cloud extend the
capabilities of Tivoli
service management and delivery solutions, including IBM Tivoli Service
Automation Manager, which enables users to request, deploy, monitor and manage
cloud computing services to create a more modern and dynamic data center.
IBM Tivoli Monitoring Software helps optimize IT infrastructure
performance and availability. The extensions can help increase control over
owned resources, better manage costs and data relocation processes, and help
ensure the security of critical data and other assets.
Leveraging the Service Management Extensions with Tivoli Service Automation Manager and WebSphere Cast Iron Integration software, the Hybrid Cloud solution provides key capabilities in three areas:
- Control and management: Define policies, monitoring, and performance rules for the public cloud in the same way as for on-premise resources. As a result, organizations can more easily control costs, IT capacity, and regulatory concerns.
- Data integration: Monitor, provision, and integrate to support “cloud bursting”, the dynamic relocation of workloads from private environments to public clouds during peak times.
- Security: Enable better control of users’ access by synching the user directories of on-premise and cloud applications. The automated synchronization means users can only gain entry to the information they are authorized to access.
The Service Management Extensions are available free of charge via the IBM Integrated Service Management Library.
Service Management Extensions for Hybrid Cloud
Ok, I admit, I was among the early adopters of the late nineties to get hooked on VMWare. In fact, as an open source advocate I remember playing with FreeMWare, qemu, bochs, OpenVZ, and several other x86 virtualization technologies. Likewise, I was among the first to start using Amazon's Elastic Compute Cloud (EC2). I've been hooked on x86 commodity hardware virtualization for a long time, and I thank VMWare and Ed Bugnion in particular for that. But why choose VMWare now?
Ten years ago, when x86 CPUs made it hard to virtualize efficiently, VMWare was great. After 2003, if you were mostly interested in Linux (king of the cloud), Xen was an excellent open source alternative for virtualizing x86 commodity servers. In 2006 Amazon launched their EC2 service, which would become the de facto cloud standard. EC2 is built on Xen and is probably the single biggest x86 virtualization environment in the world. Several hundred thousand of my closest friends have found EC2 to be a fantastic compute platform that goes beyond server virtualization, all without a trace of VMWare. So why choose VMWare now?
Today, modern CPUs include specific support for virtualization, making it easier to deliver efficient virtualization without Xen's paravirt trick or VMWare's innovative code patching. Current Linux kernels include support for KVM, and I believe upstream kernels will again support Xen natively. I remember when RedHat bought Qumranet, developer of KVM, SPICE, and SolidICE (a desktop virtualization technology), in 2008. Back then KVM didn't compare to VMWare; it certainly was not ready for prime time. Three years later, KVM has matured extremely well, and I think it really is ready for commodity OS virtualization. In my cloud development efforts I've run hundreds of thousands of VMs on Xen and KVM during the past two and a half years. While I really respect Xen, I've come to like and appreciate KVM on modern CPUs, since it's just so simple and easy to use. Today there are so many choices for x86 virtualization, from Xen, KVM, and VirtualBox to Hyper-V, which Microsoft is practically giving away just to keep Windows relevant in the datacenter. So why choose VMWare now?
Is low-end disruption a threat for VMWare? Linux and Apache are certainly well established in the datacenter, preventing Microsoft's dominance over the desktop from spilling into the datacenter. Ten years ago, when Windows had ninety-something percent market share of desktop computers, I myself considered Microsoft an untouchable giant. Today, however, I think they're doomed, because Apple is cooler and all the kids have 'em, along with iPhones and iPads. By analogy, VMWare should be very concerned. IMHO, they can and will lose their dominance, and I think they'll do so by way of the classic Innovator's Dilemma.
VMWare continues to cater to their traditional high end customers.
Meanwhile, nearly three quarters of a million developers are using
Amazon's cloud as their platform for new software applications and
services. And the best part is Amazon's cloud doesn't even need or use
VMWare. In fact, neither does Google's AppEngine or Microsoft's Azure.
Sense a pattern? If you believe, as I do, that we're on the cusp of a
new platform war to deliver the next generation of applications and
services, then the key to success is the application development
community. VMWare may have operations teams sold, but developers love
the cloud. Interestingly, they may not even have the ops guys sold
after all. Here's a forum thread titled "VMWare, a falling giant": According to Ars Technica, "A new survey seems to show that VMware's iron grip on the enterprise virtualization market is loosening, with 38 percent of businesses planning to switch vendors within the next year due to licensing models and the robustness of competing hypervisors." What do IT-savvy Slashdotters have to say about moving away from one of the more stable and feature-rich VM architectures? The survey found that VMware is the primary hypervisor for server virtualization in 67.6 percent of shops, followed by Microsoft's Hyper-V with 16.4 percent and Citrix with 14.4 percent. Wow, this doesn't even compare to Microsoft's former dominance, for which I recall seeing numbers as high as 98% market share!
So why choose VMWare now? Maybe the question should be, "Have you tried an open source hypervisor lately?" Or better yet, "Have you tried a public cloud yet?"
Frankly, I don't even like using hypervisors directly anymore, as I find clouds much more powerful and easier to use. Why don't you give ISAAC a try? You can see what a real cloud is like while also trying out open source hypervisors.
There's still time to sign up for the IBM webcast: Managing the Cloud – Best practices for cloud service management
Organizations today are looking to cloud computing to deliver cost savings and faster service delivery. However, most organizations are still struggling to put in place the basic IT infrastructure necessary to take the leap to a robust cloud. This session will explain how service management can help provide the essentials to maintain service levels in the cloud, along with best practices based on IBM's work with customers. This information will provide the foundation for building and managing a cloud to meet your business objectives and transform IT.