If you would like to try out the IBM SmartCloud Provisioning 1.2 core functionality but are worried that you do not have the time to install it or do not have enough hardware, you can download a special demo package from the Integrated Service Management Library.
It installs on a single physical box.
The system must use x86_64 processors that support virtualization.
In addition, you need at least 3 GB of memory and 30 GB of disk space.
The required operating system for this installation is 64-bit CentOS 6.0 Linux.
In addition to that the following packages are required:
The installer then configures the physical box as a compute node, storage node, PXE server, and DHCP server, and creates a virtual image (the hypervisor is KVM) that acts as a second storage node, web console, web admin console, web service, REST server, HBase, and ZooKeeper.
Further installation details are available in the readme downloadable with the package.
I really liked the post Rapid deployments with IBM Smart Cloud Provisioning, which explains how simple and fast it is to deploy instances using SmartCloud Provisioning.
But once the instances are deployed the next questions are:
- How can I "easily" manage them from a patch management point of view?
- How can I "ensure" that they satisfy my corporate and security standards?
The solution is to integrate SmartCloud Provisioning with Tivoli Endpoint Manager (TEM) so that all the running instances are connected to the TEM Server and managed according to the configured security and corporate standards.
This can be achieved by exploiting the integration between SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in SmartCloud Provisioning version 1.2, and performing the following steps:
- Using ICCT, create a new bundle, the "TEM Agent bundle", that contains:
- the TEM Agent installation package
- the TEM masthead file. This is the digitally signed file that contains the information about where the TEM server is located.
- a script that installs the TEM Agent and copies the TEM masthead file into the proper directory (for example, /etc/opt/BESClient on Linux)
- Extend an OS base image available in SmartCloud Provisioning by adding the "TEM Agent bundle".
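As a sketch, such an install script could look like the following Python fragment. The package name, masthead filename, and rpm-based install below are illustrative assumptions, not the actual TEM deliverables; check the TEM documentation for the exact file names.

```python
import shutil
import subprocess
from pathlib import Path

def install_tem_agent(package: Path, masthead: Path,
                      client_dir: Path = Path("/etc/opt/BESClient"),
                      run=subprocess.run) -> list:
    """Copy the TEM masthead into the agent directory, then install the agent.

    The injectable `run` callable makes the sketch testable without
    actually invoking rpm.
    """
    client_dir.mkdir(parents=True, exist_ok=True)
    # Place the masthead where the agent expects to find it on startup
    shutil.copy(masthead, client_dir / masthead.name)
    cmd = ["rpm", "-ivh", str(package)]
    run(cmd, check=True)
    return cmd
```

Any equivalent shell script bundled through ICCT would do the same two things: drop the masthead in place, then launch the agent installer.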
In this way, a new image will be available in SmartCloud Provisioning with the TEM agent installed and configured to connect to the TEM Server.
After that, when the extended image is launched, the TEM agent automatically starts and connects to the TEM Server without requiring any user action.
Then, from the TEM console, you will be able to see and manage it, performing actions and/or downloading Fixlets.
This is just the basic integration; more advanced scenarios can be implemented, for example exploiting the OVF parameters (as described in the topic Customizing virtual images with IBM SmartCloud Provisioning) for configuring and grouping the TEM Agents. They will be described in my next blogs!
For further information on IBM SmartCloud Provisioning and the Image Construction and Composition Tool, see the IBM SmartCloud Provisioning Infocenter.
You can get some expert exposure to Tivoli Usage and Accounting Manager and our Cloud Cost Management solutions via our demos and workshops at Pulse 2012. Please come along to see us if you are at Pulse this year.
The workshops are in the MGM Studio Ballrooms.
- IBM Service Delivery Manager
- IBM Tivoli Usage and Accounting Manager
- Cloud Cost Management.
In this lab exercise, you learn how to use IBM Tivoli Usage and Accounting Manager (TUAM) for metering and accounting for projects and servers in an IBM Service Delivery Manager or IBM Tivoli Service Automation Manager based Self Service Virtual Server Management Environment. You analyze accounting showback reports in a demonstration setup. Optionally, you can create your own cloud financial model with the Financial Modeler. You can enter budgets, select rate codes, distribute costs, calculate rates, and run comparison reports.
- IBM Tivoli Usage and Accounting Manager
- Chargeback for IT Usage and Accounting
In this lab exercise, you learn how to use IBM® Tivoli® Usage and Accounting Manager (TUAM) to collect resource usage data, and to create an invoice and usage showback reports. Optionally, you can create your own financial model with TUAM
Financial Modeler. You can enter budgets, select rate codes, distribute
costs, calculate rates, and run comparison reports.
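As a toy illustration of the rate-times-usage idea behind such showback reports (the resource names and rates below are invented, not TUAM rate codes):

```python
def showback(usage: dict, rates: dict) -> dict:
    """Multiply metered usage (e.g. CPU-hours, GB-months) by its rate.

    Resources without a configured rate fall back to zero cost rather
    than failing the whole report.
    """
    return {res: qty * rates.get(res, 0.0) for res, qty in usage.items()}

# Hypothetical usage data and rates for one project
invoice = showback({"cpu_hours": 120, "gb_months": 50},
                   {"cpu_hours": 0.05, "gb_months": 0.10})
```

A real TUAM financial model layers budgets, rate tables, and cost distribution on top of this basic multiplication, but the showback principle is the same.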
The labs will be hosted by our TUAM technical enablement specialist Joachim Schmalzried.
Please look out for us as well at the ISDM/TSAM Ped.
To learn more about the Cloud track at Pulse, see this site: http://www-01.ibm.com/software/tivoli/pulse/cloud/
Are your queries taking an unusually long time to run for the amount of data that you have in your Tivoli Data Warehouse? Do your Cognos-based reports take forever to render the screen with some results? There are several tweaks that can be made to the data model and the reports to get optimum performance while running your reports. Some of the best practices to follow while developing your custom reports and data models are discussed here.
Editing the data model using Framework Manager to optimize performance of queries:
Cloud systems have made huge improvements in terms of tracking and performance. In the "Rapid deployments with IBM Smart Cloud Provisioning" blog, we showed that virtual machines or appliances can be started and configured in a matter of seconds. It has never been so easy to create a virtual machine (VM), install software, and configure middleware. However, with great power comes great responsibility… it is now easy to create a VM, but what is its lifecycle? Will it be destroyed after being used, is the starting image deprecated, or is there a better starting image given the needed configuration and software install requirements?
IBM SmartCloud Provisioning provides
a component called IBM Virtual Image Library (also known as IVIL) to solve
common issues that arise in large-scale virtualized environments:
- Tracking: Where are my images? How old are they? How are they related?
- Compliance and security: What is in the images? Are they secure? What software is installed?
- Redundancy: Are there image redundancies? Is there any difference between two images?
The list goes on.
IVIL can be integrated simply into your virtualization infrastructure; the only requirement to start using IVIL is the credentials needed to contact the virtualization infrastructure. No changes to your current virtualization environment are required. After credentials are provided, IVIL can automatically determine the provenance, state, and content of each virtual image or virtual machine in the virtualization environment. After the environment is registered, you will have a clear picture of your various images, their content, history, and similarity with one another. More importantly, as soon as IVIL is used in the infrastructure, it can be used to move images from one hypervisor vendor to another and keep track of these migrations. To summarize, IVIL not only keeps track of the changes to an image on one hypervisor, but continues to do so when images live in a heterogeneous environment.
A common solution to tracking the contents and versioning of images is the use of a naming convention; for example, a name such as RHEL_6.1_WebSphere7.1_v2.1 implies the image is Red Hat Linux 6.1 with WebSphere 7.1 installed, and that this is version 2.1 of this image. It is feasible to use this approach with a small number of images, but it becomes cumbersome and confusing with anything but small examples. The basic information that such a name typically attempts to convey includes:
- What is the OS and OS version?
- What applications are installed, and what are their versions?
- Are the latest patches and updates installed?
- How does this image relate to other versions of the same or similar images?
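To see why a naming convention breaks down, consider a naive parser for names such as RHEL_6.1_WebSphere7.1_v2.1. The pattern below is my own sketch, not anything IVIL uses; it fails on anything richer than one OS and one application, which is exactly the scaling problem.

```python
import re

# One OS, one application, one image version -- a second application,
# a patch tag, or a missing field makes the name unparseable.
NAME = re.compile(r"(?P<os>[A-Za-z]+)_(?P<os_ver>[\d.]+)_"
                  r"(?P<app>[A-Za-z]+)(?P<app_ver>[\d.]+)_v(?P<img_ver>[\d.]+)$")

def parse(name: str):
    """Return the fields encoded in an image name, or None if it fits no scheme."""
    m = NAME.match(name)
    return m.groupdict() if m else None
```

The moment an image carries two applications, the convention (and any parser for it) collapses, which motivates the structured versioning and provenance model described next.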
Using an image naming convention
can work in some cases and provide some of the needed information but it does
not scale beyond a small number of simple images. To solve this, IVIL provides versioning and
provenance control to understand where an image comes from:
What is provenance? Simply put, provenance tracks the history of the image as it has evolved over time in the virtual environment. It tracks how the bits that make up the image came to be – through IVIL checkout operations, image clone operations, image copy operations, and so on. It is used to understand the lineage of an image from the perspective of the virtual system, which might or might not match how the user of IVIL views the image.
For example, let’s assume that you
have an image called “A”. If you decide to start this image on multiple
instances of IBM SmartCloud Provisioning or if you decide to clone this image
possibly multiple times, then IVIL will keep track of the relation between all
the created images and instances. At any time, if a security flaw is found on
A, then you can infer that the associated images and instances are likely affected
also. IVIL provides this functionality not only for a single virtual
environment, but across heterogeneous virtual environments also.
What is versioning? Versioning is the logical user-defined
lineage of an image or virtual appliance; it is the way a user would think of
versioning his or her image functionality, for example this is version 2 of my
AccountsPayableService virtual image.
When an image is available with a particular application version, the OS and libraries behind it are often not important; only the application is. Is it important to know its template? Not necessarily; often only the information about the OS is relevant. However, it is good to know the application version, and whether a newer version is available for this image or a new image has been released with the latest security patches. This is the versioning system in IVIL; it helps to understand whether there are other versions of the application in the infrastructure, and whether some applications contain a patch or not.
To summarize, provenance is
oriented to infrastructure administration whereas versioning is more oriented
towards applications and workloads.
For example, let’s assume that we want to provide version 1.0 of software S as an image, A. By default, users can decide to use software S by starting any number of instances of image A. At a certain point, version 1.0 is deprecated and we must upgrade software S to version 1.1. Unfortunately, the OS distribution must be upgraded too. A solution is to reinstall the OS from scratch and install S version 1.1 on it; this new image will be called B. These images do not have any common lineage from a provenance perspective; however, the content has a logical lineage to the user. Image A is the parent of image B from a versioning perspective.
It is important to understand that an image can have only one provenance parent but can have multiple version parents. The second claim makes sense because an image may have multiple applications installed, and thus each one may be associated with a logical version lineage of its own.
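The distinction can be sketched as a tiny data model (the names are mine, not IVIL's API): each image records at most one provenance parent but any number of version parents.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Image:
    name: str
    # How the bits came to be: clone/copy/checkout -- at most one parent
    provenance_parent: Optional["Image"] = None
    # Logical, user-defined lineage -- one entry per application history
    version_parents: List["Image"] = field(default_factory=list)

# Image A hosts S 1.0; B is a from-scratch reinstall carrying S 1.1.
a = Image("A")
b = Image("B", provenance_parent=None, version_parents=[a])
```

B shares no bits with A (no provenance link), yet is logically its successor (a version link) — exactly the A-to-B scenario above.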
This concludes the introduction of the Virtual Image Library component in IBM SmartCloud Provisioning. Next time, I will introduce the concept of similarity between images and the power that it provides in terms of debugging, infrastructure consolidation, and licensing cost.
ITM for Virtual Environments provides out-of-the-box pages in TIP (Tivoli Integrated Portal) that display well on a standard screen size of 1280 by 1024 pixels. If you use a higher screen resolution, you can customize the TIP pages to better suit your screen. Additionally, you can rearrange the components on a TIP page if you prefer that certain widgets are shown on a certain portion of the page. Let's examine both of these customizations.
Let's go through an example where we use a screen resolution of 1600 by 900 pixels, to see how we can change an ITM for Virtual Environments page to use the full screen real estate. Initially, when you pull up the ITMfVE VMware Cluster Dashboard, it looks like the following:
When we collapse the navigation tree in the view above, it then looks like:
For our particular screen resolution (1600 x 900), this is not optimal. However, we can easily modify the screen above to look like:
How do I find out if my warehouse has all the tables/views needed to run a report or create a custom report?
A new set of reports has been added under the "Prerequisites Checking" folder in ITMfVE 7.1 for the VMware VI agent. These reports provide a prerequisite scanner that checks whether all the tables and views needed to run the predefined reports, and those needed to support custom reporting, are available in the Tivoli Data Warehouse. It also checks the VMware VI agent version and lets you know whether the available version is up to date. The reports also point you to the appropriate documentation, which can be helpful for enabling historical collection, summarization, and pruning; creating the IBM_TRAM schema, the Time dimension, and other shared dimensions such as WEEKDAY_LOOKUP, MONTH_LOOKUP, and ComputerSystem; and populating the Time dimension. In addition, the report provides the list of attribute groups for the VMware VI agent to guide users while enabling historical collection.
Please note that even with all the prerequisite tables and views in the Tivoli Data Warehouse, you may still not be able to run the reports.
SmartCloud Provisioning is an infrastructure-as-a-service cloud able to work with different types of hypervisors. You can easily install and configure new compute nodes to run your virtual images on KVM, VMware, and Xen.
This is a very interesting sentence, and it seems to be very useful. The first time I read it, I thought: "Do I need to have 3 different images? Can I have the same image running on any hypervisor?" The answer is yes to both questions. Depending on how you want to run your image, you might need different images for different hypervisors, or just a single image regardless of the underlying hypervisor.
Before going deeper into how IBM SmartCloud Provisioning deploys virtual images, I would like to discuss the different hypervisors. Each of them has its own peculiarities, allowing you to leverage different features implemented in different ways. This leads us to deal with different hypervisor limitations. The following are the most common:
- VMware and Xen are able to manage SCSI devices, but KVM is not
- KVM and Xen can use virtio drivers, but VMware cannot
- VMware uses a proprietary agent inside the guest OS (VMware Tools) which does not work with Xen or KVM
- VMware uses the vmdk file format, which is a proprietary format
Any of these differences can prevent an image from working on any hypervisor. It is clear that if you do not pay attention to how you create your base images, you might need different images for the different hypervisors. So the next step is understanding how to create a "magic image" able to run everywhere.
The first point is to figure out the similarities between the different hypervisors:
- Image format: every hypervisor type supports the raw format.
- Device type: every hypervisor type supports IDE devices.
- Configuration: the hypervisors do not require specific configurations, but the manager could.
With IBM SmartCloud Provisioning you will not have any issue with any of the previous points. In fact, before creating a base image you just need to follow a few rules to ensure portability.
IBM SmartCloud Provisioning requires a specific OS configuration regardless of the underlying hypervisor. You can find all the needed information on how to build your image at the Infocenter site:
It is important to use the raw format for the initial image. Here we have an interesting problem: how to create a VMware image in raw format. The answer is very simple: we are creating a fully portable image, so you can use KVM to build the master image and then run it anywhere.
At this point we have our raw image, fulfilling all requirements from the hypervisor manager. What is the next step? You need to register it in IBM SmartCloud Provisioning. To do that you can use either the administrative UI or the CLI. Regardless of the user interface you are using, just remember to use the following settings during registration:
- Do not enable virtio
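The portability rules discussed so far can be summarized as a small settings map. The key names below are illustrative assumptions, not the actual registration parameters of the UI or CLI:

```python
# Settings for a portable ("magic") image registration -- key names are
# invented for illustration; the real UI/CLI fields may differ.
PORTABLE_IMAGE_SETTINGS = {
    "image_format": "raw",   # every hypervisor type supports raw
    "disk_bus": "ide",       # IDE works everywhere; SCSI would exclude KVM
    "enable_virtio": False,  # virtio is KVM/Xen only, so leave it off
}

def is_portable(settings: dict) -> bool:
    """Check a registration against the three portability rules above."""
    return (settings.get("image_format") == "raw"
            and settings.get("disk_bus") == "ide"
            and not settings.get("enable_virtio", False))
```

Any image registered with these choices avoids the hypervisor-specific features listed earlier.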
We finally have a fully portable image. IBM SmartCloud Provisioning will decide by itself the most appropriate compute node to run your "magic image".
Even though the described process is very easy, there could be some cases where you cannot follow it, for instance when you already have images in a proprietary format and you need to use them. In this case, the Virtual Image Library can help you. It is a very useful IBM SmartCloud Provisioning component able to manage images by federating different hypervisors. It has the capability to check images into its own repository so that you can then check them out to a different federated virtualization environment, and during this process it will convert the image format for you.
With it you will be able, for example, to check in a VMware image and then check the same image out to IBM SmartCloud Provisioning, resulting in a raw-format image. The next interesting question is whether it will run or not. The answer strongly depends on the compute node type and the image configuration. Based on what was previously discussed, you should take care of:
- Configuration: as I said, IBM SmartCloud Provisioning requires images to have some OS configuration. To have a final working image you must ensure that the initial VMware image has all the required configuration before importing it into the Virtual Image Library. Otherwise it will not be able to start (for example, if the image does not have DHCP configured, it will never get a valid IP).
- Device type: if you only have KVM compute nodes within your IBM SmartCloud Provisioning, an image using a SCSI device will not be able to run at all. To have it running you must have at least one VMware compute node. If the initial image uses an IDE device, then you will not have any trouble.
In addition to image format conversion, the Virtual Image Library is also able to modify Windows device drivers. In the process of moving an image from VMware to the Virtual Image Library and then to IBM SmartCloud Provisioning, the application changes the Windows configuration, allowing it to run on any hypervisor.
More information about the previous topics can be found at the IBM Infocenter.
Thank you to everyone who attended the Global Tivoli User Community - Simplify Cloud Management with IBM SmartCloud Monitoring. If you did not have an opportunity to take advantage of this free training live - you can still take advantage by viewing the replay. Slides, demonstrations and audio are captured in the recording.
Registered attendees may view the Playback with this link:
To register to view this Playback:
SmartCloud Provisioning is designed to minimize the use of a centralized “command and control” approach, in favor of scale out management, where endpoints can participate in management activities and do not depend on a single configuration management database.
This allows SmartCloud Provisioning to handle multiple provisioning tasks in parallel, across an unlimited number of servers.
Cloud users can request deployments of virtual machines and have access to the provisioned systems in very few seconds, thanks to the parallel and distributed processing that happens transparently and under the covers.
Let’s drill down into the details about this distributed management approach.
SmartCloud Provisioning internally uses a peer-to-peer (P2P) messaging infrastructure to pass provisioning and management messages between agents, which contributes to the decentralized control.
Agents are installed on the compute nodes (i.e. the hypervisors) as well as on the storage nodes, where images and volumes reside.
The P2P connections between agents not only allow self-monitoring of their health in order to implement a low-touch management infrastructure, but also allow orchestrating the communications to achieve an effective load distribution and decentralized management of the requests performed by cloud users.
The P2P communication overlay is backed by a distributed lock service, which is based on ZooKeeper.
ZooKeeper is a distributed, open-source coordination service for distributed applications, which exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming. It is designed to be easy to program, and uses a data model styled after the familiar directory tree structure of file systems.
Like the distributed processes it coordinates, ZooKeeper itself is intended to be replicated over a set of servers that must all know about each other. They maintain an in-memory image of state, along with transaction logs and snapshots in a persistent store.
SmartCloud Provisioning agents connect to a single ZooKeeper server. Each agent maintains a TCP connection with the ZooKeeper server, through which it sends requests, gets responses, gets watch events, and sends heartbeats. If the TCP connection to the server breaks, the agent will connect to a different server.
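The failover behavior can be sketched in a few lines of Python (a simulation, not the actual agent code): when the connection to one server of the ensemble breaks, the agent simply moves on to the next one.

```python
def next_server(ensemble: list, failed: str) -> str:
    """Pick the next ZooKeeper server, round-robin, after a broken connection."""
    i = ensemble.index(failed)
    return ensemble[(i + 1) % len(ensemble)]

# A hypothetical three-server ZooKeeper ensemble
servers = ["zk1:2181", "zk2:2181", "zk3:2181"]
```

Because every server holds a replica of the coordination state, reconnecting to any other member of the ensemble lets the agent resume its session work.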
When a deployment request is received by SmartCloud Provisioning, the request is processed by the Web Services layer, passed to the management infrastructure, and managed by the agents and the ZooKeeper services.
The following steps describe the internal communications in more detail, as depicted in figure 1 below.
This processing happens in a way that is transparent to the end user, who just sees the deployment request being served in a few seconds.
- The Web Services layer takes charge of a deployment request (e.g. deploy 50 "Large" instances of image "LOB123-RHEL 6.0"), and triggers a first interaction with the ZooKeeper server to ask which agent in the compute nodes layer can take this request into account.
- The ZooKeeper server selects one of the available leaders in the compute nodes layer and returns this information back to the web service layer. The role of the selected leader will be to initiate an internal hand-shaking among the compute nodes agents to process the incoming request.
- The Web Service layer receives the information about which agent to contact, and opens a connection to that agent, passing the deployment request details.
- The selected agent takes care of the request and starts a “discussion” phase with all the other leaders (one for each rack) in order to distribute the load of the incoming request among all the agents that could provide resources to fulfill it. This happens leveraging the P2P connection between agents.
- Inside each rack, the leader triggers a parallel P2P interaction with all the agents on all the compute nodes included in that rack, to understand which agent can serve a portion of the incoming request. Each agent having enough free resources to serve “Large” instances answers the request coming from its leader, so that, at the end of the hand-shaking process each leader knows which portion of the incoming request can be processed by which agent.
- At this point, each involved agent knows which part of the incoming request it is supposed to process. To start the real deployment step, the agent asks the ZooKeeper server where to find the “LOB123-RHEL 6.0” image to be deployed, according to the incoming request. The ZooKeeper again answers the incoming requests by providing one of the available agent leaders on the storage nodes layer.
- When an agent receives back the information about which storage node to connect to, it opens a P2P connection with the related agent and asks for the image it needs to fulfill the deployment request.
- The storage node agent leader starts in turn a P2P communication with the other leaders asking for the selected image. Each leader inside its managed rack triggers other P2P connections to ask each managed agent if it has a copy of the requested image.
- The storage leader initiating the request collects back all the details about agents having a copy of the requested image and selects at least two of them (default redundancy required by SmartCloud Provisioning), returning the information to the calling compute-node agent. The compute-node agent at this point can access the image and starts the deployment of VMs, according to its capacity and to the amount of work it offered to serve.
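The capacity hand-shake among leaders and agents described above can be simulated with a few lines of Python. This is a didactic sketch of the load-distribution idea, not the actual agent protocol: each rack leader collects the free capacity of its agents, and portions of the request are assigned until it is covered.

```python
def distribute(request: int, racks: dict) -> dict:
    """Assign portions of a deployment request to agents, rack by rack.

    `racks` maps leader -> {agent: free_instance_slots}; the returned
    plan maps agent -> number of instances it will deploy.
    """
    plan, remaining = {}, request
    for leader, agents in racks.items():          # leader-to-leader discussion
        for agent, free in agents.items():        # leader-to-agent P2P round
            share = min(free, remaining)
            if share:
                plan[agent] = share
                remaining -= share
    if remaining:
        raise RuntimeError("not enough free capacity for the request")
    return plan

# 50 "Large" instances spread across two racks of compute nodes
plan = distribute(50, {"leader1": {"cn1": 20, "cn2": 10},
                       "leader2": {"cn3": 30, "cn4": 5}})
```

Each agent then fetches the image from a storage node and deploys only its own share, which is what makes the overall deployment parallel.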
As I said, this processing happens under the covers in a very fast way and the user does not have to worry about any of the steps above.
This allows high levels of parallelism and decentralized management, as well as scale-out capabilities that can be extended simply by increasing the number of servers.
If you're interested in trying the SmartCloud Provisioning distributed management capabilities, you can download a trial version from the following link: