In this new post I would like to describe how you can script the building of virtual images using the Image Construction and Composition Tool provided by IBM SmartCloud Provisioning.
The upcoming release of IBM SmartCloud Provisioning 2.1 embeds, among other things, a new version of the Image Construction and Composition Tool. The Image Construction and Composition Tool allows you to build virtual images that are self-descriptive, customizable and manageable; in the end it produces Open Virtualization Appliance (OVA) images that can be deployed into a cloud environment.
One of the new features of this tool is the capability of performing image management operations directly through a command-line interface. This capability enables a set of new use cases through a scripting environment. The command-line interface of the Image Construction and Composition Tool is based on Jython (the Java-based implementation of Python), so in addition to issuing commands specific to the Image Construction and Composition Tool, you can also issue Python commands at the command prompt.
Through this interface, you can manage the Image Construction and Composition Tool remotely: you can download the command-line interface to any machine and then point it to the system where the tool is running. It communicates with the server using the HTTPS protocol, so all communications are encrypted. The command-line interface can be installed on both Linux and Windows operating systems and can run in both interactive and batch modes.
Each entity that can be managed in the Image Construction and Composition Tool is modeled by a resource object on the command-line interface that exposes a set of methods for performing the related management actions. The following objects are available: software bundle references (for defining software configurations to be deployed on a virtual machine), cloud provider references (for defining the hypervisors used by the Image Construction and Composition Tool to build and capture images), image references (for handling virtual machine images to be used for import, extend, capture and export operations) and user references (for administering the users of the Image Construction and Composition Tool).
Once you have downloaded and configured the command-line interface, to start a new session in interactive mode you can issue the following command from a shell prompt:
-h <icct server> -u username -p password
Once you get the interactive shell, you can start issuing commands. Here are a few examples.
To get a list of all the images for a cloud provider, you can use a command like the following:
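For instance, a session could look something like the sketch below. Note that the cloudproviders collection name and the images attribute on a provider object are assumptions for illustration, not documented API; only the icct session object (used later in this post) comes from the original examples.

# Illustrative sketch only: 'icct.cloudproviders' and 'provider.images'
# are assumed names; check the object model of your CLI session.
provider = icct.cloudproviders[0]
for image in provider.images:
    print image.name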
To import a software bundle and wait for the import to complete, you can use a set of commands like the following:
if importingBundle.currentState == 'import_failed':
    print 'Bundle import failed!'
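Putting it together, a complete flow might look like the following sketch. Only currentState and the 'import_failed' state appear in the snippet above; the importBundle call, the 'import_complete' state and the bundle path are assumptions used for illustration.

import time

# Sketch of a bundle import with polling. 'icct.bundles.importBundle',
# 'import_complete' and the bundle path are assumed for illustration;
# 'currentState' and 'import_failed' come from the example above.
importingBundle = icct.bundles.importBundle('/tmp/myBundle.zip')
while importingBundle.currentState not in ('import_complete', 'import_failed'):
    time.sleep(10)  # wait before checking the import state again
if importingBundle.currentState == 'import_failed':
    print 'Bundle import failed!'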
To get a list of all the images, you can use a command like the following:
allImages = icct.images
You can also use the Image Construction and Composition Tool command-line interface in batch mode, by creating your own script and then launching it. For example, to run a script called myScript.py you can issue the following command:
-h <icct server> -u username -p password -f myScript.py arg1
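As an illustration, a minimal myScript.py could simply print the images known to the tool, using the icct.images collection shown earlier. The name attribute on an image and the way arguments reach the script are assumptions for illustration, not documented behavior.

# myScript.py -- minimal batch-mode sketch.
# 'icct.images' is shown earlier in this post; the 'name' attribute and
# the argument-passing mechanism are assumptions for illustration.
import sys

print 'Script arguments: %s' % sys.argv[1:]
for image in icct.images:
    print image.name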
Sample scripts come directly with the Image Construction and Composition Tool; they are shipped in a samples directory of the installation and cover some of the tool's basic flows, such as creating a new cloud provider configuration, importing an image, and extending an image. You can use them as a starting point for creating your own workflows.
That's all for now. I have just provided a quick introduction to the capabilities of the Image Construction and Composition Tool command-line interface. If you are interested in discovering more about the Image Construction and Composition Tool, its command-line interface and SCP 2.1, you can have a look at what is included in IBM SmartCloud Provisioning 2.1.
In this new blog post I would like to describe a root-cause detection scenario using IBM SmartCloud Provisioning. Given the ever-increasing number of virtual machine instances and VM images in a cloud ecosystem, it is becoming more and more important to track each virtual image's contents and configuration, mainly for standardization and compliance purposes.
Another situation where tracking this content may also be useful is when there is a need to identify the "drift" between a deployed virtual machine and the virtual image that was used to create it, as in the scenario that follows.
As soon as a virtual machine gets deployed from a virtual image, its content starts to change: the owner of that virtual machine begins using it by creating new files, using its applications, installing or uninstalling software, and so on. As a result of these actions, it may happen that the system, or a specific application, no longer works correctly. At this point, one of the things that can be done to understand the cause of such malfunctions is to identify all the changes applied to the instance compared to the source virtual image, and to review them looking for the “culprit” change, in order to take appropriate actions to repair the situation. This is a typical scenario where the IBM Virtual Image Library component of IBM SmartCloud Provisioning comes to help, through its indexing and drift analysis capabilities.
As highlighted in a previous blog entry, the IBM Virtual Image Library is a tool that provides sophisticated image-management capabilities that a customer can use to tackle the difficult issues of understanding and controlling the contents of their virtual infrastructure. Let's see how this tool may help in troubleshooting the scenario we have described above.
The first step is to identify the
failing virtual machine among the ones available in the IBM Virtual
Image Library repositories. The
tool continuously indexes the configured repositories of virtual
machines and images so that its data model is always up to date with
the actual content of the virtual infrastructure.
Once the virtual machine has been identified, the next step is to retrieve the virtual image from which
it has been deployed. This is another feature provided by the tool
that keeps track of the entire tree of relationships among virtual
images and virtual machines available in the environment.
The next step, if not already done, is to run an indexing operation on the virtual machine so that its content, in terms of installed applications, OS information and file-level information, can be retrieved and brought into the tool's data model.
Once the indexing is complete, the source virtual image content and the virtual machine content can be compared. A list of differences is presented to the user, who can review them and decide which differences are the most likely reason for the problem.
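Conceptually, the drift report is a diff between two indexed manifests. The short Python sketch below is not the Virtual Image Library API, just an illustration of the idea, with each side represented as a dictionary mapping file paths to checksums.

# Illustrative sketch of drift detection between a source image and a
# deployed VM, each represented as {file_path: checksum}. This is not
# the Virtual Image Library API, just the underlying idea.
def drift(image_index, vm_index):
    added = [p for p in vm_index if p not in image_index]
    removed = [p for p in image_index if p not in vm_index]
    modified = [p for p in vm_index
                if p in image_index and vm_index[p] != image_index[p]]
    return added, removed, modified

image_index = {'/etc/app.conf': 'a1b2', '/usr/bin/tool': 'c3d4'}
vm_index = {'/etc/app.conf': 'ffff', '/tmp/suspect.sh': '9e9e'}
added, removed, modified = drift(image_index, vm_index)
print 'Added: %s' % added        # ['/tmp/suspect.sh']
print 'Removed: %s' % removed    # ['/usr/bin/tool']
print 'Modified: %s' % modified  # ['/etc/app.conf']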
For example, from this report the user may notice that a suspect application that shouldn't be there has been installed on the virtual machine, or that a configuration file used by the malfunctioning application has been modified. These hints can be used as a starting point for troubleshooting the issue and for taking repair actions.
The following movie demonstrates, by
means of an example, the capabilities described above.
What has been described here is just an example of the drift analysis capabilities of the IBM Virtual Image Library, with the intent of giving you an introduction to the advanced features of this component. If you are interested in understanding more deeply how the IBM Virtual Image Library works and in having a summary of all of its capabilities, you can take a look at the product documentation.
Please join the Tivoli User
Community for their next webcast on IBM SmartCloud Consumer Monitoring. This webcast will be held on
Tuesday, January 22, 2013 at 11:00 AM ET, USA.
IBM SmartCloud Consumer Monitoring is a new product
developed for cloud consumers and service providers. An innovative new architecture embeds
monitoring technology in library images, so newly deployed VMs are discovered
and monitored within seconds of being instantiated. “Fabric Nodes” use innovative distributed
database technology to display data for nodes and applications, or logical
groupings of them, and run as virtual machines alongside the application
VMs. New fabric nodes come online as
needed, and shut themselves down when no longer needed, ensuring optimum use of resources.
ABOUT THE SPEAKER:
Ben Stern, Executive IT Specialist, IBM Cloud & Virtualization products
For the past several years, Ben has defined Best Practices for Tivoli's SAPM portfolio. Most recently, he has taken on the Best Practices role for the Cloud and Virtualization products.
The Official Tivoli User Community is the largest online and offline
organization of Tivoli professionals in the world – home to over 160 local User
Communities and dozens of virtual/global groups from 29 countries – with more
than 26,000 members. The TUC community offers Users blogs and forums for
discussion and collaboration, access to the latest whitepapers, webinars, presentations
and research for Users, by Users and the latest information on Tivoli
products. The Tivoli User Community offers the opportunity to learn and
collaborate on the latest topics and issues that matter most. Membership
is complimentary. Join NOW!
The placement of the DBMS in Cloud solutions based on Tivoli Provisioning Manager (TPM), Tivoli Service Automation Manager (TSAM), or IBM Service Delivery Manager (ISDM) plays a significant role in overall product function and performance, and in how these relate to the evolving workload.
A typical setup approach is to install TPM/TSAM with the DBMS co-located.
This is the default setup option in the TSAM installation and TSAM-VM-image which is included in the ISDM solution.
Over time, based on increasing workload, capacity planning, or production requirements, it may be desirable to move the local database to a remote node, with the goal of achieving greater scale and exploiting additional resources.
A white paper
is available for this purpose in the Integrated Service Management Library.
The referenced paper has recently been updated to version 2.4, and describes how to relocate the DBMS in existing TPM/TSAM/ISDM solutions.
Welcome to the Cloud/Virtualization Management blog! This blog is one of several within the Service Management Connect community, and its purpose is to provide readers with ideas and perspectives about the Cloud/Virtualization Management solution directly from the technical experts. Follow this blog, and you can get tips, tricks, and perspectives on several Cloud/Virtualization Management topics, including:
- Technical tips and tricks
If you have specific topics for which you would like to see blog entries, don't hesitate to provide feedback.
I've recorded a couple of demo movies to show the capabilities of the new IBM Virtual Image Library v1.1 that comes with the SmartCloud Provisioning v1.2 product. You can use the links below to go directly to the movies:
If you would like to try out the IBM SmartCloud Provisioning 1.2 core functionalities but are worried that you do not have time to spend installing it, or that you do not have enough hardware, you can download a special demo package from the Integrated Service Management Library.
It gets installed in a single physical box.
The system must use x86_64 processors that support virtualization.
In addition, you need at least 3 GB of memory and 30 GB of disk space.
The required operating system for this installation is Linux CentOS 6.0, 64-bit.
In addition, the following packages are required:
The installer then configures the physical box as a compute node, storage node, PXE server and DHCP server; it also creates a virtual image (the hypervisor is KVM) that acts as a second storage node, web console, administrative web console, web service, REST server, HBase and ZooKeeper.
Further installation details are available in the readme downloadable with the package.
ITM for Virtual Environments provides out-of-the-box pages in TIP (Tivoli Integrated Portal) that display well on a standard screen size of 1280 by 1024 pixels. If you use a higher screen resolution, you can customize the TIP pages to better suit your screen. Additionally, you can rearrange the components on a TIP page if you prefer that certain widgets are shown on a certain portion of the page. Let's examine both of these customizations.
Let's go through an example where we use a screen resolution of 1600 by 900 pixels, to see how we can change an ITM for Virtual Environments page to use the full screen real estate. Initially, when you pull up the ITMfVE VMware Cluster Dashboard, it looks like the following:
When we collapse the navigation tree in the view above, it then looks like:
For our particular screen resolution (1600 x 900), this is not optimal. However, we can easily modify the screen above to look like:
Today, with multiple Cloud solutions, there have been impressive improvements in the deployment of enterprise applications on the Cloud. Tools are available to design these applications in a convenient manner: you can drag and drop components, pick them, and replicate them in whatever numbers are needed. Although deployment tools and designer tools work perfectly individually, a big gap still exists because they work in silos. The traditional way of designing application topologies does not fit well with how applications are actually deployed in the Cloud. Even though the deployment of enterprise applications is automated, the step of taking the topologies/patterns produced by designer tools into a solution that can deploy them automatically remains a manual process. This is where the real problem lies, and it prevents users from taking full advantage of the technology.
I am going to talk about a tool, Deployment Planning and Automation for Cloud, which helps bridge this gap and provides a simple approach to application design, construction and deployment, minimizing the time needed to build and deploy complex applications.
Deployment Planning and Automation (DP&A) Cloud Accelerator is an integrated solution from the Rational and Tivoli groups of products.
This integrated solution helps you manage your environment for greater resource sharing by allowing you to plan application deployment patterns. It not only makes the design of application deployment patterns easy, but also automates their deployment into production environments. Automated deployment includes deploying virtual servers and installing and configuring enterprise middleware and applications in a single automation workflow. This workflow is generated from a visual model of your deployment environment. You can also govern and share application artifacts, standard templates, and deployment plans between development and operations teams, and trace development artifacts to deployed instances to support change management.
The figure below shows an end-to-end automation of all the deployment steps using DP&A for the Cloud. Deploying a typical standard Java Enterprise Edition (JEE) application in a virtual environment could potentially require hundreds, if not thousands, of parameter specifications (for which default values are often used for simplicity). DP&A for the Cloud allows all such parameters to be represented and controlled in a single solution topology in RSA. TSAM and RAFW automation generated from such a topology can complete an end-to-end deployment on a VMware ESX server in about 45 minutes, starting only from a base Linux or AIX virtual image.
The DP&A for the Cloud integration asset lets Solution Architects use a solution modeling tool, Rational Software Architect (RSA), to generate workflows for multiple deployment engines (Tivoli Service Automation Manager (TSAM) and Rational Automation Framework for WebSphere (RAFW)) for end-to-end solution deployment. The generated workflows include, in the same flow, TSAM steps for provisioning VMware virtual machines and/or System p LPARs and installing middleware, and RAFW steps for configuring middleware and installing applications.
Deployment Planning and Automation for Cloud has had its first release (v2.0.0) and is available for use.
An article is also available that describes how DP&A for Cloud helps minimize the time needed to deploy composite applications.
Here is a good example of a real-world scenario in which customers use this tool:
An architect designs a topology for their enterprise applications, which could be single-tier, dual-tier or multi-tier, composed of application servers (such as WebSphere) and database servers (such as DB2, SQL Server or Oracle). The architect can also specify topology-specific characteristics in the design. The created topology is then exported, producing two components: a Cloud Service Archive representing the topology, and automated steps for RAFW to deploy the enterprise applications.
For example: applications will be deployed on a Linux server with English localization, in a DMZ network, with installation type “secure”. The Cloud deployment tool (TSAM) then finds a VMware cluster available at deployment time to deploy the servers with the characteristics mentioned above.
An EAR for the applications can also be designed and called for deployment on the topologies created with the designer tool (RSA).
Furthermore, deployed patterns can be analysed for their performance. This can help in updating the topology with improvements and deploying it back onto the Cloud environment.
For example: a topology with a certain number of web servers and database servers is deployed and analysed for its performance. The team finds out that there is a delay in response for one of the applications. For better load balancing, the architect adds another web server to the topology and exports it as a Cloud Service Archive.
This updated topology is now available for quick deployment, and it can also easily be transferred to other environments for deployment.
You’re probably wondering what the classic cue for a film take has to do with DevOps. Well, in fact there are many similarities between it and continuous software delivery. But before we go into movie making, let’s set the stage. The emergence of big data, cloud and mobility to address today's business challenges is driving a proliferation of new applications on top of the core business applications from vendors such as SAP, Microsoft and Oracle. As a consequence, IT environments are becoming much more complex, more diverse, delivered across multiple form factors … and they are becoming even more interconnected and interdependent.
Now, DevOps plays a critical role in addressing the execution gap between the innovation seekers and those delivering the application innovation. It’s imperative to resolve this gap when you consider that nearly 50% of applications are rolled back after they are released to production. This problem is further exacerbated by the fact that minor code changes can take several weeks to be released to customers.
The good news is that IBM has developed a blueprint for continuous delivery of software innovation that helps resolve the execution gap. This is achieved by expanding the traditional notion of DevOps to unite customers, business owners and IT teams (development and operations) in a continuous feedback loop that balances quality, cost and speed. The result is higher productivity, quicker time to value and happier customers.
Looking under the DevOps covers you’ll see that lights are critical for continuous software delivery. These lights enable the developer to determine what is needed on the set prior to application deployment – essentially establishing the underlying environment.
With Continuous Release & Deployment, developers can request an application environment from the service catalog and provision and configure the servers and middleware to automate deployments and manage releases for test and production.
So the lights are turned on but you now need a camera to see the entire stage and zoom in on certain aspects of the set. The problem is that not all cameras are created equally and many don’t have the right lenses to provide both a micro and macro view of the environment. Similarly, many organizations today are managing their mission-critical applications in a fragmented manner with highly customized management solutions which lack automation and visibility across the complex landscape. The impact of a fragmented, incomplete service management strategy is staggering. Organizations are unable to assure service levels which leads to significant business disruptions that manifest themselves in multiple ways including lost revenue, customer dissatisfaction, and compliance/regulatory issues.
By driving visibility across the stage, organizations are able to truly transform their business and provide Continuous Monitoring across their entire infrastructure. Developer and IT operations teams are able to better understand the performance and availability of their applications and early feedback lowers cost of errors and change, and helps steer projects toward successful completion.
The lights are shining, the camera is turned on, and the director yells out action! Scene 1 is taking place, but the director notices that the lead actor just screwed up one word of his first line. Or perhaps one of the stage props fell during the shoot. No problem: the prop is reset, the actor gets his lines right, and take 2 is successful.
In the DevOps domain, Continuous Feedback and Optimization is embedded into the structure so that problems are resolved fast and customer expectations are met. Collaborative incident management enables operations teams to check the status of defects affecting production, and to instantly notify development teams of problems found in production for faster resolution.
DevOps enables organizations to seize new market opportunities by quickly and efficiently deploying software innovation. This is achieved by embedding lights, camera and action into the architecture, i.e. continuous deployment, monitoring and optimization. This continuous feedback loop helps drive quicker time to value for software and higher quality, which in turn translates to improved customer satisfaction.
Modern Cloud infrastructures are built leveraging thousands of highly distributed servers, used to provide services directly to customers over the Internet. The service provider has two extremely important objectives which, unfortunately, are to some degree in conflict: a) ensure continuous availability of the Cloud service, and b) contain the cost of the infrastructure and administration (CAPEX and OPEX).
There are several factors that have an impact on the availability of services, mostly related to infrastructure failures. Failures are not only related to unrecoverable hardware outages, but also to recoverable OS or middleware failures.
Not so long ago, the most common approach to high availability was to assume one could deploy infrastructures with the highest Mean Time To Failure (MTTF) possible, which required expensive systems and assumed the possibility to write error-safe software applications. It was also assumed that some degree of down-time was acceptable, with vendors boasting of the number of 9's that they could support (e.g. 99.999% availability). In today's always-on Internet, any downtime of major services becomes headline news. The traditional approach is no longer applicable, and a new approach has to be considered.
Given the requirement to reduce infrastructure costs, service providers are using commodity hardware. Given also the requirement to reduce operational costs, hardware failures are commonly dealt with by directly replacing the failed component rather than manual debugging and recovery by skilled (and expensive) administrators. Thus, to maintain the objective of continuous availability of the service, the Cloud system must be built in order to expect failure of the underlying infrastructure, and not only for temporary periods but it must assume that components will disappear forever. This cannot be limited to only hardware components, as no matter how well a software element is tested, unexpected edge conditions will appear at some point-in-time. So, to guarantee continuous availability, a Cloud solution must also expect its own components to fail too.
Given that we are forced to expect failure, the high MTTF approach is no longer valid, and instead we have to increase availability by flipping the approach to minimizing Mean Time To Recovery (MTTR). The quicker the system can recover from failure, the higher the availability of the service will be. Given however that even a tiny percentage of downtime is no longer acceptable, we also need a means to maintain service availability during the recovery process. One way of doing this is through providing redundancy of all critical services within the Cloud solution.
SmartCloud Provisioning is designed according to Recovery-Oriented Computing (ROC) principles: it is based on a highly distributed, redundant and robust infrastructure, with near-zero downtime and automated recovery across heterogeneous platforms, and it does not require expensive systems but can run on relatively low-cost commodity infrastructure.
The key factors that allow SmartCloud Provisioning to be a low-touch and robust cloud infrastructure are the following:
- The infrastructure is as stateless as possible: this avoids issues related to single points of failure.
- Management agents are deployed on the physical nodes of the infrastructure (compute nodes and storage nodes) and are connected in a peer-to-peer network to form a self-monitoring and self-managing infrastructure.
- Core services are redundant, being deployed in clusters to tolerate individual faults.
- Master images are replicated in multiple copies across the storage nodes in the storage cluster; this tolerates hardware failures of the storage nodes in the cluster as well as network failures when accessing one copy of the image.
- Hypervisor (compute) nodes are deployed via a stateless boot, so that it becomes easy to re-deploy a failing hypervisor by simply rebooting it and getting a fresh new copy of the hypervisor image. This also allows easy deployment of new nodes if needed, to augment the capacity of the infrastructure.
Let's consider some typical failure scenarios that can happen in a real environment, and let's see how SmartCloud Provisioning is designed to tolerate them and react appropriately.
The first example is related to the management agents that are used by SmartCloud Provisioning to perform the standard provisioning operations.
Management agents are deployed on both the compute nodes and the storage nodes and are organized in dynamic hierarchies, where a leader (manager) is dynamically elected. The leader is just the entry point for distributing requests across the infrastructure and the coordinator of any operation, but this role does not imply any special information being associated with the agent itself (stateless infrastructure): any agent can be a leader.
All the agents have a watch-dog mechanism that is used to prevent, detect and correct failures; they also monitor each other in their neighborhood and can start simple actions to fix other agents' issues.
So, if an agent fails, the watch-dog mechanism tries to restart it. If the watch-dog is not able to restart the agent, its neighbours try some simple actions to restart it. If the agent cannot be restarted, the system keeps working without that node, thanks to the redundant infrastructure.
If the failing agent was a leader and cannot be restarted, the remaining agents re-elect their leader dynamically, without losing any information.
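To make the recovery ladder concrete, here is a purely illustrative Python sketch of the escalation logic described above. This is not SmartCloud Provisioning code; all names are invented for the example.

# Illustrative sketch (not SmartCloud Provisioning code) of the recovery
# ladder: local watch-dog restart, neighbor-assisted restart, exclusion
# from the cluster, and stateless leader re-election.
class Agent(object):
    def __init__(self, name, is_leader=False):
        self.name, self.is_leader = name, is_leader

    def try_restart(self):
        # Stand-in for a real restart attempt; always fails in this demo.
        return False

def recover(failed, agents):
    if failed.try_restart():              # step 1: local watch-dog restart
        return
    for neighbor in agents:               # step 2: neighbors try simple fixes
        if neighbor is not failed and failed.try_restart():
            return
    agents.remove(failed)                 # step 3: keep working without it
    if failed.is_leader and agents:       # step 4: any agent can lead
        agents[0].is_leader = True
        print 'New leader elected: %s' % agents[0].name

agents = [Agent('node1', is_leader=True), Agent('node2'), Agent('node3')]
recover(agents[0], agents)                # prints: New leader elected: node2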
Another example is related to failures either in a storage node or in a compute node.
If a storage node fails, thanks to the redundant deployment and to the multiple copies of the same image available in the storage cluster, the deployment of VMs can continue without issues, and the leader agent will try to restart the failing node.
If a compute node fails, the leader detects the failure and stops sending requests to that node. Moreover, it tries to restart the node, forcing a fresh copy of the compute node image to be re-deployed via PXE boot.
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
IBM SmartCloud Provisioning (previously known as IBM Service Agility Accelerator for Cloud) fully embraces the transparent development philosophy.
Starting from today, you can join our open beta program. This program is intended to raise awareness of IBM SmartCloud Provisioning with the widest possible audience, and to provide a feedback mechanism that lets you tell us what you like about the product and what we could improve.
The code is downloadable from https://www14.software.ibm.com/iwm/web/cc/earlyprograms/tivoli/P2044/index.shtml
Due to the open nature of this beta program, the code is time-bombed: you can use it until December 31st, 2011.
You can discuss issues related to the code drop in this forum: http://www.ibm.com/developerworks/forums/forum.jspa?forumID=2673
There is a brand new demo for IBM SmartCloud Provisioning 1.2.
It is launchpad-based, allowing you to dive into the various capabilities individually with a short and quick overview.
It covers the main IBM SmartCloud Provisioning capabilities:
- fast virtual image provisioning
- easily extending your cloud infrastructure
- fault tolerance
- low touch
- image management:
- controlling image sprawl
- drift analysis
- image search
The new VMware Additional Disk Extension gives the customer the ability to:
* Map one or more VMware datastores to a TSAM Cloud Storage Pool for provisioning additional disks
* Associate those Storage Pools with one or more customers
* Apply a per customer quota at a Storage Pool level
* Control whether disks are thin provisioned at a Storage Pool level
* Create and automatically format/mount extra disks (Windows drives or Linux mount points) from a Cloud Storage Pool when provisioning a new virtual machine
* Back up and restore server images, including any additional disks
The extension was released in October and is available free of charge for download in the IBM Service Management Library.
Download VMware Additional Disk Extension here - http://www.ibm.com/software/ismlibrary?NavCode=1TW10TS0B
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple—you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations who are still trying to leverage the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
- Rapidly scalable deployment designed to meet business growth
- Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
- Reduced complexity through ease of use and improved time to value
- Reduced IT labor resources with self-service requesting and highly automated operations
- Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we’ve seen customers get a cloud up and running in just hours—realizing immediate time to value. It’s fast—administrators have been able to go from bare metal to ready-for-work in under five minutes, start a single VM and load an OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
With the new release of IBM SmartCloud Provisioning this week, you can try and see firsthand the potential of this breakthrough technology to accelerate your journey to cloud. And if you want a preview of what’s in development, you can join our Open Beta program for access to beta-level code.