Rethink IT. Reinvent Business
Pino 100000UGHN Tags:  servicemanagementconnect service-management virtualization cloud ism user-data 4,871 Views
I really liked the post “Rapid deployments with IBM SmartCloud Provisioning” that explains how simple and fast it is to deploy instances using SmartCloud Provisioning.
IBM SmartCloud Provisioning provides, both in the launch instance panel and through the CLI, a “user_data” text field that can be used for this purpose.
It is inspired by the Amazon EC2 instance metadata; here you can find an interesting article on it: http://alestic.com/2009/06/ec2-user-data-scripts
The “user_data” field is a free text field so for example it can contain:
The launched instance can easily retrieve the user data by invoking the predefined URL http://169.254.169.254/latest/user-data and processing it according to its needs.
This can be achieved by exploiting the current integration between IBM SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in IBM SmartCloud Provisioning version 1.2: you create a new bundle, the User-Data consumer bundle, containing a script that retrieves the “user-data” and processes it as needed.
An interesting scenario is the capability of passing directly one or more scripts to be invoked at deployment time to have a really dynamic configuration. In this way, a new image can be configured/customized at deployment time.
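As a sketch of what such a consumer script could do, here is a minimal Python version. The helper names and the shebang-based script detection are illustrative assumptions, not part of the product; only the link-local metadata URL comes from the text above.

```python
import os
import stat
import subprocess
import tempfile
import urllib.request

# Well-known link-local metadata address, as in EC2-style clouds.
METADATA_URL = "http://169.254.169.254/latest/user-data"

def fetch_user_data(url=METADATA_URL, timeout=5):
    """Return the raw user-data text, or None if the URL is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except OSError:
        return None

def is_script(user_data):
    """Treat user data starting with a shebang line as an executable script."""
    return bool(user_data) and user_data.startswith("#!")

def run_user_data(user_data):
    """Write the script to a temporary file and execute it at first boot."""
    with tempfile.NamedTemporaryFile("w", delete=False) as f:
        f.write(user_data)
        path = f.name
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    return subprocess.run([path]).returncode
```

A first-boot service would call `fetch_user_data()` and, if `is_script()` says so, hand the result to `run_user_data()`; anything else could be parsed as plain configuration text.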
If you want more information on user-data capabilities and examples, take a look at the Ubuntu cloud-init component described here: https://help.ubuntu.com/community/CloudInit
For further information about IBM SmartCloud Provisioning and Image Construction and Composition Tool see IBM SmartCloud Provisioning Information Center.
The open beta program for the upcoming IBM SmartCloud Provisioning release has started:
marcese 11000065AG Tags:  low-touch pxe cloud smartcloud isaac configuration installation image 6,274 Views
In my previous blog I talked about the speed of deployment of virtual machines when using IBM Smart Cloud Provisioning. I showed that virtual machines can be started and configured in a matter of seconds, and I described in some detail how this is achieved in terms of the internal infrastructure of the product.
Let's consider, for example, how you can manage one of the core elements of the solution: the compute node (that is, the node where the virtual machines are hosted and run).
Several variations from the basic setup described above are possible depending on the actual topology of the environment and on the kind of nodes to be installed.
If you're interested in trying IBM Smart Cloud Provisioning, you can download a demo version from here:
A common adoption pattern for cloud computing is desktops. It's really straightforward because, in general, each company has standardized desktops: only specific versions of the operating system are supported, only specific flavours, only some applications are allowed, and typically everything is managed by the IT team.
If we think of the benefits of adopting a desktop cloud, some of them stand out immediately: the IT team can really enforce standardization (for example, you can select as desktop only one of the proposed flavours); hardware maintenance becomes far easier thanks to consolidation; and old, outdated PCs can be reused simply as connectors to the desktop, gaining new life. From the desktop user's point of view, there is no longer any need to carry company assets around: it is healthier (no more heavy hardware to take home or travel with) and safer (the data stays in the cloud).
But this is nothing new; desktop cloud solutions are already on the market, so let's see if IBM SmartCloud Provisioning can bring additional benefits to the desktop world.
What if we start dealing with non-persistent desktop images?
Non-persistent images are the ones that disappear once you shut them down. You might be asking yourself, “Well, that's not so clever. What about my data? Is it lost?”. This is actually a very good point, and it is the keystone of the benefits that come with adopting non-persistent images.
The idea is that all user data gets stored on external (persistent) volumes that can be attached to or detached from the non-persistent image on demand. If we now apply this technology to the desktop world, it sheds an interesting new light on some typical and painful scenarios:
In a traditional infrastructure, when the operating system goes, or is getting close to going, out of maintenance, a massive migration campaign starts: all desktops need to be migrated. Statistically, the migration does not go smoothly for all users, and some of them will be stuck, even for days. With non-persistent images you can easily overcome this: create a new master image with the new operating system (or upgrade a single instance of the image), run your test campaign to make sure everything keeps working, deploy it in as many instances as there are desktops to upgrade, attach the volumes with the user data to the new images, and get rid of the old images. Thanks to the incredible deployment speed of IBM SmartCloud Provisioning, you'll have a brand new set of desktops in minutes.
Analogously, we can think about patching the operating system or software running on the desktop: the key idea is that you always patch either the operating system or a specific piece of software, never the user data, which keeps living on separate volumes.
If we think about the compliance aspect, remember that the user cannot save any change he makes to the boot disk of the image, since nothing is ever stored on that disk. He can only write his own data to the additional volumes. This should discourage him from even trying to install new software or edit the operating system configuration, since everything would be lost at the first shutdown.
I know that in your company you may have different configuration flavours of the same operating system, depending on the department for which the desktop is tailored. For example, you may need different firewall configurations according to the security level the end user is entitled to. With IBM SmartCloud Provisioning, you can leverage the User Data field at deployment time to specify these special configurations. This need not even be shown to the end user: you can mask it by enlarging the list of offerings with the specific configurations. Under the covers, the instance is launched with the proper parameters: no master image duplication, no manual configuration; everything is automated and standardized.
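A deployment-time configuration selector of this kind might be sketched as follows. The `key=value` format, the profile names, and the rule sets are all assumptions for illustration, not an IBM SmartCloud Provisioning convention:

```python
# Hypothetical: parse "key=value" lines from the instance's user_data
# and map a "profile" key to a department-specific firewall setup.
FIREWALL_PROFILES = {  # assumed profile names and rules
    "finance": ["deny all", "allow 443"],
    "engineering": ["allow 22", "allow 443"],
}

def parse_user_data(text):
    """Split user_data into a dict of key=value pairs, skipping blanks/comments."""
    pairs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        pairs[key.strip()] = value.strip()
    return pairs

def firewall_rules(user_data_text):
    """Pick the firewall rule set named in user_data (default: engineering)."""
    profile = parse_user_data(user_data_text).get("profile", "engineering")
    return FIREWALL_PROFILES.get(profile, [])
```

At first boot, the same script that reads the user-data URL would apply the returned rules, so one master image serves every department.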
What about optimizing resources? Desktops by their nature all have the same operating system and configuration (at least per department), and usually they also come with the same applications installed on top. With non-persistent images you avoid saving lots of duplicated, useless copies of the same operating systems and software on disk. And since a desktop's resources (cores and memory) are released when it is shut down, you can better optimize your hardware by using those resources for other applications or users (they may even be server applications, or desktops for users in a different timezone).
New employees coming on board? A project outsourced to an external work-force?
You will want these people to be productive immediately. With IBM SmartCloud Provisioning, their desktops will be up and running in seconds.
See IBM SmartCloud Provisioning working in a recorded demo
The fix is downloadable from Fix Central and it is identified as 1.2.0-TIV-ISCP-IF0001
It addresses the following problems:
After the iFix installation, the IHC component will be upgraded from version 0.20.2 to 1.0.0.
For further details, read the readme file associated with the interim fix.
We're pleased to make Service Health for IBM SmartCloud Provisioning available as a beta. As this is a beta, we welcome any and all feedback.
Service Health (Beta) for IBM® SmartCloud Provisioning provides prebuilt integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring. This solution allows you to easily monitor your IBM SmartCloud Provisioning infrastructure to identify and react to issues in your environment.
This solution is available via the IBM Integrated Service Management Library (ISML). You can find it here -> Service Health for IBM SmartCloud Provisioning. Please use the "Comment or Review" link on that page to post feedback; you may also use the "Contact Provider" link.
There is a brand new demo for IBM SmartCloud Provisioning 1.2.
It is launchpad based, allowing you to dive into the various capabilities individually with a short, quick overview.
It covers the main IBM SmartCloud Provisioning capabilities:
Enjoy it by clicking here
AHUP_Gianluca_Bernardini 120000AHUP Tags:  provisioning cloud bot decentralized smartcloud scale-out hslt p2p 6,447 Views
SmartCloud Provisioning is designed to minimize the use of a centralized “command and control” approach, in favor of scale out management, where endpoints can participate in management activities and do not depend on a single configuration management database.
This allows SmartCloud Provisioning to handle multiple provisioning tasks in parallel, across an unlimited number of servers.
Cloud users can request deployments of virtual machines and have access to the provisioned systems in very few seconds, thanks to the parallel and distributed processing that happens transparently and under the covers.
Let’s drill down into the details about this distributed management approach.
SmartCloud Provisioning internally uses a peer to peer (P2P) messaging infrastructure to pass provisioning and management messages between agents, which contribute to the decentralized control.
Agents are installed on the compute nodes (i.e. the hypervisors) as well as on the storage nodes, where images and volumes reside.
The P2P connections between agents not only allow self-monitoring of their health in order to implement a low-touch management infrastructure, but also allow orchestrating the communications to achieve an effective load distribution and decentralized management of the requests performed by cloud users.
The P2P communication overlay is backed by a distributed lock service, which is based on ZooKeeper.
ZooKeeper is a distributed, open-source coordination service for distributed applications, which exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming. It is designed to be easy to program, and uses a data model styled after the familiar directory tree structure of file systems.
Like the distributed processes it coordinates, ZooKeeper itself is intended to be replicated over a set of servers that must all know about each other. They maintain an in-memory image of state, along with transaction logs and snapshots in a persistent store.
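The idea behind ZooKeeper's classic lock recipe (each client creates a sequential node; the lowest-numbered live node holds the lock) can be mimicked with a toy in-memory model. This is a sketch of the recipe's logic, not the real ZooKeeper client API:

```python
class MiniLockService:
    """Toy in-memory stand-in for ZooKeeper's sequential-znode lock recipe."""

    def __init__(self):
        self.seq = 0      # monotonically increasing sequence counter
        self.nodes = []   # currently live lock nodes

    def create_sequential(self, prefix="lock-"):
        """Create a zero-padded sequential node, as ZooKeeper would."""
        name = f"{prefix}{self.seq:010d}"
        self.seq += 1
        self.nodes.append(name)
        return name

    def holds_lock(self, name):
        """The client owning the lowest-numbered live node holds the lock."""
        return min(self.nodes) == name

    def release(self, name):
        """Deleting the node passes the lock to the next lowest node."""
        self.nodes.remove(name)
```

In the real service, a client whose node is not the lowest watches its predecessor and is notified when that node disappears, so the lock is handed over without polling.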
SmartCloud Provisioning agents connect to a single ZooKeeper server. Each agent maintains a TCP connection with the ZooKeeper server, through which it sends requests, gets responses, gets watch events, and sends heartbeats. If the TCP connection to the server breaks, the agent connects to a different server.
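The reconnect behaviour can be sketched as follows. The ensemble host names are made up, and this is a simplification of what a real ZooKeeper client does:

```python
import socket

# Hypothetical three-server ensemble.
ENSEMBLE = ["zk1.example.com:2181", "zk2.example.com:2181", "zk3.example.com:2181"]

def reconnect_order(ensemble, broken):
    """After the connection to `broken` drops, try the other servers first."""
    others = [s for s in ensemble if s != broken]
    return others + [broken]

def connect_first_reachable(ensemble, timeout=3.0):
    """Return a TCP connection to the first reachable server, or None."""
    for server in ensemble:
        host, _, port = server.partition(":")
        try:
            return socket.create_connection((host, int(port)), timeout=timeout)
        except OSError:
            continue  # server down: fall through to the next one
    return None
```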
When a deployment request is received by SmartCloud Provisioning, the request is processed by the Web Services layer, passed to the management infrastructure, and managed by the agents and the ZooKeeper services.
The following steps describe the internal communications in more detail, as depicted in figure 1 below.
This processing is transparent to the end user, who simply sees the deployment request served in a few seconds, without having to worry about any of the steps above.
It enables high levels of parallelism and decentralized management, as well as scale-out capabilities that can be extended simply by increasing the number of servers.
If you're interested in trying the SmartCloud Provisioning distributed management capabilities, you can download a trial version from the following link:
Antonio_Di_Cocco 060001977Q Tags:  smartcloud cloud image-library imag-portability 6,782 Views
IBM SmartCloud Provisioning is an infr
Jacques.Fontignie 270000UHFU Tags:  provenance versioning image-library virtualization cloud image library 5,373 Views
Cloud systems have made a huge improvement in terms of tracking and performance. In the “Rapid deployments with IBM Smart Cloud Provisioning” blog, we have shown that virtual machines or appliances can be started and configured in a matter of seconds. It has never been so easy to create a virtual machine (VM), install software, and configure middleware. However, with great power comes great responsibility… it is now possible to create a VM, but what is its lifecycle? Will it be destroyed after being used, is the starting image deprecated, or is there a better starting image given the needed configuration and software install requirements?
IBM SmartCloud Provisioning provides a component called IBM Virtual Image Library (also known as IVIL) to solve common issues that arise in large scale virtualized environments:
IVIL can be integrated simply into your virtualization infrastructure; the only requirement to start using IVIL is the set of credentials needed to contact the virtualization infrastructure. No changes to your current virtualization environment are required. After credentials are provided, IVIL can automatically determine the provenance, state, and content of each virtual image or virtual machine in the virtualization environment. After the environment is registered, you will have a clear picture of your various images, their content, history, and similarity with one another. More importantly, as soon as IVIL is used in the infrastructure, it can be used to move images from one hypervisor vendor to another and keep track of these migrations. To summarize, IVIL not only keeps track of the changes to an image on one hypervisor but continues to do so when images are in a heterogeneous environment.
A common way to track the contents and versioning of images is a naming convention: for example, a name such as RHEL_6.1_WebSphere7.1_v2.1 implies the image is Red Hat Linux 6.1 with WebSphere 7.1 installed, and that this is version 2.1 of this image. This approach is feasible with a small number of images but becomes cumbersome and confusing with anything but small examples. Basic information that such names typically attempt to convey includes:
Using an image naming convention can work in some cases and provide some of the needed information but it does not scale beyond a small number of simple images. To solve this, IVIL provides versioning and provenance control to understand where an image comes from:
What is provenance? Simply put, provenance tracks the history of an image as it evolves over time in the virtual environment. It tracks how the bits that make up the image came to be – through IVIL checkout operations, image clone operations, image copy operations, and so on. It is used to understand the lineage of an image from the perspective of the virtual system, which might or might not match how the user of IVIL views the image.
For example, let’s assume that you have an image called “A”. If you decide to start this image on multiple instances of IBM SmartCloud Provisioning, or if you decide to clone this image, possibly multiple times, then IVIL will keep track of the relation between all the created images and instances. At any time, if a security flaw is found in A, then you can infer that the associated images and instances are likely affected as well. IVIL provides this functionality not only for a single virtual environment, but across heterogeneous virtual environments as well.
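The kind of query described above can be sketched with a toy provenance graph, where each image or instance records its single provenance parent. The names and the dictionary structure are illustrative, not the IVIL data model:

```python
# Toy provenance graph: child -> provenance parent (clone, copy, checkout…).
# None marks a root image with no recorded parent.
PROVENANCE = {
    "A-clone-1": "A",
    "A-clone-2": "A",
    "A-clone-2a": "A-clone-2",
    "B": None,
}

def affected_by(flawed, graph):
    """Return every image whose provenance chain leads back to `flawed`."""
    hits = set()
    for image in graph:
        node = image
        while node is not None:
            if node == flawed:
                hits.add(image)
                break
            node = graph.get(node)
        # reaching None means the chain ended without touching `flawed`
    return hits
```

With this graph, a flaw found in “A” flags both direct clones and the clone of a clone, which is exactly the inference the paragraph above describes.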
What is versioning? Versioning is the logical, user-defined lineage of an image or virtual appliance; it is the way a user would think of versioning his or her image functionality, for example, “this is version 2 of my AccountsPayableService virtual image”. When an image is available with a particular application version, the underlying OS and libraries are often not important; only the application is. Is it important to know its template? Not necessarily; often only the information about the OS is relevant. However, it is good to know the application version, and whether a newer version is available for this image or a new image has been released with the latest security patches. This is the versioning system in IVIL: it helps you understand whether there are other versions of the application in the infrastructure, and whether some applications contain a patch or not.
To summarize, provenance is oriented to infrastructure administration, whereas versioning is more oriented towards applications and workloads.
For example, let’s assume that we want to provide version 1.0 of software S as an image. By default, users can decide to use software S and launch any instance of image A. At a certain point, version 1.0 is deprecated and we must upgrade software S to version 1.1. Unfortunately, the OS distribution must be upgraded too. A solution is to reinstall the OS from scratch and install S version 1.1 on it; this new image will be called B. These images do not share any lineage from a provenance perspective; however, their content has a logical lineage to the user: image A is the parent of image B from a versioning perspective.
It is important to understand that an image can have only one provenance parent but can have multiple version parents. The second claim makes sense because an image may have multiple applications installed, and thus each one may be associated with its own logical version lineage.
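That constraint can be captured in a small data-model sketch: at most one provenance parent, possibly several version parents. The record shape and field names are assumptions, not IVIL's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ImageRecord:
    """Sketch of an image record: one provenance parent, many version parents."""
    name: str
    provenance_parent: Optional[str] = None          # at most one
    version_parents: List[str] = field(default_factory=list)  # one per tracked application

# Image B from the example above: rebuilt from scratch (no provenance
# parent), yet the logical successor of A from a versioning perspective.
a = ImageRecord("A")
b = ImageRecord("B", provenance_parent=None, version_parents=["A"])
```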
This concludes the introduction of the Virtual Image Library component in IBM SmartCloud Provisioning. Next time, I will introduce the concept of similarity between images and the power it provides in terms of debugging, infrastructure consolidation, licensing costs, and more.
I really liked the post Rapid deployments with IBM Smart Cloud Provisioning, which explains how simple and fast it is to deploy instances using SmartCloud Provisioning.
But once the instances are deployed the next questions are:
The solution is to integrate SmartCloud Provisioning with Tivoli Endpoint Manager (TEM), so that all the running instances are connected to the TEM Server and managed according to the configured security and corporate standards.
This can be achieved by exploiting the current integration between SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in SmartCloud Provisioning version 1.2, and performing the following steps:
After that, when the extended image is launched, the TEM agent automatically starts and connects to the TEM Server without requiring any user action.
Then, from the TEM console, you will be able to see and manage it, performing actions and/or downloading fixlets.
This is just the basic integration; more advanced scenarios can be implemented, such as exploiting the OVF parameters (as described in the topic Customizing virtual images with IBM SmartCloud Provisioning) to configure and group the TEM Agents. But those will be described in my next blogs!
For further information on IBM SmartCloud Provisioning and Image Construction and Composition Tool see IBM SmartCloud Provisioning Infocenter
KKBlue 270000TFU5 Tags:  virtual-infrastructure virtualization cloud-monitoring cloud 5,466 Views
As customers consolidate and virtualize application workloads along their journey toward Cloud, the cost savings that they had envisioned often prove elusive. True efficiency comes from the ability to right-size both the environment and the virtual workloads - in response to actual performance data, rather than theoretical estimates – in order to create an optimized Cloud infrastructure that runs densely enough to provide true consolidation while maintaining application service levels and room for expansion. The migration to a Cloud infrastructure, where the physical resources that we're accustomed to monitoring have been "abstracted" into pools of virtual resources, presents us with a visibility problem. It's more difficult to tweak the knobs and turn the dials to make an individual server respond to our management needs. More importantly, any changes we make at the Cloud infrastructure level have the potential to dramatically affect other workloads and services.
Join us on February 16, 2012 for Simplify Cloud Management with IBM SmartCloud Monitoring, where Ben Stern will demonstrate how our latest infrastructure management offering can help a Cloud or virtualization administrator overcome those visibility hurdles, leveraging infrastructure monitoring, health dashboards, performance and capacity analytics, and policy-driven optimization of workloads and their placement in the Cloud. Most customers want a Cloud monitoring product that can be plugged into their existing data center monitoring toolset, as part of an enterprise-proven, heterogeneous solution, providing continuity of historical data and preservation of skills. You'll hear how SmartCloud Monitoring has descended from the same IBM Tivoli Monitoring DNA running in the data centers of the world's largest corporations, and quickly discover that you already know more about SmartCloud Monitoring than you realized.
Ben Stern has spent over 20 years working in the IT industry in a variety of management and technical roles within the software development organization. Prior to his current role, he was the lead for the Tivoli Service Availability and Performance Management Best Practices team. In that role, he helped define best practices for the Tivoli portfolio while working with hundreds of customers around the world. In his current role, he is focusing on Tivoli's virtualization and cloud solutions.
Link to Register
Select the session that fits your schedule.
February 16, 2012, 11:00 AM to Noon EST US and Canada (GMT -05:00)
February 16, 2012, 6:00 PM to 7:00 PM EST US and Canada (GMT -05:00)
I've been impressed by the speed of provisioning a set of virtual machines in just a few tens of seconds using IBM Smart Cloud Provisioning. In most cases you can get a running virtual machine in less than one minute.
The Smart Cloud Provisioning technology has been devised and particularly optimized for managing the following cloud infrastructure scenarios:
Many other workloads can be deployed and easily automated on top of Smart Cloud Provisioning. For example, traditional stateful applications can easily be deployed for simple HA solutions. However, you get the maximum performance from Smart Cloud Provisioning when operating in the context of the above scenarios.
To achieve such high performance, Smart Cloud Provisioning has been designed around an optimized virtualization infrastructure based on OS streaming: there is no need to copy large image files over the network when provisioning.
Image copying is the single biggest bottleneck in VM provisioning today, in terms of CPU, memory, I/O, and bandwidth usage. In traditional cloud provisioning approaches, all of this is pure overhead: nobody builds a cloud in order to provision systems; provisioning is merely what is required to have systems on which the business workload is deployed, and any overhead is in conflict with the business workload.
The key element of this infrastructure is the so-called ephemeral instance: a virtual machine that has no persistent state. Once it is terminated, all the data associated with it is deleted as well. Ephemeral instances are clones of a master image, and each clone has a primary virtual disk which is ephemeral: when the instance goes, so does its ephemeral storage (mechanisms exist in Smart Cloud Provisioning to provide persistence, if needed by some scenarios).
When creating a new instance, since master images are read-only resources replicated across the storage cluster, Smart Cloud Provisioning uses Copy-on-Write (CoW) technology and the iSCSI protocol to stream them, avoiding expensive copying. Each iSCSI session results in a valid block device being created in the host OS. Of course, each guest OS (corresponding to a given instance) requires a writable block device representing the main disk of the system.
All supported hypervisors have a storage virtualization layer that includes Copy-on-Write technology. For example, KVM's qcow2 files can be configured to implement CoW by referencing a backing storage device, and VMware has redo files, which effectively do the same thing. In each case, the hypervisor can natively use the CoW file referencing the iSCSI block device to expose a virtual block device to the virtual machine. Depending on the hypervisor and guest OS, this device will show up as something like /dev/sda or c:\. The CoW files are stored locally on the hypervisor's file system. When the instance is terminated, the Smart Cloud Provisioning agent simply discards the CoW file and checks whether any other instances are using the same iSCSI device. If the device is no longer in use, the agent also tears down the iSCSI device.
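The read/write behaviour of a CoW disk backed by a shared master can be sketched in a few lines. This is a toy block-level model of the idea, not the qcow2 or redo-file format:

```python
class CowDisk:
    """Toy copy-on-write disk: reads fall through to a read-only backing image."""

    def __init__(self, backing):
        self.backing = backing  # shared, read-only master image blocks
        self.delta = {}         # per-instance writes (the local CoW file)

    def read(self, block):
        """Serve locally written blocks; otherwise stream from the master."""
        return self.delta.get(block, self.backing.get(block, b"\0"))

    def write(self, block, data):
        """Writes land only in the instance's delta, never in the master."""
        self.delta[block] = data

    def discard(self):
        """Terminating the instance just throws the delta away."""
        self.delta.clear()
```

Because writes never touch `backing`, any number of instances can share one master image, and terminating an instance is just dropping its delta, which is why teardown is so cheap.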
Thanks to the above infrastructure, provisioning a new virtual machine is a very fast and reliable process that can create individual systems in tens of seconds and sustain peaks of thousands of systems per hour.
If you're interested in trying the Smart Cloud Provisioning product, you can download a trial version from the following link:
IBM® Tivoli® Service Automation Manager (TSAM) 7.2.2 introduces the concept of extension, a set of TSAM software components that can implement a new IT service automation solution (known as a service definition) or add capabilities to existing service definitions.
This article (Deploy a J2EE app with TSAM extensions) presents a scenario in which the goal is to securely deploy a three-tiered enterprise application (a J2EE app) to the cloud. It demonstrates how to set up and provision extensions in TSAM as the first step toward accomplishing this task. Then it describes how to standardize the three-tiered business application and provision it using standard TSAM offerings.
The second part of the article (Manage a J2EE app with TSAM extensions) focuses on the management aspects of the J2EE app. The authors explain how to add and remove application servers as the workload of the business application changes; and how to modify the security settings and why you might need to do that.