VMware is one of the most widely used hypervisors, so it is common to have an existing cloud environment based on the vCenter hypervisor manager alongside a new cloud environment based on IBM SmartCloud Provisioning. In this scenario, end users typically want to start using the new environment while continuing to work with their existing images, which contain known software as well as all the settings required by company policies.
Performing this task manually is not easy, especially if the new IBM SmartCloud Provisioning environment is based only on the KVM hypervisor: an image format conversion is required. Moreover, IBM SmartCloud Provisioning requires some specific OS settings that may not be present on the existing images.
IBM Virtual Image Library helps overcome these time-consuming obstacles. The new 2.1 version includes a feature that helps end users understand which existing images are ready to be ported to the new cloud environment, which are not ready but can be modified automatically, and which are not compatible at all (for example, because they are based on multiple disks).
The underlying idea is based on the concept described in one of my previous blog entries, “Image portability across hypervisors”. When a cloud environment is registered with IBM Virtual Image Library, an image analysis retrieves all the needed information, so that after this initial introspection end users have a clear idea of which images can or cannot be used with IBM SmartCloud Provisioning.
When you open the image properties, a new tab shows the list of executed checks and their results, with a brief explanation of what is required for the image to be fully compatible with IBM SmartCloud Provisioning.
At this point the end user knows which images can be used directly, which require some modification, and which cannot be automatically changed to run in IBM SmartCloud Provisioning.
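The three-way classification described above can be thought of as a simple rule over per-image check results. The sketch below is purely illustrative; the check names, statuses, and data structures are hypothetical and not Virtual Image Library's actual model:

```python
# Illustrative sketch: classify images by portability check results.
# Check names and statuses are hypothetical, not the product's real model.
# A check result is "pass", "fixable" (can be remediated automatically),
# or "fail" (cannot be remediated, e.g. an image based on multiple disks).

def classify_image(checks: dict) -> str:
    """Return 'ready', 'needs-fix', or 'incompatible' for one image."""
    if any(status == "fail" for status in checks.values()):
        return "incompatible"
    if any(status == "fixable" for status in checks.values()):
        return "needs-fix"
    return "ready"

images = {
    "rhel6-base":    {"single-disk": "pass", "cloud-init": "pass"},
    "win2008-app":   {"single-disk": "pass", "cloud-init": "fixable"},
    "multi-disk-db": {"single-disk": "fail", "cloud-init": "pass"},
}

for name, checks in images.items():
    print(name, "->", classify_image(checks))
```

The point of the rule ordering is that a single non-remediable failure makes the whole image incompatible, regardless of how many other checks pass.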
The next step is to check the desired images into the IBM Virtual Image Library reference repository. From there, images can be moved to the new cloud environment as-is, modified to be High Scale Low Touch ready, or moved to the IBM SmartCloud Provisioning environment and automatically modified to be ready for deployment.
IBM Virtual Image Library also checks whether cloud-init is present on the image. If it is not, it is automatically installed and activated, making the image compatible with High Scale Low Touch. In this way the newly imported image is ready to be deployed with script packages and add-ons using the workload deployer delivered with IBM SmartCloud Provisioning.
I consider this new feature very useful for getting started with IBM SmartCloud Provisioning, because it allows end users to populate the new cloud environment easily and quickly. The key time-savers are:
· Having a clear list of already compatible images
· Automatically configuring images to be deployed through the High Scale Low Touch hypervisor manager
· Reusing existing images with a few clicks, without migrating any data
· Automatically installing cloud-init, which allows images to be used in more complex scenarios with Workload Deployer
Additional information about these topics can be found in the IBM information center pages:
IBM SmartCloud Provisioning: http://pic.dhe.ibm.com/infocenter/tivihelp/v48r1/index.jsp?topic=%2Fcom.ibm.scp.doc_2.1.0%2Fadministering%2Ft_create_images.html
Virtual Image Library: http://pic.dhe.ibm.com/infocenter/tivihelp/v48r1/topic/com.ibm.scp.doc_2.1.0/VIL/topics/checkingportability.html
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSP) or cloud service providers (CSP), and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce human resources and error from manual tasks—all with an eye to drive revenue and acquire new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify the complex and time-consuming processes of creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the full spectrum of virtualization-to-orchestration functionality helps service providers manage their environments, high-scale provisioning in particular offers a cost-effective way to leverage capacity as a business commodity—a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative costs of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand but it was very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interruption to customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
Learn more about provisioning and orchestration capabilities that are helping service providers to meet their growing business needs.
The solution Endpoint security for SmartCloud Provisioning v2.1 has been published on the IBM Integrated Service Management Library (ISML).
The purpose of Endpoint security for SmartCloud Provisioning v2.1 is to demonstrate how IBM Tivoli Endpoint Manager can be integrated with the IBM SmartCloud Provisioning Infrastructure.
Endpoint security for SmartCloud Provisioning generates the components required by IBM SmartCloud Provisioning 2.1 to automatically install IBM Tivoli Endpoint Manager agents when deploying virtual systems. This allows cloud administrators to easily maintain compliance over their virtualized network. Both IBM SmartCloud Provisioning v2.1 and IBM Tivoli Endpoint Manager v8.2 need to be available. If you are participating in the IBM SmartCloud Provisioning v2.1 beta and have IBM Tivoli Endpoint Manager, consider using Endpoint security as well.
Please join the Tivoli User Community for a live webcast with an opportunity for questions on Thursday, July 19th 2012, at 11:00.
Reserve Your Webcast Seat Now
Cloud computing has been driving increased innovation and flexibility, but this shift has also introduced new complexities in the world of IT and process automation. Multiple topics now emerge on the radar of a cloud manager, all pointing in the direction of easier management of the entire life cycle. The all-new IBM SmartCloud Workload Automation provides you with a perfect entry point to the theme of unattended workloads, a critical topic for making clouds more efficient. With the new per-job pricing, the solution is even more attractive and affordable. After setting best practices in your organization, it's now time to explore and learn the “next practices” in the historic world of batch and beyond with the new IBM SmartCloud Workload Automation. Learn More
About the Speaker:
IBM Tivoli Workload Automation – Product Manager
Giannakopoulos is the product manager for Tivoli Workload Automation, where he has world-wide responsibility, primarily on the distributed side. He has working knowledge of the development process, technical support, HR management and client handling. Click Here to visit his TUC Profile
The Official Tivoli User Community is the largest online and offline organization of Tivoli professionals in the world, home to over 160 local user communities and dozens of virtual/global groups from 29 countries, with more than 26,000 members. The TUC community offers users blogs and forums for discussion and collaboration, access to the latest whitepapers, webinars, presentations and research for users, by users, and the latest information on Tivoli products. The Tivoli User Community offers the opportunity to learn and collaborate on the latest topics and issues that matter most. Membership is complimentary. Join NOW!
There is a new white paper available on the IBM Integrated Service Management Library ( ISML ) that explains how to use Tivoli Storage Manager to back up a VMware virtual machine that was deployed by the Workload Deployer in IBM SmartCloud Provisioning version 2.1.
The white paper explains how to locate and back up the virtual machine in VMware using IBM Tivoli Storage Manager, and how to restore the virtual machine to the Workload Deployer environment.
We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.
The IT organization must be able both to deliver services more quickly to meet the demands of the business and to provide high levels of security and compliance. In the past, the delivery of services was typically the bottleneck in providing new services; now, with automated cloud and self-service delivery models, the teams responsible for change management and security can quickly become the bottleneck due to manual processes and siloed tools.
For example, organizations need the ability to patch all of their systems, both physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications and related security configurations. And there can be too many security exposures from offline and suspended VMs that haven't been patched in weeks or months.
A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.
Cloud provisioning should combine application and image provisioning for workload optimized clouds and deliver:
· Reduced costs with automated high-scale provisioning, multiple hypervisor options and hardware of choice
· Accelerated time-to-market with standardized pattern-based deployment for workload optimized cloud
· Image sprawl prevention with built-in advanced image lifecycle management capabilities
· Ease of adoption and clear roadmap to move to advanced cloud capabilities
Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:
· Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms
· Automatic assessment and “single click” remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly
· Enterprise-class scalability and security to provide proven scalability, including fine-grained authorization and access control capabilities
Explore these capabilities with the new IBM SmartCloud Patch Management.
IBM is excited to announce that the IBM Tivoli Monitoring (ITM) vNext release is making Beta versions of the product available to any and all interested customers. IBM invites you to download our Beta code and assist us by evaluating the new functionality, product improvements, and code quality of IBM Tivoli Monitoring vNext.
A new ITM Community site has been defined to provide you with all the information you need to participate with us in this exciting Beta program. In this community you can download Beta drivers, see important announcements, interact directly with product developers and planners, and provide the ITM development team your valuable opinions about our planned product enhancements. Please click here and ask to join the ITM vNext Open Beta Community.
Please contact Nathan Bullock (mailto:firstname.lastname@example.org) if you have questions about the ITM vNext Open Beta Program.
IBM SmartCloud Provisioning introduces PaaS capabilities with the possibility to create blueprints that standardize the deployment of complex tiered applications, for example a three-tier J2EE application made of an HTTP server, an application server, and a database server, each running on a different VM and possibly configured on different network segments. These blueprints are called patterns in IBM Workload Deployer terminology, which is the foundation technology of SmartCloud Provisioning. Virtual system patterns are used to define a topology and the middleware software configuration needed to meet application requirements. You can set up that configuration using familiar concepts and leveraging existing scripts, which SmartCloud Provisioning takes care of executing when the virtual machines hosting the middleware components are deployed to the cloud.
You can use any virtual image to build a virtual system pattern. However, in order to perform the aforementioned configuration steps you need to inject a so-called activation engine, which is able to execute the configuration scripts defined when creating the virtual system pattern (add-on scripts and script packages). The good news is that you do not have to do that manually: SmartCloud Provisioning provides the Image Construction and Composition Tool (ICCT), which you can use to clone and extend your basic certified image to make it “cloud ready”. Images extended that way are called intermediate images. You can drop any add-on script and script package on an intermediate image when building a pattern in the pattern editor, while you cannot for basic images. You can still add basic images as part of your virtual system pattern topology, but SmartCloud Provisioning cannot perform sophisticated configuration steps on them: these images are more suitable for IaaS deployment scenarios. In these scenarios you can still define additional network interfaces (vNICs) and attach additional disks to the virtual image instance. What you cannot achieve without extending the image is the configuration of these resources: you have to log in to the provisioned virtual machines and configure the vNICs yourself, as well as format and mount the raw disks.
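For basic images, those manual steps boil down to a handful of OS commands run inside the guest. The sketch below only assembles the command lines; the device names and IP address are made-up examples, and actually executing them requires root access on the provisioned VM:

```python
# Illustrative sketch: the guest-side commands needed to configure an extra
# vNIC and to format and mount a raw disk on a Linux VM. Device names and
# the address are made-up examples; execution requires root.

def vnic_config_cmds(iface: str, ip_cidr: str) -> list:
    """Commands to bring up an additional network interface."""
    return [
        ["ip", "addr", "add", ip_cidr, "dev", iface],
        ["ip", "link", "set", iface, "up"],
    ]

def disk_setup_cmds(device: str, mountpoint: str) -> list:
    """Commands to format a raw disk and mount it."""
    return [
        ["mkfs.ext4", device],
        ["mkdir", "-p", mountpoint],
        ["mount", device, mountpoint],
    ]

cmds = vnic_config_cmds("eth1", "192.168.1.10/24") + \
       disk_setup_cmds("/dev/vdb", "/data")
for cmd in cmds:
    print(" ".join(cmd))
    # subprocess.check_call(cmd)  # uncomment to actually run (root only)
```

An intermediate image avoids exactly this: the activation engine runs equivalent steps automatically at first boot.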
Virtual Image Library has been enhanced to discover the capabilities of a virtual image and tag it, so you can see at a glance whether a virtual image is suitable to be included in a virtual system pattern and whether it can be extended.
You can get beta versions of SmartCloud Provisioning at http://www.ibm.com/developerworks/downloads/tiv/smartcloud/index.html to familiarize yourself with the virtual image construction tools and with pattern-based deployment.
A new beta drop for IBM SmartCloud Provisioning is available.
Here is the list of key functionalities included:
- Live migration: move persistent images across compute nodes
- Evacuate deployed VMs running on a compute node through "pause migration"
- Virtual machine take-over: start managing images you have on the hypervisor even if they've not been created by SmartCloud Provisioning
- Group level administration: share images, networks, volumes, and elastic IPs among users belonging to the same group
- Deploy persistent images in HSLT cloud group
- Virtual system support for HSLT cloud group
- Deploy Windows images in a VMware cloud group
- Deploy virtual system patterns in a VMware cloud group
- Integration with Microsoft Active Directory
- Deploy images with multiple volumes
- Attach volumes to an instance using the new self service UI
- Attach elastic IP to an instance using the new self service UI
- Problem determination tool: launch just one command to collect all meaningful log data across your whole SmartCloud Provisioning environment
- Command line for Image Construction and Composition Tool: you can now integrate image extension into your development tools
- Extend Windows 2008r2 and Windows 7 images using Image Construction and Composition Tool
- Standalone installer for Virtual Image Library
- Portability checks and remediation: use Virtual Image Library to help you understand why an image cannot be checked out into SmartCloud Provisioning and help you fix it
If you would like to see more details, watch the videos published in the SmartCloud Provisioning wiki (you need to log in to developerWorks to see the videos).
If you would like to try out the new features without the effort of installing the product, join the community and play with our hosted beta.
If you would like to download the code, go here.
Two new white papers are available on the IBM Integrated Service Management Library (ISML) that explain how to use Tivoli Storage Manager to back up different areas within IBM SmartCloud Provisioning.
The first white paper provides information on how to use the Tivoli Storage Manager Backup-Archive client to back up and restore the boot volume of an IBM SmartCloud Provisioning persistent virtual machine, and how to make periodic backups of a normal volume and select and restore a specific backup version.
The white paper can be downloaded from the IBM Integrated Service Management Library (ISML) following this link -> Backing up IBM SmartCloud Provisioning's Persistent Volumes with Tivoli Storage Manager Client
The second white paper provides information on how to use the Tivoli Storage Manager Backup-Archive client to back up and restore the following components of the IBM SmartCloud Provisioning infrastructure: the Preboot Execution Environment (PXE) server, the web console configuration, and the HBase data store.
The white paper can be downloaded from the IBM Integrated Service Management Library (ISML) following this link -> Backing up IBM SmartCloud Provisioning's Infrastructure with Tivoli Storage Manager Client
Service Health for IBM SmartCloud Provisioning is now generally available on the IBM Integrated Service Management Library (ISML).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring, utilizing a custom agent, OS agents, and the ITMfVE agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to quickly identify and react to issues in your environment, such as an unresponsive compute node, high disk usage on storage nodes, or key kernel services not responding, and to minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library( ISML ) following this link -> Service Health for IBM SmartCloud Provisioning
I really liked the post “Rapid deployments with IBM SmartCloud Provisioning” that explains how simple and fast it is to deploy instances using SmartCloud Provisioning. But after the instances are deployed, the next question is:
- How can I "easily" monitor the performance and availability of the OS and applications of the launched instances?
The solution is to integrate IBM SmartCloud Provisioning with IBM Tivoli Monitoring (ITM) so that all the running instances are connected to the ITM server and managed according to the performance expectations.
This can be achieved by exploiting the current integration between IBM SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in IBM SmartCloud Provisioning version 1.2, and performing the following steps:
- Download the bundle “ICCT Bundle to deploy IBM Tivoli Monitoring Agent for Linux OS v6.X” from the Integrated Service Management Library (ISML) website
- Extend an OS base image available in IBM SmartCloud Provisioning, adding the above bundle
In this way, a new image will be available in IBM SmartCloud Provisioning with the ITM agent installed and configured to connect to the IBM Tivoli Enterprise Monitoring Server. Note that, when the extended image is launched, the ITM agent will automatically start and connect to the ITM server without requiring any user action. Then, from the ITM console you will be able to see and monitor it and take actions to address performance issues.
If you are interested in other extensions available on ISML, this is a list of available bundles you can download and use to extend a base image (search ISML for “ICCT”):
- ICCT Bundle to deploy IBM Tivoli Directory
- ICCT Bundle to deploy IBM HTTP Server 7.0
- ICCT bundle to deploy IBM WAS Update
- ICCT Bundle to deploy Apache Web Server
- ICCT Bundle to deploy IBM DB2 Server 9.7
- ICCT Bundle to deploy IBM Tivoli Monitoring Agent for WAS
- ICCT Bundle for IBM WebSphere MQ 126.96.36.199
- ICCT Bundle to deploy IBM WebSphere Application Server Network Deployment 8.0
- ICCT Bundle to deploy IBM WebSphere Application Server Community Edition 3.0
- ICCT Bundle to deploy IBM WebSphere MQ
- ICCT Bundle to deploy IBM Tivoli Monitoring Agent for DB2
- ICCT Bundle to deploy IBM Tivoli Endpoint Manager Agent
- Software bundle for IBM License Metric Tool Agent for System X platform
A presentation and demo session for IBM SmartCloud Provisioning will be held on Thursday, April 26th at 3:00 PM Central European Time (CET).
The presentation will be around architectural changes in IBM SmartCloud Provisioning.
The demo will be about registering High Scale Low Touch as cloud group in IBM SmartCloud Provisioning.
If you would like to join, using your web browser, connect to http://www.ibm.com/collaboration/meeting/join?id=LMT71CA
No password is required
In this new blog post I would like to describe a root-cause detection scenario using IBM SmartCloud Provisioning.
Given the ever-increasing number of virtual machine instances and VM images in a cloud ecosystem, it is becoming more and more important to track each virtual image's contents and configuration, mainly for standardization and compliance.
Another situation where tracking this content may also be useful is when there is the need to identify the "drift" between a deployed virtual machine and the virtual image that was used to create it, as in the scenario described below.
As soon as a virtual machine gets deployed from a virtual image, its content starts to change: the owner of that virtual machine begins using it by creating new files, using its applications, installing and uninstalling software, and so on. Because of one of these actions, it may happen that the system, or a specific application, no longer works correctly. At this point, one of the things that can be done to understand the cause of such malfunctions is to identify all the changes applied to the instance compared to the source virtual image, and to look through them trying to identify the “culprit” change in order to take appropriate repair actions. This is a typical scenario where the IBM Virtual Image Library component of IBM SmartCloud Provisioning can help, through its indexing and drift analysis capabilities.
As highlighted in a previous blog entry, the IBM Virtual Image Library is a tool that provides sophisticated image-management capabilities that customers can use to tackle the difficult issues of understanding and controlling the contents of their virtual infrastructure. Let's see how this tool can help in troubleshooting the scenario described above.
The first step is to identify the failing virtual machine among the ones available in the IBM Virtual Image Library repositories. The tool continuously indexes the configured repositories of virtual machines and images, so that its data model is always up to date with the actual content of the virtual infrastructure.
Once the virtual machine has been identified, the next step is to retrieve the virtual image from which it was deployed. This is another feature provided by the tool, which keeps track of the entire tree of relationships among the virtual images and virtual machines available in the environment.
The next step, if not already previously done, is to run an indexing operation on the virtual machine so that its content, in terms of installed applications, OS information and file-level information, can be retrieved and brought into the tool's data model.
Once the indexing is complete, the source virtual image content and the virtual machine content can be compared. A list of differences is presented to the user so that he/she can review them and decide which differences are the most likely reason for the problem.
For example, from this report the user may notice that a suspect application that shouldn't be there has been installed on the virtual machine, or that a configuration file used by the malfunctioning application has been modified. He/she can use these hints as a starting point for troubleshooting the issue and for taking repair actions.
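Conceptually, the comparison is a diff between two indexed manifests: the file list captured for the source image and the one captured for the running VM. The sketch below illustrates that idea only; the manifest format is invented for illustration, and the real tool indexes far richer data:

```python
# Illustrative sketch: compute the "drift" between a source image manifest
# and a deployed VM manifest. Each manifest maps file path -> content hash.
# The manifest format is invented for illustration.

def drift(image: dict, vm: dict) -> dict:
    return {
        "added":    sorted(set(vm) - set(image)),
        "removed":  sorted(set(image) - set(vm)),
        "modified": sorted(p for p in set(image) & set(vm)
                           if image[p] != vm[p]),
    }

image_manifest = {"/etc/httpd/httpd.conf": "a1", "/usr/bin/app": "b2"}
vm_manifest    = {"/etc/httpd/httpd.conf": "c9",  # changed config file
                  "/usr/bin/app": "b2",
                  "/opt/suspect/tool": "d4"}      # newly installed file

report = drift(image_manifest, vm_manifest)
print(report)
# A reviewer would scan "added" and "modified" for the culprit change.
```

In a troubleshooting session, the "added" and "modified" lists are the natural starting points for finding the change that broke the system.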
The following movie demonstrates, by means of an example, the capabilities described above.
What has been described here is just an example of the drift analysis capabilities of the IBM Virtual Image Library, with the intent of giving you an introduction to the advanced features of this component. If you are interested in understanding more deeply how the IBM Virtual Image Library works, and in a summary of all of its capabilities, you can take a look at the product documentation.
In a dynamic cloud environment standard concepts like IP addresses and storage volumes assume a special meaning when it comes to reserving and using them regardless of the virtual machines owned by a cloud user.
The concept of Elastic IP (EIP) and Elastic Block Storage (EBS) was initially introduced by Amazon EC2 as a way to decouple the resources assigned to a cloud user from their utilization. In other words, as a cloud user you can reserve an elastic resource and assign it to one of the VMs you own, but you can also re-assign it to a different VM whenever you need (for example, whenever you need to replace your VM with a new one).
SmartCloud Provisioning offers similar capabilities exposing the concepts of Static Addresses and Persistent Volumes that can be reserved and assigned to any running VMs.
A SmartCloud Provisioning address is a statically defined address which can be dynamically bound to any instance in the cloud. In other words, a static IP address is associated with your account, not with a particular instance, and you control that address until you choose to explicitly release it.
Let’s examine in more detail how it works.
When SmartCloud Provisioning creates a VM, it assigns a dynamic IP address to it, on a default management sub-network. From this point on, the system always refers to the VM using the dynamic address assigned at boot time. Nonetheless, SmartCloud Provisioning offers to cloud users the possibility of assigning a different IP address, which can be seen as a reserved and static IP.
In order to achieve this result, a centralized pool of addresses is registered by the cloud administrator and stored in a durable data service. A cloud user can then reserve one or more addresses from this pool and associate one of them with a specific VM he owns. Note that the cloud user has no control over which address is reserved for him; he does not even know upfront whether any static IP addresses are left until he sends the reservation request.
Once a static IP has been reserved and assigned to a VM, SmartCloud Provisioning internally creates a mapping between the default dynamic address associated to the selected VM and the reserved IP address. This translates into NAT rules on the host OS's iptables to forward all traffic to the private address of that VM.
In this way you can always refer to your VM using the static address, and even if you decide to re-create the VM, you can reassign that same address to the new VM.
The address remains in your reserved list as long as you need it, and you can release it when you no longer need it.
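The NAT mapping mentioned above can be pictured as a pair of iptables rules on the host: inbound traffic addressed to the static IP is DNATed to the VM's private address, and the VM's outbound traffic is SNATed back. The sketch below merely composes those rule strings for illustration; the exact rules SmartCloud Provisioning installs may differ, and the addresses are made-up examples:

```python
# Illustrative sketch: iptables NAT rules mapping a reserved static IP to a
# VM's dynamic private address. The real rules installed by the product may
# differ; this only shows the DNAT/SNAT idea. Addresses are examples.

def nat_rules(static_ip: str, private_ip: str) -> list:
    return [
        # Forward traffic arriving at the static IP to the VM.
        f"iptables -t nat -A PREROUTING -d {static_ip} "
        f"-j DNAT --to-destination {private_ip}",
        # Rewrite the VM's outbound traffic so it appears to come
        # from the static IP.
        f"iptables -t nat -A POSTROUTING -s {private_ip} "
        f"-j SNAT --to-source {static_ip}",
    ]

for rule in nat_rules("9.1.2.3", "10.0.0.42"):
    print(rule)
```

Re-pointing the static IP at a new VM is then just a matter of replacing the private address in this rule pair, which is why the reserved address can outlive any individual instance.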
Persistent storage is critical to any non-trivial production application. Just as Amazon's EBS has proven to be extremely valuable, SmartCloud Provisioning persistent volumes are equally powerful, offering an off-instance storage that persists independently from the life of an instance. Users can create arbitrary numbers of arbitrarily sized persistent volumes. The volumes can be dynamically attached to any VM on the cloud as long as only one instance is attached at any time.
Once attached, a persistent volume appears to the guest OS like any other raw, unformatted block device.
Each persistent volume is assigned a UUID, which can be leveraged by the cloud user to track them.
RAID sets can be easily created by combining multiple volumes, ensuring each volume is hosted on a separate physical host/device.
Multiple block devices will then be exposed to the guest OS which can establish their own raided meta-devices using tools like mdadm.
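Building such a RAID meta-device inside the guest is typically a single mdadm invocation followed by the usual format-and-mount steps. A hedged sketch that only assembles the command line; the device names are made-up examples, and running the command requires root inside the guest:

```python
# Illustrative sketch: build the mdadm command that combines several
# attached persistent volumes into a RAID meta-device inside the guest.
# Device names are made-up examples; execution requires root.

def mdadm_create_cmd(md_device: str, level: int, devices: list) -> list:
    return (["mdadm", "--create", md_device,
             "--level", str(level),
             "--raid-devices", str(len(devices))]
            + devices)

cmd = mdadm_create_cmd("/dev/md0", 1, ["/dev/vdb", "/dev/vdc"])
print(" ".join(cmd))
# -> mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/vdb /dev/vdc
# After creation, format and mount /dev/md0 like any other block device.
```

Because each underlying volume lives on a separate physical host, a RAID-1 mirror built this way survives the loss of a single storage node.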
Behind the scenes, these block devices are very similar to the primary boot disk of a non-persistent VM. However, they are read-write iSCSI devices, directly attached to the instance without leveraging copy-on-write. Note that persistent block storage is also hosted on the same storage cluster used for master images.
Similarly to the static IP addresses, the persistent volumes are associated with your account, not with a particular instance, and you control them until you choose to explicitly delete them.
The persistent volumes allow you to keep your data separate from the OS, offering you the possibility to move them from a VM to another whenever you need. Moreover, they offer a valid mechanism to keep your data safe when dealing with VMs that do not have a dedicated persistent storage (the non-persistent VMs managed by SmartCloud Provisioning).
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
Rethink IT. Reinvent Business
Join us for the 2012 IBM SmartCloud Symposium on 16-19 April 2012 in San Francisco, California. This Symposium will help you Rethink IT and Reinvent Business.
The event will introduce Cloud Computing's disruptive potential to not only reduce cost and complexity but reinvent the way we do business. Over the course of four days, there will be sessions that define cloud computing and discuss transformative benefits and challenges to consider, while sharing specific, proven patterns of success. We will provide proven methods to get started on the Cloud journey, from the up-front investments to capacity planning. This event will cover the technology behind private and public clouds, whether you choose to build your own, leverage prepackaged solutions or have it delivered as a service. We will explore challenges and solutions for securing, virtualizing and ensuring the performance of mission critical applications, as well as automating service delivery processes for cloud environments. We will help you: design, deploy and consume.
Use promotion code A2N for 10% off enrollment!
I really liked the post “Rapid deployments with IBM
SmartCloud Provisioning” that explains
how simple and fast it is to deploy instances using SmartCloud Provisioning.
But it is also important to have a flexible way of passing parameters during the deployment in order to configure and/or customize the deployed instances.
IBM SmartCloud Provisioning provides, in the launch instance panel and also through the CLI, the "user_data" text field that can be used for this purpose.
This field is inspired by the Amazon EC2 instance metadata; you can find an interesting article on it here: http://alestic.com/2009/06/ec2-user-data-scripts
The user_data field is a free text field, so for example it can contain:
- comma-separated values for simple configurations
- multi-part MIME format for complex configurations, where each part, identified by a proper content-type, is related to a specific customization.
The launched instance can easily retrieve the user data field by invoking the predefined URL http://169.254.169.254/latest/user-data and processing it according to its needs.
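A minimal sketch of how an instance could interpret the two formats above, using Python's standard email module for the MIME case (the content-type labels and the detection heuristic are my own assumptions, not part of the product):

```python
# Sketch: interpret the free-text user_data fetched from
# http://169.254.169.254/latest/user-data. Handles the two formats
# discussed above: multi-part MIME and simple comma-separated values.
from email import message_from_string

def parse_user_data(text):
    """Return a list of (content_type, payload) pairs."""
    if text.lstrip().startswith("Content-Type:") or "MIME-Version" in text:
        msg = message_from_string(text)
        if msg.is_multipart():
            # one entry per MIME part, keyed by its declared content type
            return [(p.get_content_type(), p.get_payload())
                    for p in msg.get_payload()]
    # fall back to comma-separated values, one entry per value
    return [("text/csv", v.strip()) for v in text.split(",")]
```

Each part can then be dispatched to the customization logic matching its content type.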
This can be achieved by exploiting the current integration between IBM SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in IBM SmartCloud Provisioning version 1.2: create a new bundle, the User-Data consumer bundle, containing a script that retrieves the "user-data" and processes it based on its content. An interesting scenario is the capability of passing directly one or more scripts to be invoked at deployment time to have a really dynamic configuration. In this way, a new image can be configured/customized at deployment time.
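What such a consumer script could do when the user_data payload is itself a script can be sketched as follows; detecting a shebang, persisting the payload, and executing it is my own illustration, not the bundle's actual implementation:

```python
# Sketch of a hypothetical "User-Data consumer" step: if the user_data
# payload looks like a script (it starts with "#!"), write it to a
# temporary file, mark it executable and run it.
import os
import stat
import subprocess
import tempfile

def run_user_data_script(user_data):
    """Execute user_data if it is a script; return its stdout, else None."""
    if not user_data.startswith("#!"):
        return None
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(user_data)
        path = f.name
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    try:
        return subprocess.run([path], capture_output=True, text=True).stdout
    finally:
        os.unlink(path)
```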
If you want more information on user-data capabilities and examples, take a look at the Ubuntu cloud-init component described here: https://help.ubuntu.com/community/CloudInit
For further information about IBM SmartCloud Provisioning and the Image Construction and Composition Tool see the IBM SmartCloud Provisioning Infocenter.
The open beta program for the upcoming IBM SmartCloud Provisioning release has started:
- Freely download the code and run it unattended on your premises, without the need to sign a non-disclosure agreement
- Discuss what you think about that on a dedicated forum
- Watch demonstrations of IBM SmartCloud Provisioning capabilities at work and tell us whether you like the newest features just by clicking a button.
- Join our community to get early access to and provide feedback on cloud provisioning and orchestration technologies
- Stay tuned to the community to hear the latest news on available code drops and functionalities
- Play with the product on our premises by joining the hosted beta. To access the hosted beta, send an email to email@example.com
In my previous blog I talked about the speed of deployment of virtual machines when using IBM Smart Cloud Provisioning. I showed that virtual machines can be started and configured in a matter of seconds, and I described in some detail how this is achieved in terms of the internal infrastructure of the product.
Today I would like to talk about another aspect of IBM Smart Cloud Provisioning that contributes to making it a low-touch solution: its installation/configuration process.
The IBM Smart Cloud Provisioning solution is comprised of several kinds of nodes, each hosting a specific set of services. Additionally, for dependability and scalability reasons, most of those nodes are deployed redundantly. Some of the most important components needed to run the solution are: compute nodes, storage nodes, web server, web console, REST server, distributed lock service, distributed data store, and PXE server.
For a production installation, with the exception of compute nodes and storage nodes, all those services can be installed on virtual machines and most of them can coexist on the same system. So despite the number of components that comprise the solution, a minimal working configuration requires just four physical nodes. And for demo/testing purposes everything could be installed and configured into just one single node (i.e. by virtualizing storage and compute nodes as well).
To simplify the setup process of those components, and to allow for an easy extension/upgrade of the solution, the whole installation/configuration process is driven by a PXE server.
The idea is that once you have defined the topology of your environment, you configure it in the PXE server so that when the machines are powered on they automatically receive from the PXE server the right OS image containing the services to be hosted on that node (storage nodes, kernel services, compute nodes, …). This approach simplifies the management of the infrastructure and, in some cases, also allows it to be quickly repaired. The only component that has to be directly installed is the PXE server: for this purpose a traditional installer is provided to guide the user through the process.
Let's consider, for example, how you can manage one of the core elements of the solution: the compute node (i.e. the node where the virtual machines are hosted and run).
A typical administrative task in a cloud environment is to add more compute power in terms of hypervisor resources, meaning that new compute nodes have to be added.
With the above infrastructure in place, adding a new compute node to the environment is a straightforward task. Just plug your brand new machine into the environment, configure it to boot from the network, and power it on. When it starts up it will automatically get a pristine hypervisor image from the PXE server, and in a matter of a few tens of minutes you have a new compute node available in your environment.
In addition, since compute nodes are stateless resources, this approach also helps the cloud repair itself. When a compute node gets damaged for any reason, repairing it is just a matter of rebooting the system (with a configured PXE boot) so that it picks up a pristine image again.
A similar approach also applies when you need to add a new storage node (i.e. a node that provides images and volumes) to the environment. In this case an additional step is required: configure the MAC address of the new machine in the PXE server so that it is installed as a storage node, then just power on the system. The additional step is necessary because, in general, a new machine that is not explicitly assigned a role by mapping its MAC address will automatically be installed as a compute node.
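The MAC-to-role assignment described above can be sketched as follows. I am assuming a pxelinux-style PXE server here, which looks up a per-machine configuration file named after the MAC address (prefixed with the ARP hardware type "01-"); the role names and mapping are illustrative:

```python
# Sketch: role assignment by MAC address, in the spirit described above.
# Machines with no explicit mapping fall back to the compute-node role.

ROLE_BY_MAC = {"aa:bb:cc:dd:ee:ff": "storage-node"}  # illustrative mapping

def pxe_config_name(mac):
    """pxelinux-style per-machine config file name for a given MAC."""
    return "01-" + mac.lower().replace(":", "-")

def boot_role(mac):
    """Role the machine is installed with when it network-boots."""
    return ROLE_BY_MAC.get(mac.lower(), "compute-node")
```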
Several variations from the basic setup described above are possible depending on the actual topology of the environment and on the kind of nodes to be installed.
Further information about IBM Smart Cloud Provisioning can be found in the IBM Smart Cloud Provisioning Wiki here:
If you're interested in trying IBM Smart Cloud Provisioning, you can download a demo version from here:
A usual adoption pattern for cloud computing is desktops. It is really straightforward because, in general, each company has standardized desktops: only some specific versions of the operating system are supported, only specific flavours, only some applications are allowed, and typically everything is managed by the IT team.
If we think about the benefits of adopting desktop cloud, some of them immediately stand out: the IT team can really enforce standardization (e.g. you can select as desktop only one of the proposed flavours); the maintenance of the hardware becomes far easier given its consolidation; and old, out-dated PCs can be used just as connectors to the desktop, hence gaining new life. From the desktop user's point of view, there is no longer any need to carry a company asset to work: healthier (no more heavy hardware to take home or while travelling) and safer (data is in the cloud).
But this is nothing new; desktop cloud solutions are already on the market, so let's see if IBM SmartCloud Provisioning can bring additional benefits to the desktop world.
What if we start dealing with non-persistent desktop images? Non-persistent images are the ones that disappear once you shut them down. You might be asking yourself: "well, that's not so clever, what about my data? Is it lost?". This is actually a very good point, and it is the keystone of the benefits coming with the adoption of non-persistent images.
The idea is that
all user data get stored into external (persistent) volumes that can
be attached/detached on demand to the non-persistent image.
If we now apply this technology to the desktop world, it sheds an interesting new light on some typical and painful scenarios:
- system or software patching
- ensuring the compliance of the desktops
- changes in the number of desktop users
In a traditional infrastructure, when the operating system goes (or is getting close to going) out of maintenance, a massive migration campaign starts: all desktops need to be migrated. Statistically, the migration does not go smoothly for all users, and hence some of them will be stuck even for days. If you use non-persistent images, you can easily overcome this by either creating a new master image with the new operating system or upgrading a single instance of the image; do your test campaign to make sure everything keeps working, then deploy it in as many instances as there are desktops to upgrade, attach to the new images the volumes with the user data, and get rid of the old images. If you leverage the incredible deployment speed of IBM SmartCloud Provisioning, you'll have a brand new set of desktops in a matter of minutes.
Analogously, we may think about patching the operating system or a software product running on the desktop: the key idea behind this is that you are always going to patch either the operating system or a specific software component, never the user data, which keep living in separate volumes.
If we think about the compliance aspect, remember that users cannot save any changes they make on the boot disk of the image, since nothing ever gets stored on that disk. They are only empowered to write their own data on the additional volumes. This should discourage them from even trying to install new software or edit the operating system configuration, since everything will be lost at the first shutdown.
I know that in your company you may have different configuration flavours of the same operating system according to the department for which the desktop is tailored. For example, you may need different firewall configurations according to the security level the end user is entitled to. Well, with IBM SmartCloud Provisioning you can leverage the User Data field at deployment time to specify these special configurations. Of course, this may not even be shown to the end user; you may mask it by enlarging the list of offerings with the specific configurations. Under the covers, the instance is launched with the proper parameters: no master image duplication, no manual configuration; everything is automated and standardized.
What about optimizing resources? Desktops by their nature all have the same operating system and configuration (at least per department), and usually they also come with the same applications installed on top. If you deal with non-persistent images, you avoid saving lots of duplicated, useless copies of the same operating systems and software on disk. Then, if you consider that once a desktop is shut down its resources are released (i.e. cores and memory), you can better optimize your hardware by using those resources for other applications/users (they may even be server applications, or desktops for users residing in a different timezone).
New people coming on board? A project outsourced to an external work-force? You may want to have these people productive more than immediately. With IBM SmartCloud Provisioning, their desktops will be up and running in a matter of minutes.
Further information about IBM SmartCloud Provisioning can be found in the IBM SmartCloud Provisioning WIKI and in the IBM SmartCloud Provisioning infocenter.
See IBM SmartCloud Provisioning working in a recorded demo.
The fix is downloadable from Fix Central and is identified as 1.2.0-TIV-ISCP-IF0001.
It addresses the following problems:
The Activation Engine inside images built with Image Construction and Composition Tool hangs on configNTP.
The size of images checked out from Virtual Image Library in IBM SmartCloud Provisioning is zero.
Blank instance row in the webconsole
Instances running on the hypervisor are not shown in the output of iaas-describe-instances
Capturing the instance hosting Virtual Image Library fails
Unable to capture images bigger than 40GB
After the iFix installation, the IHC component will be upgraded from version 0.20.2 to 1.0.0.
For further details, read the readme file associated with the interim fix.
We're pleased to make available as a beta Service Health for IBM SmartCloud Provisioning. As this is a beta we welcome any and all feedback.
Service Health (Beta) for IBM® SmartCloud Provisioning provides prebuilt integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring. This solution allows you to easily monitor your IBM SmartCloud Provisioning infrastructure to identify and react to issues in your environment.
This solution is available via the IBM Integrated Service Management Library (ISML). You can find it here -> Service Health for IBM SmartCloud Provisioning
. Please use the "Comment or Review" link on that page to post feedback. You may also use the "Contact Provider" link as well.
There is a brand new demo for IBM SmartCloud Provisioning 1.2.
It is launchpad based hence allowing you to dive into various capabilities individually with a short and quick overview.
It covers the main IBM SmartCloud Provisioning capabilities:
- fast virtual image provisioning
- easily extending your cloud infrastructure
- fault tolerance
- low touch
- image management:
- controlling image sprawl
- drift analysis
- image search
SmartCloud Provisioning is designed to minimize the use of a centralized “command and control” approach, in favor of scale out management, where endpoints can participate in management activities and do not depend on a single configuration management database.
This allows SmartCloud Provisioning to handle multiple provisioning tasks in parallel, across an unlimited number of servers.
Cloud users can request deployments of virtual machines and have access to the provisioned systems in just a few seconds, thanks to the parallel and distributed processing that happens transparently under the covers.
Let’s drill down into the details about this distributed management approach.
SmartCloud Provisioning internally uses a peer to peer (P2P) messaging infrastructure to pass provisioning and management messages between agents, which contribute to the decentralized control.
Agents are installed on the compute nodes (i.e. the hypervisors) as well as on the storage nodes, where images and volumes reside.
The P2P connections between agents not only allow self-monitoring of their health in order to implement a low-touch management infrastructure, but also allow orchestrating the communications to achieve an effective load distribution and decentralized management of the requests performed by cloud users.
The P2P communication overlay is backed by a distributed lock service, which is based on ZooKeeper.
ZooKeeper is a distributed, open-source coordination service for distributed applications, which exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming. It is designed to be easy to program, and uses a data model styled after the familiar directory tree structure of file systems.
Like the distributed processes it coordinates, ZooKeeper itself is intended to be replicated over a set of servers that must all know about each other. They maintain an in-memory image of the state, along with transaction logs and snapshots in a persistent store.
SmartCloud Provisioning agents connect to a single ZooKeeper server. Each agent maintains a TCP connection with the Zookeeper server, through which it sends requests, gets responses, gets watch events, and sends heart beats. If the TCP connection to the server breaks, the agent will connect to a different server.
When a deployment request is received by SmartCloud Provisioning, the request is processed by the Web Services layer, passed to the management infrastructure, and managed by the agents and the ZooKeeper services.
The following steps describe the internal communications in more detail, as depicted in figure 1 below.
This processing happens in a way that is transparent to the end user, who just sees the deployment request being served in a few seconds.
- The Web Services layer takes charge of a deployment request (e.g. deploy 50 “Large” instances of image “LOB123-RHEL 6.0”) and triggers a first interaction with the ZooKeeper server to ask which agent in the compute nodes layer can take this request into account.
- The ZooKeeper server selects one of the available leaders in the compute nodes layer and returns this information back to the web service layer. The role of the selected leader will be to initiate an internal hand-shaking among the compute nodes agents to process the incoming request.
- The Web Service layer receives the information about which agent to contact, and opens a connection to that agent, passing the deployment request details.
- The selected agent takes care of the request and starts a “discussion” phase with all the other leaders (one for each rack) in order to distribute the load of the incoming request among all the agents that could provide resources to fulfill it. This happens leveraging the P2P connection between agents.
- Inside each rack, the leader triggers a parallel P2P interaction with all the agents on all the compute nodes included in that rack, to understand which agent can serve a portion of the incoming request. Each agent having enough free resources to serve “Large” instances answers the request coming from its leader, so that, at the end of the hand-shaking process each leader knows which portion of the incoming request can be processed by which agent.
- At this point, each involved agent knows which part of the incoming request it is supposed to process. To start the real deployment step, the agent asks the ZooKeeper server where to find the “LOB123-RHEL 6.0” image to be deployed, according to the incoming request. The ZooKeeper again answers the incoming requests by providing one of the available agent leaders on the storage nodes layer.
- When an agent receives back the information about which storage node to connect to, it opens a P2P connection with the related agent and asks for the image it needs to fulfill the deployment request.
- The storage node agent leader starts in turn a P2P communication with the other leaders asking for the selected image. Each leader inside its managed rack triggers other P2P connections to ask each managed agent if it has a copy of the requested image.
- The storage leader initiating the request collects back all the details about agents having a copy of the requested image and selects at least two of them (default redundancy required by SmartCloud Provisioning), returning the information to the calling compute-node agent. The compute-node agent at this point can access the image and starts the deployment of VMs, according to its capacity and to the amount of work it offered to serve.
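The "discussion" phase among leaders and agents can be modeled as a toy load-distribution function: each agent offers the number of "Large" instances it can host, and the request is spread across offers until it is fulfilled. The agent names and capacities below are invented for illustration:

```python
# Toy model of the hand-shaking above: spread a request for N instances
# across agents according to the free capacity each one offered.

def distribute(request, free_slots):
    """free_slots: {agent: free slots}. Return {agent: instances assigned}."""
    plan = {}
    remaining = request
    # favor the agents that offered the most capacity first
    for agent, slots in sorted(free_slots.items(), key=lambda kv: -kv[1]):
        if remaining == 0:
            break
        take = min(slots, remaining)
        if take:
            plan[agent] = take
            remaining -= take
    if remaining:
        raise RuntimeError("not enough capacity for the request")
    return plan

# e.g. the 50-instance request from the example above
plan = distribute(50, {"rack1-a": 20, "rack1-b": 10, "rack2-a": 30})
```

In the real product this negotiation happens over the P2P overlay and in parallel per rack; the sketch only captures the outcome each leader computes.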
As I said, this processing happens under the covers in a very fast way and the user does not have to worry about any of the steps above.
This allows reaching high levels of parallelism and decentralized management, as well as scale-out capabilities obtained simply by increasing the number of servers.
If you're interested in trying the SmartCloud Provisioning distributed management capabilities, you can download a trial version from the following link:
SmartCloud Provisioning is an infrastructure-as-a-service cloud able to work with different types of hypervisors. You can easily install and configure new compute nodes to run your virtual images on KVM, VMWare and Xen.
This is a very interesting statement, and it seems to be very useful. The first time I read it, I thought: “Do I need to have 3 different images? Can I have the same image running on any hypervisor?” The answer is yes to both questions. Depending on how you want to run your image, you might need different images for different hypervisors, or just use a single image regardless of the underlying hypervisor.
Before going deeper into how IBM SmartCloud Provisioning deploys virtual images, I would like to discuss the different hypervisors. Each of them has its own peculiarities, allowing you to leverage different features, implemented in different ways. This leads us to deal with different hypervisor limitations. The following are the most common differences:
- VMWare and Xen are able to manage SCSI devices, but KVM is not
- KVM and Xen can use virtio drivers, but VMWare cannot
- VMWare uses a proprietary agent inside the guest OS (VMWare Tools) which does not work with Xen or KVM
- VMWare uses the vmdk file format, which is a proprietary format
Any of these differences can prevent an image from working on any hypervisor. It is clear that if you do not pay attention to how you create your base images, you might need different images for the different hypervisors. So the next step is understanding how we should create a “magic image” able to run everywhere.
The first point is to figure out the list of similarities between the different hypervisors:
- Image format: any hypervisor type supports the raw format.
- Device type: any hypervisor type supports IDE devices.
- OS configuration: the hypervisors do not require specific configurations, but the hypervisor manager could.
Working with IBM SmartCloud Provisioning, you will not have any issue from any of the previous points. In fact, before creating a base image you should just follow a few rules to ensure portability.
IBM SmartCloud Provisioning requires a specific OS configuration regardless of the underlying hypervisor. You can find all the needed information on how to build your image at the info center site:
It is important to use the raw format for the initial image. Here we have an interesting problem: how to create a VMWare image in raw format. The answer is very simple: we are creating a fully portable image, so you can use KVM to build such a master image and then run it anywhere.
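For the opposite direction, an existing vmdk disk can be converted to the raw format with the qemu-img tool that ships with KVM/QEMU. A small sketch building the command line (the file names are illustrative):

```python
# Sketch: the qemu-img invocation converting an existing VMWare vmdk disk
# into the portable raw format discussed above.

def qemu_img_convert_cmd(src_vmdk, dst_raw):
    """Return the qemu-img argument list for a vmdk -> raw conversion."""
    return ["qemu-img", "convert", "-f", "vmdk", "-O", "raw", src_vmdk, dst_raw]

cmd = qemu_img_convert_cmd("master.vmdk", "master.raw")
# e.g. subprocess.run(cmd, check=True)
```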
At this point we have our raw image, fulfilling all the requirements from the hypervisor manager. What is the next step? You need to register it into IBM SmartCloud Provisioning. To do that you can use either the administrative UI or the CLI. Regardless of the user interface you are using, just remember to use the following settings during registration:
- do not enable virtio
We finally have a fully portable image. IBM SmartCloud Provisioning will decide by itself which is the most appropriate compute node to run your “magic image”.
Even though the described process is very easy, there could be some cases where you cannot follow it, namely when you already have your images in a proprietary format and you need to use them. In this case Virtual Image Library can help you. It is a very useful IBM SmartCloud Provisioning component able to manage images by federating different hypervisors. It has the capability to check images into its own repository so that you can then check them out to a different federated virtualization environment. And during this process it will convert the image format for you.
Using it, you will be able, for example, to check in a VMWare image and then check the same image out to IBM SmartCloud Provisioning, resulting in a raw format image. The next interesting question is whether it will run or not. The answer strongly depends on the compute node type and the image configuration. Given what we previously discussed, you should take care of the following:
- OS configuration: as I said, IBM SmartCloud Provisioning requires images to have some OS configuration. To have a working final image, you must ensure that the initial VMWare image has all the required configuration before starting to import it into Virtual Image Library. Otherwise it will not be able to start (for example, if the image does not have DHCP configured, it will never get a valid IP).
- Device type: if you only have KVM compute nodes within your IBM SmartCloud Provisioning environment, an image using a SCSI device will not be able to run at all. To have it running you must have at least one VMWare compute node. If the initial image is using an IDE device, then you will not have any trouble.
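The device-type rules discussed in this post can be captured in a small compatibility table; the dictionary below only encodes the constraints stated above (IDE works everywhere, SCSI is not handled by KVM, virtio is not handled by VMWare):

```python
# Sketch: disk-bus compatibility rules from this post, as a lookup table.
SUPPORTED = {
    "kvm":    {"ide", "virtio"},
    "xen":    {"ide", "scsi", "virtio"},
    "vmware": {"ide", "scsi"},
}

def can_run(disk_bus, hypervisor):
    """True if an image using disk_bus can run on the given hypervisor."""
    return disk_bus in SUPPORTED.get(hypervisor, set())
```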
In addition to image format conversion, Virtual Image Library is also able to modify Windows device drivers. In the process of moving an image from VMWare to Virtual Image Library and then to IBM SmartCloud Provisioning, the application changes the Windows configuration, allowing it to run on any hypervisor.
Further information about the previous topics can be found in the IBM info center.
Cloud systems have made huge improvements in terms of tracking and performance. In the “Rapid deployments with IBM Smart Cloud Provisioning” blog, we showed that virtual machines or appliances can be started and configured in a matter of seconds. It has never been so easy to create a virtual machine (VM), install software, and configure middleware. However, with great power comes great responsibility… it is now possible to create a VM, but what is its lifecycle? Will it be destroyed after being used, is the starting image deprecated, or is there a better starting image given the needed configuration and software install requirements?
IBM SmartCloud Provisioning provides
a component called IBM Virtual Image Library (also known as IVIL) to solve
common issues that arise in large scale virtualized environments:
- Image tracking: Where are my images? How old are they? How are they related?
- Content and security: What is in the images? Are they secure? What is the installed software?
- Image similarity: Are there image redundancies? Is there any difference between two images?
- And the list goes on…
IVIL can be integrated into your virtualization infrastructure simply; the only requirement to start using IVIL is the credentials required to contact the virtualization infrastructure. No changes to your current virtualization environment are required. After credentials are provided, IVIL can automatically determine the provenance, state, and content of each virtual image or virtual machine in the virtualization environment. After the environment is registered, you will have a clear picture of your various images, their content, history, and similarity with one another. More importantly, as soon as IVIL is used in the infrastructure, it can be used to move images from one hypervisor vendor to another and keep track of these migrations. To summarize, IVIL not only keeps track of the changes of an image on one hypervisor but continues to do so when images are in a heterogeneous environment.
A common solution to track the contents and versioning of images is the use of a naming convention; for example, a name such as RHEL_6.1_WebSphere7.1_v2.1 implies the image is Red Hat Linux 6.1 with WebSphere 7.1 installed, and that this is version 2.1 of this image. It is feasible to use this approach with a small number of images, but it becomes cumbersome and confusing with anything but small examples. Basic information that is typically attempted to be conveyed includes:
- What is the OS and OS version?
- What applications are installed, and what are their versions?
- Are the latest patches and updates installed?
- How does this image relate to other versions of the same or similar images?
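The naming convention above can be parsed mechanically, which shows both what it conveys and how little it scales (a second application, or a patch level, simply does not fit the pattern). The regular expression is my own illustration of that single-application pattern:

```python
# Sketch: extract the information encoded in names like
# RHEL_6.1_WebSphere7.1_v2.1 -- and nothing more than that.
import re

NAME_RE = re.compile(
    r"^(?P<os>[A-Za-z]+)_(?P<os_ver>[\d.]+)"
    r"_(?P<app>[A-Za-z]+)(?P<app_ver>[\d.]+)"
    r"_v(?P<img_ver>[\d.]+)$"
)

def parse_image_name(name):
    """Return the fields encoded in the name, or None if it doesn't fit."""
    m = NAME_RE.match(name)
    return m.groupdict() if m else None
```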
Using an image naming convention can work in some cases and provide some of the needed information, but it does not scale beyond a small number of simple images. To solve this, IVIL provides versioning and provenance control to understand where an image comes from:
What is provenance? Simply put, provenance tracks the history of the image as it has evolved over time in the virtual environment. It tracks how the bits that make up the image came to be – through IVIL checkout operations, image clone operations, image copy operations, and so on. It is used to understand the lineage of an image from the perspective of the virtual system, which might or might not match how the user of IVIL views the image.
For example, let’s assume that you have an image called “A”. If you decide to start this image on multiple instances of IBM SmartCloud Provisioning, or if you decide to clone this image, possibly multiple times, then IVIL will keep track of the relation between all the created images and instances. At any time, if a security flaw is found in A, then you can infer that the associated images and instances are likely affected as well. IVIL provides this functionality not only for a single virtual environment, but across heterogeneous virtual environments.
What is versioning? Versioning is the logical, user-defined lineage of an image or virtual appliance; it is the way a user would think of versioning his or her image functionality, for example “this is version 2 of my AccountsPayableService virtual image”.
When an image is available with a particular application version, the OS and libraries behind it are often not important; only the application is. Is it important to know its template? Not necessarily; only the information about the OS is relevant. However, it is good to know the application version, and whether a newer version is available for this image or a new image has been released with the latest security patches. This is the versioning system in IVIL; it helps to understand if there are other versions of the application in the infrastructure, and whether some applications contain a patch or not.
To summarize, provenance is
oriented to infrastructure administration whereas versioning is more oriented
towards applications and workloads.
For example, let’s assume that we want to provide version 1.0 of software S as an image, A. By default, users can decide to use software S by triggering instances of image A. At a certain point, version 1.0 is deprecated and we must upgrade software S to version 1.1. Unfortunately, the OS distribution must be upgraded too. A solution is to reinstall the OS from scratch and install S version 1.1 on it; this new image will be called B. These images do not have any common lineage from a provenance perspective; however, the content has a logical lineage to the user: image A is the parent of image B from a versioning perspective.
It is important to understand that an image can have only one provenance parent but can have multiple version parents. The second claim makes sense because an image may have multiple applications installed, and thus each one may be associated with a logical version parent.
This concludes the introduction of the Virtual Image Library component in IBM SmartCloud Provisioning. Next time, I will introduce the concept of similarity between images and the power that it provides in terms of debugging, infrastructure consolidation, licensing costs, and more.
I really liked the post Rapid deployments with IBM Smart Cloud Provisioning,
which explains how simple and fast it is to deploy instances using SmartCloud Provisioning.
But once the instances are deployed, the next questions are:
- How can I "easily" manage them from a patch management point of view?
- How can I "ensure" that they satisfy my corporate and security standards?
The solution is to integrate SmartCloud Provisioning with Tivoli Endpoint Manager (TEM) so that all the running instances are connected to the TEM Server and managed according to the configured security and corporate standards.
This can be achieved by exploiting the existing integration between SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in SmartCloud Provisioning version 1.2, performing the following steps:
- Using ICCT, create a new bundle, the "TEM Agent bundle", that contains:
  - the TEM Agent installation package
  - the TEM masthead file: the digitally signed file that contains the information about where the TEM server is located
  - a script that installs the TEM Agent and copies the TEM masthead file into the proper directory (for example, /etc/opt/BESClient on Linux)
- Extend an OS base image available in SmartCloud Provisioning by adding the "TEM Agent bundle".
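The bundle's install script could look roughly like the following dry-run sketch. The package name, the masthead filename (actionsite.afxm) and the ROOT override are illustrative assumptions; a real bundle ships the actual agent package and the signed masthead taken from your TEM server, and installs into "/".

```shell
#!/bin/sh
# Dry-run sketch of the "TEM Agent bundle" install script.
set -e
ROOT="${ROOT:-./image-root}"   # set to "" on a real system to write under /

# Stand-in for the signed masthead file shipped inside the bundle
printf 'TEM masthead placeholder\n' > actionsite.afxm

# 1. Install the TEM Agent package (commented out in this dry run)
# rpm -ivh BESAgent-*.rpm

# 2. Copy the masthead where the Linux agent looks for it
mkdir -p "$ROOT/etc/opt/BESClient"
cp actionsite.afxm "$ROOT/etc/opt/BESClient/actionsite.afxm"

# 3. Start the agent so it registers with the TEM server (commented out here)
# /etc/init.d/besclient start
```

When ICCT runs this script during image extension, the resulting image already carries the agent and the masthead, which is what lets deployed instances register with the TEM Server unattended.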
In this way, a new image will be available in SmartCloud Provisioning with the TEM agent installed and configured to connect to the TEM Server.
After that, when the extended image is launched, the TEM agent will automatically start and connect to the TEM Server without requiring any user action.
Then, from the TEM console, you will be able to see and manage the instance, performing actions and/or downloading fixlets.
This is just the basic integration; more advanced scenarios can be implemented, for example exploiting the OVF parameters (as described in the topic Customizing virtual images with IBM SmartCloud Provisioning) to configure and group the TEM Agents, but those will be described in my next blogs!
For further information on IBM SmartCloud Provisioning and the Image Construction and Composition Tool, see the IBM SmartCloud Provisioning Infocenter.
As customers consolidate and virtualize application workloads along their journey toward Cloud, the cost savings that they had envisioned often prove elusive. True efficiency comes from the ability to right-size both the environment and the virtual workloads - in response to actual performance data, rather than theoretical estimates – in order to create an optimized Cloud infrastructure that runs densely enough to provide true consolidation while maintaining application service levels and room for expansion. The migration to a Cloud infrastructure, where the physical resources that we're accustomed to monitoring have been "abstracted" into pools of virtual resources, presents us with a visibility problem. It's more difficult to tweak the knobs and turn the dials to make an individual server respond to our management needs. More importantly, any changes we make at the Cloud infrastructure level have the potential to dramatically affect other workloads and services.
Join us on February 16, 2012 for Simplify Cloud Management with IBM SmartCloud Monitoring, where Ben Stern will demonstrate how our latest infrastructure management offering can help a Cloud or virtualization administrator overcome those visibility hurdles, leveraging infrastructure monitoring, health dashboards, performance and capacity analytics, and policy-driven optimization of workloads and their placement in the Cloud. Most customers want a Cloud monitoring product that can be plugged into their existing data center monitoring toolset, as part of an enterprise-proven, heterogeneous solution, providing continuity of historical data and preservation of skills. You'll hear how SmartCloud Monitoring has descended from the same IBM Tivoli Monitoring DNA running in the data centers of the world's largest corporations, and quickly discover that you already know more about SmartCloud Monitoring than you realized.
Ben Stern has spent over 20 years working in the IT industry in a variety of management and technical roles within the software development organization. Prior to his current role, he was the lead for the Tivoli Service Availability and Performance Management Best Practices team. In that role, he helped define best practices for the Tivoli portfolio while working with hundreds of customers around the world. In his current role, he is focusing on Tivoli's virtualization and cloud solutions.
Link to Register
Select the session that fits your schedule.
February 16th 2012, 11:00 AM to Noon EST US and Canada (GMT -05:00): https://de202.centra.com:443/Reg/main/000000605ae4440134e542dc87007e8e/en_US
February 16th 2012, 6:00 PM to 7:00 PM EST US and Canada (GMT -05:00)
I've been impressed by the speed of
provisioning a set of virtual machines in just a few tens of seconds
using IBM Smart Cloud Provisioning. In most cases you can get a
running virtual machine in less than one minute.
The Smart Cloud Provisioning technology
has been devised and particularly optimized for managing the
following cloud infrastructure scenarios:
- Infrastructure composed of
- High level of standardization, with a relatively small set of master images used to provision many instances from the same image
- Typical life cycle of the provisioned resources, with a short average lifetime of the provisioned instances
Many other workloads can be deployed
and easily automated on top of Smart Cloud Provisioning. For example,
traditional stateful applications can be easily deployed for simple
HA solutions. In any case, you get the maximum performance from Smart
Cloud Provisioning when operating in the context of the above scenarios.
To achieve such high performance, Smart
Cloud Provisioning has been designed with the focus on an
optimized virtualization infrastructure based on OS streaming: there is no
need to copy large image files over the network when provisioning.
Image copying is the single biggest
bottleneck in VM provisioning today, in terms of CPU, memory, I/O,
and bandwidth usage. In traditional cloud provisioning approaches, all
of this is system resource spent on pure overhead
(nobody builds a cloud to provision systems; provisioning is an
overhead that is required to have systems on which business workload
is deployed, and any overhead is in conflict with the business goal).
The key elements of such an infrastructure
are the so-called ephemeral instances: virtual machines
that have no persistent state. Once they are terminated, all the data
associated with them is deleted as well. They are clones of a master
image, and these clones have a primary virtual disk which is
ephemeral: when the instance goes, so does its ephemeral storage
(mechanisms exist in Smart Cloud Provisioning to provide persistence,
if needed by some scenarios).
When creating a new instance, since
master images are read-only resources and are replicated across the
storage cluster, Smart Cloud Provisioning uses the Copy-on-Write
(CoW) technology and the iSCSI protocol to stream them avoiding
expensive copying. Each iSCSI session results in a valid block device
to be created in the host OS. Of course each guest OS (corresponding
to a given instance) requires a writable block device representing
the main disk of the system. All supported hypervisors have a storage
virtualization layer which includes the Copy-on-Write technology. For
example, KVM's qcow2 files can be configured to implement CoW
by referencing a backing storage device. VMware has something called
redo files which effectively do the same thing as well. In each case,
the hypervisor can natively use the CoW file referencing the iSCSI
block device to expose a virtual block device to the virtual machine. Depending on the hypervisor and guest
OS this device will show up as something like /dev/sda or c:\. The CoW files are stored locally on the
hypervisor's file system. When the instance is terminated, the
Smart Cloud Provisioning agent will simply discard the CoW file and
check if any other instances are using the same iSCSI device. If the
device is no longer in use, the agent will also tear down the iSCSI session.
Thanks to the above infrastructure, provisioning a new virtual machine is a very fast
and reliable process that allows individual systems to be created in tens
of seconds, and peak requests of thousands of systems per hour to be sustained.
If you're interested in trying the
Smart Cloud Provisioning product, you can download a trial version
from the following link:
IBM® Tivoli® Service Automation Manager (TSAM) 7.2.2 introduces the concept of extension, a set of TSAM software components that can implement a new IT service automation solution (known as a service definition) or add capabilities to existing service definitions.
This article (Deploy a J2EE app with TSAM extensions) defines a scenario in which the desired result is to securely deploy a three-tiered enterprise application (a J2EE app) to the cloud. It demonstrates how to set up and provision extensions in TSAM as the first step to accomplishing this task. Then it describes how to standardize the three-tiered business application and provision it using standard TSAM offerings.
The second part of the article (Manage a J2EE app with TSAM extensions) focuses on the management aspects of the J2EE app. The authors explain how to add and remove application servers as the workload of the business application changes, and how to modify the security settings and why you might need to do that.