"SmartCloud Provisioning is an infrastructure-as-a-service cloud able to work with different types of hypervisors. You can easily install and configure new compute nodes to run your virtual images on KVM, VMware, and Xen."
That is a very interesting sentence, and it seems very useful. The first time I read it, I thought: "Do I need three different images, or can I have the same image running on any hypervisor?" The answer to both questions is yes: depending on how you plan to run your image, you might need different images for the different hypervisors, or you might use a single image regardless of the underlying hypervisor.
Before going deeper into how IBM SmartCloud Provisioning deploys virtual images, let me discuss the different hypervisors. Each of them has its own peculiarities, letting you leverage different features implemented in different ways. This leads us to deal with different hypervisor limitations. The most common differences are:
- VMware and Xen are able to manage SCSI devices, but KVM is not
- KVM and Xen can use virtio drivers, but VMware cannot
- VMware uses a proprietary agent inside the guest OS (VMware Tools) which does not work with Xen or KVM
- VMware uses the VMDK file format, which is proprietary
Any of these differences can prevent an image from working on a given hypervisor. Clearly, if you do not pay attention to how you create your base images, you might need different images for the different hypervisors. So the next step is understanding how to create a "magic image" able to run everywhere.
The starting point is to figure out the similarities between the different hypervisors:
- Image format: every hypervisor type supports the raw format.
- Device type: every hypervisor type supports IDE devices.
- OS configuration: the hypervisors do not require specific configurations, but the manager could.
So with IBM SmartCloud Provisioning you will not have any issue with any of the previous points. In fact, before creating a base image you just need to follow a few rules to ensure portability.
IBM SmartCloud Provisioning requires a specific OS configuration regardless of the underlying hypervisor. You can find all the information you need on how to build your image at the info center site:
It is important to use the raw format for the initial image. Here we have an interesting problem: how do you create a VMware image in raw format? The answer is very simple: we are creating a fully portable image, so you can use KVM to build the master image and then run it anywhere.
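If instead you start from an existing VMDK, it can be converted to raw with the qemu-img tool that ships with KVM. The sketch below just assembles the conversion command line (the file names are illustrative):

```python
def qemu_img_convert_cmd(src, dst, src_fmt="vmdk", dst_fmt="raw"):
    """Assemble the qemu-img command line that converts an image
    (e.g. a VMware VMDK) into the raw format."""
    return ["qemu-img", "convert", "-f", src_fmt, "-O", dst_fmt, src, dst]

# Example: the command that would turn master.vmdk into a portable raw image.
cmd = qemu_img_convert_cmd("master.vmdk", "master.raw")
```

On a host where qemu-img is installed, `subprocess.run(cmd, check=True)` would perform the actual conversion.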
At this point we have our raw image, fulfilling all the requirements of the hypervisor manager. What is the next step? You need to register it in IBM SmartCloud Provisioning. To do that you can use either the administrative UI or the CLI. Regardless of the user interface you are using, just remember to use the following settings during registration:
- Do not enable virtio
We finally have a fully portable image. IBM SmartCloud Provisioning will decide by itself which is the most appropriate compute node to run your "magic image".
Even though the described process is very easy, there can be cases where you cannot follow it, namely when you already have images in a proprietary format and you need to use them. In this case the Virtual Image Library can help you. It is a very useful IBM SmartCloud Provisioning component able to manage images by federating different hypervisors. It can check images into its own repository so that you can then check them out to a different federated virtualization environment, and during this process it will convert the image format for you.
With it you will be able, for example, to check in a VMware image and then check the same image out to IBM SmartCloud Provisioning, resulting in a raw-format image. The next interesting question is whether it will run or not. The answer strongly depends on the compute node type and the image configuration. Based on what we discussed previously, you should pay attention to:
- OS configuration: as I said, IBM SmartCloud Provisioning requires images to have some OS configuration. To end up with a working image, you must ensure that the initial VMware image has all the required configuration before importing it into the Virtual Image Library. Otherwise it will not be able to start (for example, if the image does not have DHCP configured, it will never get a valid IP).
- Device type: if you only have KVM compute nodes within your IBM SmartCloud Provisioning, an image using a SCSI device will not be able to run at all. To run it you must have at least one VMware compute node. If the initial image uses an IDE device, you will not have any trouble.
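The device-type constraint can be summarized in a small compatibility check. This is just an illustrative sketch encoding the hypervisor capabilities described in this post, not an actual product API:

```python
# Disk buses each hypervisor can handle, per the limitations discussed above.
SUPPORTED_BUSES = {
    "kvm": {"ide", "virtio"},
    "xen": {"ide", "scsi", "virtio"},
    "vmware": {"ide", "scsi"},
}

def compatible_nodes(image_bus, available_hypervisors):
    """Return the compute node types able to run an image that uses image_bus."""
    return [h for h in available_hypervisors
            if image_bus in SUPPORTED_BUSES.get(h, set())]
```

An IDE image runs everywhere, which is exactly why the "magic image" rules above mandate IDE.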
In addition to image format conversion, the Virtual Image Library is also able to modify Windows device drivers. In the process of moving an image from VMware to the Virtual Image Library and then to IBM SmartCloud Provisioning, the application changes the Windows configuration, allowing it to run on any hypervisor.
More information about these topics can be found at the IBM info center.
Cloud systems have made huge improvements in terms of tracking and performance. In the "Rapid deployments with IBM Smart Cloud Provisioning" blog, we showed that virtual machines or appliances can be started and configured in a matter of seconds. It has never been so easy to create a virtual machine (VM), install software, and configure middleware. However, with great power comes great responsibility... it is now easy to create a VM, but what is its lifecycle? Will it be destroyed after being used, is the starting image deprecated, or is there a better starting image given the needed configuration and software install requirements?
IBM SmartCloud Provisioning provides
a component called IBM Virtual Image Library (also known as IVIL) to solve
common issues that arise in large scale virtualized environments:
- Image tracking: Where are my images? How old are they? How are they related?
- Compliance and security: What is in the images? Are they secure? What software is installed?
- Similarity: Are there image redundancies? What are the differences between two images?
And the list goes on.
IVIL can be integrated simply into your virtualization infrastructure; the only requirement to start using IVIL is the credentials needed to contact the virtualization infrastructure. No changes to your current virtualization environment are required. After credentials are provided, IVIL can automatically determine the provenance, state, and content of each virtual image or virtual machine in the virtualization environment. After the environment is registered, you will have a clear picture of your various images, their content, history, and similarity with one another. More importantly, as soon as IVIL is used in the infrastructure, it can be used to move images from one hypervisor vendor to another and keep track of these migrations. To summarize, IVIL not only keeps track of the changes of an image on one hypervisor but continues to do so when images are in a heterogeneous environment.
A common solution to track the
contents and versioning of images is by use of a naming convention, for
example, a name such as RHEL_6.1_WebSphere7.1_v2.1 implies the image is Red Hat
Linux 6.1 with WebSphere 7.1 installed, and that this is version 2.1 of this
image. It is feasible to use this approach with a small number of images but
becomes cumbersome and confusing beyond small examples. The basic information typically conveyed includes:
- What is the OS, and which OS version?
- Which applications are installed, and which versions?
- Are the latest patches and updates installed?
- How does this image relate to other versions of the same or similar images?
Using an image naming convention
can work in some cases and provide some of the needed information but it does
not scale beyond a small number of simple images. To solve this, IVIL provides versioning and
provenance control to understand where an image comes from:
What is provenance? Simply put, provenance tracks the history of the image as it has evolved over time in the virtual environment. It tracks how the bits that make up the image came to be: through IVIL checkout operations, image clone operations, image copy operations, and so on. It is used to understand the lineage of an image from the perspective of the virtual system, which might or might not match how the user of IVIL views the image.
For example, let’s assume that you
have an image called “A”. If you decide to start this image on multiple
instances of IBM SmartCloud Provisioning or if you decide to clone this image
possibly multiple times, then IVIL will keep track of the relation between all
the created images and instances. At any time, if a security flaw is found on
A, then you can infer that the associated images and instances are likely affected
also. IVIL provides this functionality not only for a single virtual
environment, but across heterogeneous virtual environments also.
What is versioning? Versioning is the logical user-defined
lineage of an image or virtual appliance; it is the way a user would think of
versioning his or her image functionality, for example this is version 2 of my
AccountsPayableService virtual image.
When an image provides a particular application version, the OS and libraries behind it are often not important; only the application is. Is it important to know the image's template? Not necessarily; only basic information about the OS is relevant. However, it is good to know the application version, and whether there is a newer version available for this image or a new image has been released with the latest security patches. This is the versioning system in IVIL; it helps you understand whether there are other versions of the application in the infrastructure, and whether applications contain a given patch or not.
To summarize, provenance is
oriented to infrastructure administration whereas versioning is more oriented
towards applications and workloads.
For example, let's assume that we want to provide version 1.0 of software S as an image, A. By default, users can decide to use software S and start any number of instances of image A. At a certain point, version 1.0 is deprecated and we must upgrade software S to version 1.1. Unfortunately, the OS distribution must be upgraded too. A solution is to reinstall the OS from scratch and install S version 1.1 on it; this new image will be called B. These images do not have any common lineage from a provenance perspective; however, the content has a logical lineage to the user. Image A is the parent of image B from a versioning perspective.
It is important to understand that an image can have only one provenance parent but can have multiple version parents. The second claim makes sense because an image may have multiple applications installed, and each one may be associated with a logical lineage of its own.
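This distinction can be sketched as a tiny data model (purely illustrative, not IVIL's actual schema): one provenance parent per image, but possibly many version parents.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Image:
    name: str
    provenance_parent: Optional["Image"] = None                   # at most one
    version_parents: List["Image"] = field(default_factory=list)  # possibly many

def provenance_lineage(img):
    """Walk the single provenance chain back to the root image."""
    chain = []
    while img is not None:
        chain.append(img.name)
        img = img.provenance_parent
    return chain

# Image A is cloned (provenance), and later superseded by a rebuilt image B
# that carries only a logical, user-defined version link back to A.
a = Image("A")
a_clone = Image("A-clone", provenance_parent=a)
b = Image("B", version_parents=[a])
```

In the software S example above, A and B share no bits, so `b.provenance_parent` stays empty while the version link records that B supersedes A.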
This concludes the introduction of the Virtual Image Library component in IBM SmartCloud Provisioning. Next time, I will introduce the concept of similarity between images and the power that it provides in terms of debugging, infrastructure consolidation, licensing costs, and more.
I really liked the post Rapid deployments with IBM Smart Cloud Provisioning, which explains how simple and fast it is to deploy instances using SmartCloud Provisioning.
But once the instances are deployed, the next questions are:
- How can I "easily" manage them from a patch management point of view?
- How can I "ensure" that they satisfy my corporate and security standards?
The solution is to integrate SmartCloud Provisioning with Tivoli Endpoint Manager (TEM), so that all the running instances are connected to the TEM Server and managed according to the configured security and corporate standards.
This can be achieved by exploiting the existing integration between SmartCloud Provisioning and the Image Construction and Composition Tool (ICCT), available in SmartCloud Provisioning version 1.2, by performing the following steps:
- Using ICCT, create a new bundle, the "TEM Agent bundle", that contains:
  - the TEM Agent installation package
  - the TEM masthead file: this is the digitally signed file that contains the information about where the TEM server is located
  - a script that installs the TEM Agent and copies the TEM masthead file into the proper directory (for example, /etc/opt/BESClient on Linux)
- Extend an OS base image available in SmartCloud Provisioning by adding the "TEM Agent bundle".
In this way, a new image will be available in SmartCloud Provisioning with the TEM agent installed and configured to connect to the TEM Server.
After that, when the extended image is launched, the TEM agent will automatically start and connect to the TEM Server without requiring any user action.
Then, from the TEM console, you will be able to see and manage it, performing actions and/or deploying fixlets.
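The bundle's install script can be quite small. The sketch below shows just the masthead-copy step in Python; the destination directory matches the Linux example above, while the masthead file name and the parameterized root are assumptions for illustration (a real script would also run the TEM Agent installer):

```python
import shutil
from pathlib import Path

def install_masthead(masthead_src, root="/"):
    """Copy the TEM masthead into the directory the agent reads at startup
    (/etc/opt/BESClient on Linux). `root` is parameterized so the copy
    can be exercised outside a real VM."""
    dest_dir = Path(root) / "etc" / "opt" / "BESClient"
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / "actionsite.afxm"  # assumed masthead file name
    shutil.copy(masthead_src, dest)
    return dest
```

At first boot the agent finds the masthead in place and connects to the TEM Server with no user action, which is the behavior described above.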
This is just the basic integration; more advanced scenarios can be implemented, for example exploiting the OVF parameters (as described in the topic Customizing virtual images with IBM SmartCloud Provisioning) for configuring and grouping the TEM Agents, but those will be described in my next blogs!
For further information on IBM SmartCloud Provisioning and the Image Construction and Composition Tool, see the IBM SmartCloud Provisioning Infocenter.
As customers consolidate and virtualize application workloads along their journey toward Cloud, the cost savings that they had envisioned often prove elusive. True efficiency comes from the ability to right-size both the environment and the virtual workloads - in response to actual performance data, rather than theoretical estimates – in order to create an optimized Cloud infrastructure that runs densely enough to provide true consolidation while maintaining application service levels and room for expansion. The migration to a Cloud infrastructure, where the physical resources that we're accustomed to monitoring have been "abstracted" into pools of virtual resources, presents us with a visibility problem. It's more difficult to tweak the knobs and turn the dials to make an individual server respond to our management needs. More importantly, any changes we make at the Cloud infrastructure level have the potential to dramatically affect other workloads and services.
Join us on February 16, 2012 for Simplify Cloud Management with IBM SmartCloud Monitoring, where Ben Stern will demonstrate how our latest infrastructure management offering can help a Cloud or virtualization administrator overcome those visibility hurdles, leveraging infrastructure monitoring, health dashboards, performance and capacity analytics, and policy-driven optimization of workloads and their placement in the Cloud. Most customers want a Cloud monitoring product that can be plugged into their existing data center monitoring toolset, as part of an enterprise-proven, heterogeneous solution, providing continuity of historical data and preservation of skills. You'll hear how SmartCloud Monitoring has descended from the same IBM Tivoli Monitoring DNA running in the data centers of the world's largest corporations, and quickly discover that you already know more about SmartCloud Monitoring than you realized.
Ben Stern has spent over 20 years working in the IT industry in a variety of management and technical roles within the software development organization. Prior to his current role, he was the lead for the Tivoli Service Availability and Performance Management Best Practices team. In that role, he helped define best practices for the Tivoli portfolio while working with hundreds of customers around the world. In his current role, he is focusing on Tivoli's virtualization and cloud solutions.
Link to Register
Select the session that fits your schedule.
February 16th 2012, 11:00 AM to Noon EST US and Canada (GMT -05:00): https://de202.centra.com:443/Reg/main/000000605ae4440134e542dc87007e8e/en_US
February 16th 2012, 6:00 PM to 7:00 PM EST US and Canada (GMT -05:00)
I've been impressed by the speed of
provisioning a set of virtual machines in just a few tens of seconds
using IBM Smart Cloud Provisioning. In most cases you can get a
running virtual machine in less than one minute.
The Smart Cloud Provisioning technology has been devised and particularly optimized for managing the following cloud infrastructure scenarios:
- Infrastructure composed of
- A high level of standardization, with a relatively small set of master images used to provision many instances from the same image
- A typical life cycle of the provisioned resources with a short average lifetime of the provisioned instances
Many other workloads can be deployed and easily automated on top of Smart Cloud Provisioning. For example, traditional stateful applications can be easily deployed for simple HA solutions. In any case, you get the maximum performance from Smart Cloud Provisioning when operating in the context of the above scenarios.
To achieve such high performance, Smart Cloud Provisioning has been designed with a focus on an optimized virtualization infrastructure based on OS streaming: there is no need to copy large image files over the network when provisioning.
Image copying is the single biggest bottleneck in VM provisioning today, in terms of CPU, memory, I/O, and bandwidth usage. In traditional cloud provisioning approaches, all of this copying consumes system resources that are pure overhead (nobody builds a cloud to provision systems; provisioning is an overhead required to have systems on which business workload is deployed, and any overhead is in conflict with the business goal).
The key element of such an infrastructure is the so-called ephemeral instance, a virtual machine having no persistent state. Once an ephemeral instance is terminated, all the data associated with it is deleted as well. These instances are clones of a master image, and each clone has a primary virtual disk which is ephemeral: when the instance goes, so does its ephemeral storage (mechanisms exist in Smart Cloud Provisioning to provide persistence, if needed by some scenarios).
When creating a new instance, since master images are read-only resources replicated across the storage cluster, Smart Cloud Provisioning uses Copy-on-Write (CoW) technology and the iSCSI protocol to stream them, avoiding expensive copying. Each iSCSI session results in a valid block device being created in the host OS. Of course, each guest OS (corresponding to a given instance) requires a writable block device representing the main disk of the system. All supported hypervisors have a storage virtualization layer which includes Copy-on-Write technology. For example, KVM's qcow2 files can be configured to implement CoW by referencing a backing storage device. VMware has something called redo files, which effectively do the same thing. In each case, the hypervisor can natively use the CoW file referencing the iSCSI block device to expose a virtual block device to the virtual machine. Depending on the hypervisor and guest OS, this device will show up as something like /dev/sda or c:\. The CoW files are stored locally on the hypervisor's file system. When the instance is terminated, the Smart Cloud Provisioning agent will simply discard the CoW file and check whether any other instances are using the same iSCSI device. If the device is no longer in use, the agent will also tear down the iSCSI session.
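For KVM, the qcow2-over-backing-device arrangement described above maps to a qemu-img create invocation. The sketch below only assembles the command line (the device and file names are illustrative; -b and -F are the standard qemu-img options for the backing file and its format):

```python
def qcow2_overlay_cmd(backing_device, overlay_path, backing_fmt="raw"):
    """Assemble the qemu-img command that creates a local CoW qcow2 overlay
    whose reads fall through to a read-only backing device (e.g. the iSCSI LUN)."""
    return ["qemu-img", "create", "-f", "qcow2",
            "-b", backing_device, "-F", backing_fmt, overlay_path]
```

Writes land only in the overlay, so discarding the overlay file is exactly the cheap cleanup step the agent performs when an instance terminates.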
Thanks to the above infrastructure, provisioning a new virtual machine is a very fast and reliable process that allows creating individual systems in tens of seconds and sustaining peak requests of thousands of systems per hour.
If you're interested in trying the
Smart Cloud Provisioning product, you can download a trial version
from the following link:
IBM® Tivoli® Service Automation Manager (TSAM) 7.2.2 introduces the concept of extension, a set of TSAM software components that can implement a new IT service automation solution (known as a service definition) or add capabilities to existing service definitions.
This article (Deploy a J2EE app with TSAM extensions) defines a scenario in which the desired result is to securely deploy a three-tiered enterprise application (a J2EE app) to the cloud. It demonstrates how to set up and provision extensions in TSAM as the first step to accomplishing this task. Then it describes how to standardize the three-tiered business application and provision it using standard TSAM offerings.
The second part of the article (Manage a J2EE app with TSAM extensions) focuses on the management aspects of the J2EE app. The authors explain how to add and remove application servers as the workload of the business application changes, how to modify the security settings, and why you might need to do that.
With December's release of IBM SmartCloud Monitoring, Tivoli's venerable IBM Tivoli Monitoring product family, proven in data centers at the world's largest corporations, begins to adopt a "Cloud" posture. Sure, "Cloud" is a term bereft of a clear operational definition that we can apply at any given moment, and customers, analysts and vendors tend to bandy it about pretty freely these days. However, if we don't get too hung up on what Cloud is or isn't, we can probably agree that it represents a migration from our traditional server-delivered infrastructure to one composed of pooled computing resources shared by virtual workloads. Whether or not our customers are calling their virtualized environments "private clouds" today, and whether or not they've got a "cloud budget" that they're using for such initiatives, the fact that they're moving along the cloud maturity continuum at some pace seems inescapable, given IDC's assertion that we crossed the magical "50%" boundary last year, when half of all corporate workloads were running on virtual machines instead of physical ones.
If we're beginning to think in terms of clouds of pooled computing resources, it makes sense that we begin to deliver management solutions in the same way, right? If the server administrators, storage administrators and network administrators now report to a cloud administrator, we should begin to package solutions for those cloud administrators, combining multiple pieces of management technology into a single part number that customers can purchase and deploy. That's exactly what we've done with SmartCloud Monitoring. The discrete monitoring agents that are at the heart of IBM Tivoli Monitoring; OS monitors, application monitors, storage, etc., are as important as they ever were. Even though we're pooling those resources across virtual machines, we still have to monitor things like processes, CPU activity, IO throughput, and so on. We just need to add a layer on top of all that granular detail, so the cloud administrator can see, at a glance, what's healthy or unhealthy about his cloud environment, before drilling down into the nuts and bolts.
SmartCloud Monitoring combines the VMware virtualization management features in ITM for Virtual Environments with virtual machine instance monitoring from ITM's operating system agents, to monitor a cloud infrastructure and the workloads running on it.
Our roadmap looks like an analyst's cloud maturity ladder, adding features such as automated provisioning, usage and accounting integration, and more detailed network monitoring, so our solution will "mature" along with the market, and customers' needs. See if the challenges along this ladder look like things that you or your customer have faced on their cloud journey, or are grappling with now. It's important to note that Tivoli has solutions that can be applied to each step, and for each problem. What SmartCloud promises is a way to bring those solutions together into more consumable bundles, tightly integrated together, to make cloud management simple to purchase and simple to deploy.
SmartCloud Monitoring delivers key capabilities for optimizing and maintaining a private cloud, including:
- Health dashboards, to provide an instant, consolidated glimpse into cloud health
- Topology views of the key interrelated components of the cloud
- Reports on the health trends of cloud components and workloads, powered by Cognos
- What-if capacity planning scenarios
- Policy-based optimization to put workloads where they'll perform best, not just where they'll fit
- Performance Analytics for right-sizing of virtual machines
- Integration with industry-leading Tivoli service management portfolio
I'm a big fan of standardization. I'm a big fan of using non-persistent images as well. They just make my life so much easier.
The only issue I see with them is the need to provide the end user with some configuration and customization options anyway.
It could be something trivial like having your own screen saver or a special keyboard and language configuration, or it could be something like connecting the software inside the image to some specific devices, disks, or additional external software.
I do not even want to think about having a master image for each possible situation. This would simply make my image catalog so uselessly big that it would shortly become unmanageable. And I won't even mention all the possible issues I could have when I need to upgrade a master image: I would have to do it for all the customized master images derived from that one.
I would lose all the advantages of dealing with a cloud of non-persistent images. I would only be shifting the issue from the virtual machine instances to the master images themselves.
A possible solution would be to have the user reconfigure his VM every time he starts it: unbearable! ...especially if you think about complex software stacks.
I found the solution included in IBM SmartCloud Provisioning interesting. What it lets you do is allow the end user to specify a set of configuration parameters at image deployment time, so that the image is automatically configured accordingly at boot time.
The idea under the covers is pretty simple: the image builder inserts into the master image a script that runs at system boot.
The script is expected to parse the information passed by the end user at VM deployment time and take the needed actions, like reconfiguring the operating system or a specific piece of software.
All the information entered by the end user in the web user interface is saved on the compute node and then injected back into the deployed instance.
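A minimal sketch of what such a boot-time script could do, assuming the injected information arrives as simple key=value lines (the format is an assumption for illustration; the real layout depends on how the image builder wrote the script):

```python
def parse_deploy_params(text):
    """Parse key=value deployment parameters, skipping blank lines and
    comments, so the boot script can act on them (e.g. set the keyboard
    layout or point the software at an external service)."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:
            params[key.strip()] = value.strip()
    return params
```

The boot script would read the injected file, call a parser like this, and then apply each parameter to the OS or the installed software.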
If you are worried that end users might be reluctant to type in information in a specific format (one possibility is to let them deal with free text, but then you'll go mad parsing it) and that the process could be error prone, consider that if you use the Image Construction and Composition Tool (an optionally installable component of IBM SmartCloud Provisioning), the web UI is automatically modified to show the end user the parameters you want him to enter.
Of course, if you are a lazy end user and you do not want to type in information or remember it (especially if you deploy frequently), you can put your input parameters in a file and use the command line to deploy the image, passing the file as one of the deployment parameters.
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple—you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations who are still trying to leverage the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
- Rapidly scalable deployment designed to meet business growth
- Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
- Reduced complexity through ease of use and improved time to value
- Reduced IT labor resources with self-service requesting and highly automated operations
- Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we’ve seen customers get a cloud up and running in just hours—realizing immediate time to value. It’s fast—administrators have been able to go from bare metal to ready-for-work in under five minutes, or start a single VM and load OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
With the new release of IBM SmartCloud Provisioning this week, you can try and see firsthand the potential of this breakthrough technology to accelerate your journey to cloud. And if you want a preview of what’s in development, you can join our Open Beta program for access to beta-level code.
Starting from December 9th, 2011, IBM SmartCloud Provisioning 1.2 is available for download.
The key features introduced in this release are:
Full product install through an interactive tool:
IBM SmartCloud Provisioning can now be installed using a graphical wizard. There are two flavors of the installer: minimal and custom. The custom installation allows you to specify the number of instances of HBase and ZooKeeper to be deployed. Moreover, it can automatically configure ESXi servers as compute nodes. The creation of the management virtual image on VMware is automated.
Support for multiple networks:
You can now deploy images with more than one NIC, and different users can deploy images in segregated networks.
Integration of the Image Construction and Composition Tool:
The Image Construction and Composition Tool helps you build and customize master images. It is designed to facilitate a separation of concerns and tasks, where experts build software bundles for reuse by others. This design approach greatly reduces the complexity of virtual image creation and reduces errors.
Support for the Open Virtualization Format (OVF):
- OVF images can be created or modified by the Image Construction and Composition Tool
- OVF metadata can be displayed and modified in the Self Service UI
Integration of the Virtual Image Library component:
The Virtual Image Library helps manage the life cycle of virtual images:
- Search images for specific software products
- Compare two images and determine the differences in files and products
- Find similar images
- Track image versions and provenance
The cloud administrator can use a brand new UI to perform tasks like registering images, registering networks, managing quotas, assigning roles, and managing elastic IPs.
The IBM® Image Construction and Composition Tool is a web application that simplifies and automates virtual image creation for public and private cloud environments, shielding the differences in cloud implementations from its users.
This white paper provides Software Specialists and other product experts with helpful tips and techniques to plan, design, and create software bundles in the Image Construction and Composition Tool.
The placement of the DBMS in cloud solutions based on Tivoli Provisioning Manager (TPM), Tivoli Service Automation Manager (TSAM), or IBM Service Delivery Manager (ISDM) plays a significant role in overall product function and performance, and in how these keep up with an evolving workload.
A typical setup approach is to install TPM/TSAM with the DBMS co-located.
This is the default setup option in the TSAM installation and TSAM-VM-image which is included in the ISDM solution.
Over time, based on increasing workload, capacity planning, or production requirements, it may be desirable to move the local database to a remote node, with the goal of achieving greater scale and exploiting additional resources.
A white paper
is available for this purpose in the Integrated Service Management Library.
The referenced paper has recently been updated to version 2.4 and describes how to relocate the DBMS in existing TPM / TSAM / ISDM solutions.
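For illustration only, relocating the database to a remote node typically ends with cataloging the remote DB2 node and database on the TPM/TSAM server. The node name, hostname, port, database name, and user below are hypothetical placeholders; the referenced white paper remains the authoritative procedure:

```shell
# Catalog the remote DB2 node and database on the TPM/TSAM server
# (node name, hostname, port, database name, and user are placeholders).
db2 catalog tcpip node REMNODE remote dbserver.example.com server 50000
db2 catalog database MAXDB71 at node REMNODE
db2 terminate

# Verify connectivity to the relocated database:
db2 connect to MAXDB71 user db2inst1
```

The `db2 terminate` step ends the CLP back-end process so that the updated catalog entries are picked up before the connection test.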
A very interesting Cloud Computing case study of the Capgemini Infrastructure as a Service delivery platform project has been recently published on the Web:
The case study shows how one of the world’s leading infrastructure outsourcing providers saw the business opportunity of offering its clients a cloud-based solution that combines the benefits of a high-value infrastructure service provider with the cost advantages of Cloud computing. Capgemini focused the new cloud-based services on delivering Infrastructure as a Service capabilities to its clients with much higher flexibility and substantial cost efficiency.
In partnership with IBM, Capgemini built a fully integrated cloud delivery platform for clients in the UK and USA leveraging the Tivoli Service Delivery Manager solution, which includes the IBM Tivoli Service Automation Manager, Tivoli Monitoring, and Tivoli Usage Accounting Manager products, running on top of IBM BladeCenter HS22V hardware and XIV Storage System technologies.
The key aspects of the solution built by Capgemini have been:
- Implementation of a resilient and scalable global infrastructure with capability of managing resource pools in different regions and with a modular design for quick scale out
- A single solution that can manage a wide range of platforms and architectures without being tied to any specific hardware technology or vendor, with the ability to choose the right hypervisor and guest OS platforms for the right workload
- Multi-Customer shared infrastructure providing secure network separation between customer environments
- Automation of network management and configuration that supports multiple network domains per customer and linkage to the customers’ private networks
- Extensible service catalog to fit the needs of the Capgemini customers
- Ability to quickly on-board existing Capgemini customer workloads.
IBM® Tivoli® Service Automation
Manager (TSAM) has delivered yet another cloud extension that provides service
offerings for automating the provisioning of network attached storage (NAS)
with an NFS export name. The file systems can then be mounted into virtual
machines provisioned within TSAM Virtual Servers Projects. The
extension introduces the concept of a Storage-only Project, which
allows you to manage the entire life cycle of the file systems (create, expand, set
access, and destroy) in a secure multi-tenant environment. It integrates
with IBM N series and NetApp FAS series
storage systems, as sketched in the picture below.
Once you download the installation
package from the Integrated Service Management Library (http://www.ibm.com/software/ismlibrary?NavCode=1TW10TS0F) and install it on top of TSAM 7.2.2
platform, your cloud administrator can easily configure the Extension for
Network Attached Storage to provision NFS-mountable file systems. In fact, the
extension provides a plug-in to the Cloud Storage Pool Administration
TSAM application where she can enter the hostname of the workstation running the
NetApp OnCommand management software, and the credentials to
access it. Then the extension automatically discovers all the storage resources (NetApp
Datasets) from the underlying storage systems and makes them visible as
TSAM Storage Pools. At that point the cloud administrator can regulate
access to the storage resources using the TSAM way of associating storage pools
and quotas to customers,
and that’s it, the extension is configured. Now you can delegate to your
customers the management of storage up to the assigned quota: the customer
administrators can start requesting storage for their virtual servers by
creating storage projects and adding, expanding, and deleting file systems. The entry
point for this is the Tivoli Self Service Station – Storage Management folder
(shown in the picture below).
The Create Storage Project offering brings a simple user interface for
requesting file systems and assigning them to teams of users (see the example pictures below).
The customer administrator has to
enter a prefix for the NFS export name, a TSAM Storage Pool from which to carve
the storage, and the size of the file system; that’s it. She can decide to
create many file systems with the same characteristics by increasing the value of
the “Number” spin control. She can decide to make the file systems available to
all the teams of the customer by checking the “Access to All Teams” box; by
default the storage is only visible to the team of users that owns it.
Note that once the storage project
has been created, the file systems cannot be mounted yet into virtual servers because
there is no ACL set on the IBM N series boxes for them. To do so, the customer
administrator creates TSAM Projects with Virtual Servers, and associates file
systems to the virtual machines belonging to the project: the extension
automatically updates the access control list (ACL) of the NFS export name
adding the IP address of the virtual machines. When the user logs in, she can
mount the file systems and use them (she receives the NFS export
name in a notification e-mail).
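As a sketch of that final step on a provisioned Linux virtual machine, mounting the export comes down to a standard NFS mount. The NAS hostname and export path below are hypothetical placeholders; the actual values arrive in the notification e-mail:

```shell
# Create a mount point and mount the NFS export provided by the
# TSAM Extension for NAS storage (hostname and export path are placeholders).
mkdir -p /mnt/project_data
mount -t nfs nseries01.example.com:/vol/customer_vol/project_export /mnt/project_data

# Optionally persist the mount across reboots via /etc/fstab:
echo "nseries01.example.com:/vol/customer_vol/project_export /mnt/project_data nfs defaults 0 0" >> /etc/fstab
```

The mount succeeds only for virtual machines whose IP addresses the extension has added to the export’s ACL, as described above.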
In summary, the predefined functions
that you get with the TSAM Extension for NAS storage are:
Service offerings for managing the entire life-cycle (create, expand,
destroy, set access) of shared file systems accessible with the NFS protocol;
Service offering for authorizing virtual servers to mount storage;
Administrative graphical user interface for discovering NetApp Datasets
as TSAM Storage Pools and restricting usage by customer.
There are no predefined features to
create and manage NetApp Datasets, nor vFilers to create customer silos.
For example, what if you want to automate the creation of a vFiler and of a
couple of storage pools (gold and silver) upon on-boarding of a new customer?
There are no predefined features to authorize
the shared file systems to anything but a virtual server within a Virtual Servers
project. What if you want to automatically attach a file system to a VMware cluster
as a backend datastore for VM images upon creation in a storage project?
Well, the TSAM Extension for NAS
storage provides low-level Tivoli Provisioning Manager (TPM) Workflows and
Tivoli Platform Automation engine (TPAe) Runbooks that can be used to implement
such automations in custom extensions that you can write based on best
practices described in the TSAM platform extensibility guide.
If you are interested in attending our daily demo sessions (see https://www.ibm.com/developerworks/mydeveloperworks/blogs/9e696bfa-94af-4f5a-ab50-c955cca76fd0/entry/new_schedule_and_agenda_for_daily_demo_sessions_of_ibm_smartcloud_provisioning2?lang=en) but:
- you do not feel comfortable with our schedule
- you would like to discuss with us functionalities that are not covered by the current agenda
- you would like to join an exclusive usability session, fully dedicated to you
please post your request on the open beta forum.