Suppose you want to restrict users or clients so that they can access only a subset of the virtual machines in your virtualized environment. This restriction can be imposed at the report level for the ITMfVE reports using Tivoli Common Reporting and Cognos.
cynthyap 110000GC4C Tags: image_management cloud provisioning usage monitoring virtualization cloud_computing orchestration
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions of when cloud capabilities truly advance into the orchestration realm. Frequently it is defined simply as "automation = orchestration."
But automation is just the starting point for cloud. As organizations move beyond managing their virtualized environments, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled, in most cases, by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge that automation alone does not address. As the saying goes, "If you automate a mess, you get an automated mess."
The need to orchestrate really becomes clear when various aspects of cloud management are brought together. The value to the organization at this stage of cloud is simplifying the management of automation – otherwise a balancing act to manage multiple hypervisors, resource usage, availability, scalability, performance and more -- based on business needs from the cloud, with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it’s difficult to realize the full benefits of cloud computing. Stitching together best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
IBM SmartCloud Cost Management now provides usage metering and reporting for IBM SmartCloud Provisioning (SCP). This is now available for download on the ISM Library here: http://www.ibm.com/software/ismlibrary?NavCode=1TW10UM08
This new capability allows you to collect usage information from SCP environments using the SCP High Scale Low Touch (HSLT) commands. The new HSLT SCP Collector gathers usage data every hour and processes it once a day. Usage, Detail and Identifier data is stored on a daily basis. The usage data is then billed, stored, and can be reported on monthly.
A sample job file is provided as part of this functionality to show how to bill each access ID for the high-water mark of allocated resources in the month. The sample job file, SampleHSLT_SCP.xml, is divided into three separate jobs.
The first job, SCP_collect_HSLT_hourly_data, is recommended to be run every hour at XX:59. This job runs HSLT commands to collect all relevant resources for each access ID that is using the SmartCloud Provisioning service. First, a list of all available access IDs is collected using the command iaas-describe-accesses-by-user.
Then, for each access ID, the command iaas-describe-resources-inuse-by-access is run to collect the relevant resources for that access ID. The resources gathered per access ID include:
Memory (MB), Volume (GB), Number of Virtual Processors, Number of VM Instances, and Number of Static IP Addresses.
The HSLT commands also provide context information that feeds into the Account Code Structure. The Account Code Structure includes the following identifiers:
The second job, SCP_Process_daily_data, is recommended to be run every day, some time after midnight. This job processes the daily CSR file and extracts the maximum value across the day for each resource for each access ID. The resource values are then stored in the cimsresourceutilization table of the SmartCloud Cost Management database. Detail and Identifier data is stored in the cimsdetail and cimsident tables of the same database.
The third job, SCP_Process_monthly_data, is recommended to be run once a month, at the start of the month. It processes the last month's worth of data from the cimsresourceutilization table by extracting the maximum value for each resource for each access ID. Billing is applied to the data using the relevant SmartCloud Cost Management rate codes, and the processed data is then stored in the cimssummary table of the SmartCloud Cost Management database, where reports can be run against it.
The sample jobs can be customized for other charging algorithms if desired. Examples include charging on a daily (or hourly) basis, in addition to or instead of a monthly basis. Tiered pricing logic can also be applied, for example to grant a charging amnesty to users or departments that stay below a certain threshold.
Rates are defined for each resource. These rates are used for billing purposes.
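To make the high-water-mark billing concrete, here is a minimal Python sketch of the reduction that the daily and monthly jobs perform. The tuple layout, resource names, and rate table are illustrative assumptions for this sketch, not the product's schema or API.

```python
from collections import defaultdict

def daily_high_water(samples):
    """Reduce hourly samples to the daily maximum per (access_id, resource).

    `samples` is an iterable of (access_id, resource, value) tuples collected
    over one day, mirroring what the hourly collection job gathers."""
    peaks = defaultdict(float)
    for access_id, resource, value in samples:
        key = (access_id, resource)
        peaks[key] = max(peaks[key], value)
    return dict(peaks)

def monthly_bill(daily_peaks, rates):
    """Bill each access ID for the monthly high-water mark of each resource.

    `daily_peaks` is a list of dicts as produced by daily_high_water (one per
    day); `rates` maps a resource name to its per-unit monthly rate."""
    monthly = defaultdict(float)
    for day in daily_peaks:
        for key, value in day.items():
            monthly[key] = max(monthly[key], value)
    bills = defaultdict(float)
    for (access_id, resource), peak in monthly.items():
        bills[access_id] += peak * rates.get(resource, 0.0)
    return dict(bills)
```

The same shape supports the customizations mentioned above: charging daily instead of monthly means billing each `daily_high_water` result directly, and an amnesty threshold is a check on the peak before multiplying by the rate.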
Additions have also been made to the existing SmartCloud Cost Management KVM collector to include new resources, and a separate job file has been included to add some SCP context data to the Account Code Structure, achieved by running HSLT commands.
For information about the existing TUAM KVM collector refer to the following link in the TUAM 7.3 Information Center:
The new resources for the KVM collector include Bytes Received, Packets Received, Receive Packets Dropped, Receive Packet Errors, Bytes Transferred, Packets Transferred, Transfer Packets Dropped, Transfer Packet Errors, Log Size of VM Image, and Size of VM Image on Disk.
The new Account Code Structure for the KVM Collector contains the following identifiers: Service Region, Group, Username, Access id, VM Name
The VM Name contains the Access ID, allowing the information collected from the hypervisor to be related back to the SmartCloud Provisioning identifiers.
The following reports are sample reports run on a system that has collected data from one Service Region on a SmartCloud Provisioning System:
Top 10 Pie Chart
Invoice By Account Level
Note also that other existing SmartCloud Cost Management collectors can collect information from VMware and Power hypervisors.
See the Information Center (http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/topic/com.ibm.ituam.doc_7.3/admin_win_dc/c_core_data_collectors.html) for details.
If you have any questions about this functionality, please contact John Buckley (John Buckley/Ireland/IBM) or Louise O'Halloran (Louise O'Halloran/Ireland/IBM).
cynthyap 110000GC4C Tags: virtualization provider cloud csp cloud-computing cloud_computing msp service provisioning
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSPs) or cloud service providers (CSPs) and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce the labor and errors involved in manual tasks—all with an eye to driving revenue and acquiring new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify complex and time consuming processes for creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the full spectrum of virtualization-to-orchestration functionality helps service providers manage their environments, high-scale provisioning in particular offers a cost-effective way to treat capacity as a business commodity—a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative cost of providing it.
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand, but it was very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interrupting customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
SandraWeiss 060000BCJJ Tags: kvm virtualization vmware monitoring esxi health smartcloud_health solutions smartcloud provisioning cloud
Service Health for IBM SmartCloud Provisioning is now generally available (GA) on the IBM Integrated Service Management Library (ISML).
Service Health provides pre-built integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring, utilizing a custom agent, OS agents, and the ITMfVE agents. A product-provided navigator offers a concise overview of the health of the IBM SmartCloud Provisioning infrastructure, enabling you to quickly identify and react to issues in your environment—such as an unresponsive compute node, high disk usage on storage nodes, or key kernel services not responding—and minimize their impact. It also provides visibility into the KVM and ESXi hypervisors.
This solution can be downloaded from the IBM Integrated Service Management Library (ISML) by following this link -> Service Health for IBM SmartCloud Provisioning
Please join the Tivoli User Community for a live webinar, with an opportunity for questions, this coming Thursday, April 26, 2012, at 10:00 AM ET (USA).
TUC Webinar: Managing the Smarter Physical Infrastructure with the IBM SmartCloud Control Desk
Space is limited. Reserve your Webinar seat now at: http://tivoli-ug.org/events/community_webcasts/c/e/262.aspx
Overview: In this free webinar, the Tivoli User Community is given an exclusive opportunity to see a demonstration of this new SmartCloud offering and ask questions of the IBM product team. IBM SmartCloud Control Desk is a unique new offering that provides integrated management, and analytics-driven ITIL process automation. SmartCloud Control Desk provides a single platform—at a single price point—for managing incidents, problems, service requests, changes, configuration, releases, assets, procurement, service levels and licenses, and includes a service catalog. It is available in a wide range of delivery models, including traditional install, Software-as-a-service, and virtual machine images.
This webinar will cover some of the innovative new features in Control Desk that allow it to "automate ITIL at cloud speed" and extend management to smarter physical infrastructures.
About The Speakers:
Spend the day in the cloud! This webinar is the hour following the presentation and demo session for IBM SmartCloud Provisioning Beta
The Official Tivoli User Community is the largest online and offline organization of Tivoli professionals in the world – home to over 160 local User Communities and dozens of virtual/global groups from 29 countries – with more than 26,000 members. The TUC community offers Users blogs and forums for discussion and collaboration, access to the latest whitepapers, webinars, presentations and research for Users, by Users and the latest information on Tivoli products. The Tivoli User Community offers the opportunity to learn and collaborate on the latest topics and issues that matter most. Membership is complimentary. Join NOW!
A presentation and demo session for IBM SmartCloud Provisioning will be held on Thursday, April 26th, at 3:00 PM Central European Time (CET).
The presentation will cover architectural changes in IBM SmartCloud Provisioning.
The demo will show how to register High Scale Low Touch as a cloud group in IBM SmartCloud Provisioning.
No password is required
A new beta drop for IBM SmartCloud Provisioning is available.
The key functionalities included are listed below:
If you would like to try out the new features without the effort of installing the product, join the community and play with our hosted beta.
If you would like to download the code, go here.
One of the messages behind cloud computing is "pay-per-use": the adoption of a virtualized, standardized, self-service and automated environment should come with the possibility of being charged only for the resources used.
IBM SmartCloud Provisioning is built around the ideas of low complexity, low administration and ease of use.
Keeping these messages in mind, I was thinking about how to extract metering information. I had in mind something easy, doable also by people who definitely do not want to invest in programming, and that does not need any modification to database tables to store historical data.
So I had a look at the available IBM SmartCloud Provisioning interfaces and found a couple of command-line commands that could help me achieve my goal:
iaas-describe-resources-inuse-by-access and iaas-describe-accesses-by-user
The first command displays the number of images and cores, and the amount of memory and disk space, in use by a specific access ID. This command therefore shows the key measures that are usually taken into consideration for usage and accounting in cloud computing.
The second command shows the relationship between access IDs and user IDs. This mapping helps in building metering information per user rather than per access ID. In a simple environment the mapping is 1-1, but you may, for example, have the same user accessing multiple VM regions and therefore having multiple associated access IDs.
Given these two commands, it is pretty straightforward to set up a couple of cron jobs or periodic tasks (depending on whether you are on Linux or Windows) that run on a predefined schedule (for example, once an hour) to extract this data and store it in a temporary file.
You can then have another cron job or periodic task that sums up this information daily, per user, perhaps adding your specific rate codes. If you choose to store this data in, for example, a CSV file, you can easily import it into a reporting engine.
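As a sketch, the daily aggregation step could look like the following Python. The output formats of the two commands are simplified assumptions for illustration (whitespace-separated lines); the real command output would need to be adapted.

```python
import csv
import io
from collections import defaultdict

# Assumed, simplified output formats (illustration only):
# iaas-describe-accesses-by-user:          "user_id access_id" per line
# iaas-describe-resources-inuse-by-access: "access_id images cores memory_mb disk_gb"

def parse_accesses(text):
    """Map each access ID back to its owning user ID."""
    mapping = {}
    for line in text.strip().splitlines():
        user_id, access_id = line.split()
        mapping[access_id] = user_id
    return mapping

def aggregate_per_user(resources_text, access_to_user):
    """Sum resource usage per user across all of that user's access IDs."""
    totals = defaultdict(lambda: [0, 0, 0, 0])
    for line in resources_text.strip().splitlines():
        access_id, *values = line.split()
        user = access_to_user.get(access_id, access_id)
        for i, v in enumerate(values):
            totals[user][i] += int(v)
    return totals

def to_csv(totals):
    """Render per-user totals as CSV for import into a reporting engine."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["user", "images", "cores", "memory_mb", "disk_gb"])
    for user, vals in sorted(totals.items()):
        writer.writerow([user] + vals)
    return out.getvalue()
```

The hourly cron job would capture the command output to files, and the daily job would feed those files through `parse_accesses` and `aggregate_per_user` before writing the CSV.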
Antonio_Di_Cocco 060001977Q Tags: virtual_image_consolidati... virtual_image virtual_image_library image_consolidation image-library
Using IBM SmartCloud Provisioning, end users can easily create and use new virtual machines without worrying about how they run or where they came from. Users just pick an image to deploy from a catalog and run it. This means that, in order to fulfill all requests, the catalog should be as wide as possible; theoretically it should contain every possible combination of operating system and software. That would mean cloud administrators managing thousands and thousands of images. The number of base images grows quickly, increasing management costs, which may turn a cost-saving infrastructure into a much more expensive one. On the Internet there are several suggestions for consolidating your image catalog to keep it as small as possible.
The best way to create and maintain a small image catalog is to define a few standard configurations based on users' job roles. For example, a developer could require an Ubuntu system with Lotus Notes and Rational Software Architect installed on it, while a tester may need a Windows system with a TEM agent, DB2 and some other middleware to run his test scenario. In this case we can define standards so that users request a virtual image deployment driven by its end use: they select an image not by its content, but based on their job.
Even if this is a good suggestion, it may not be easy to implement. If you are creating your cloud solution from scratch, you can steer end users to select, from a small catalog, the image that best fits their requirements. But if your cloud environment has already been up and running for a while and your image catalog is already out of control, consolidating it may not be an easy job. Cloud administrators would have to open all virtual images to look into them and understand their content, and then decide which are the most representative, making them master templates. Imagine doing this job for thousands of images.
Luckily IBM Virtual Image Library will help you in this work.
One of the key features of IBM Virtual Image Library is its ability to introspect virtual images, understanding their content and allowing cloud administrators to compare them. In this way they can understand how similar two images are. When you register a repository with IBM Virtual Image Library, a discovery process starts. During this phase, information about all images and virtual machines is retrieved from the remote repository. At this time, only metadata about the remote objects is stored locally. Moreover, the virtual images are indexed by reading their contents remotely.
Once registration has finished, you can start working with your catalog. The most useful operations are:
As you can imagine, with such a powerful tool it becomes an easy job to define your master images (defining standard configurations) and to consolidate the catalog by merging similar images into a smaller, manageable set.
The core function behind this feature is IBM Virtual Image Library's capability to introspect remote images. Two types of analysis are available:
In both cases, IBM Virtual Image Library does not copy any image locally, but simply connects to the remote hypervisor data store to read the image disks, reducing the time and network traffic this operation requires.
IBM Virtual Image Library allows you to introspect remote images in order to consolidate them, removing the unnecessary ones. Once consolidated, you can also import the smaller catalog into the IBM Virtual Image Library reference repository, allowing you to move images across hypervisors, as mentioned in Image portability across hypervisors.
In a dynamic cloud environment standard concepts like IP addresses and storage volumes assume a special meaning when it comes to reserving and using them regardless of the virtual machines owned by a cloud user.
The concept of Elastic IP (EIP) and Elastic Block Storage (EBS) was initially introduced by Amazon EC2 as a way to decouple the resources assigned to a cloud user from their utilization. In other words, as a cloud user you can reserve an elastic resource and assign it to one of the VMs you own, but you can also re-assign it to a different VM whenever you need (for example, whenever you need to replace your VM with a new one).
SmartCloud Provisioning offers similar capabilities exposing the concepts of Static Addresses and Persistent Volumes that can be reserved and assigned to any running VMs.
A SmartCloud Provisioning address is a statically defined address which can be dynamically bound to any instance in the cloud. In other words, a static IP address is associated with your account, not with a particular instance, and you control that address until you choose to explicitly release it.
Let’s examine in more detail how it works.
When SmartCloud Provisioning creates a VM, it assigns a dynamic IP address to it on a default management sub-network. From this point on, the system always refers to the VM using the dynamic address assigned at boot time. Nonetheless, SmartCloud Provisioning offers cloud users the possibility of assigning a different IP address, which can be seen as a reserved, static IP.
In order to achieve this, a centralized pool of addresses is registered by the cloud administrator and stored in a durable data service. A cloud user can then reserve one or more addresses from this pool and associate one of them with a specific VM he owns. Note that the cloud user has no control over which address will be reserved for him; he does not even know up front whether any static IP addresses are left until he sends the reservation request.
Once a static IP has been reserved and assigned to a VM, SmartCloud Provisioning internally creates a mapping between the default dynamic address associated to the selected VM and the reserved IP address. This translates into NAT rules on the host OS's iptables to forward all traffic to the private address of that VM.
In this way you can always refer to your VM using the static address, and even if you decide to re-create the VM, you can reassign that same address to the new VM.
The address remains in your reserved list as long as you need it, and you can release it when you no longer need it.
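As an illustration of the kind of mapping described above, the following Python sketch generates DNAT/SNAT rules of the sort a host could install to forward traffic for a reserved static IP to a VM's dynamic private address. The exact rules SmartCloud Provisioning installs are internal to the product, so treat these as hypothetical.

```python
def nat_rules(static_ip, dynamic_ip):
    """Return illustrative iptables commands mapping a reserved static IP
    to a VM's internal dynamic address: DNAT rewrites the destination of
    inbound traffic, SNAT rewrites the source of the VM's outbound traffic."""
    return [
        f"iptables -t nat -A PREROUTING -d {static_ip} "
        f"-j DNAT --to-destination {dynamic_ip}",
        f"iptables -t nat -A POSTROUTING -s {dynamic_ip} "
        f"-j SNAT --to-source {static_ip}",
    ]

# Example: map reserved address 192.0.2.10 to a VM's private 10.0.0.5.
for rule in nat_rules("192.0.2.10", "10.0.0.5"):
    print(rule)
```

Re-assigning the static address to a new VM then amounts to deleting these rules and re-creating them with the new VM's dynamic address.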
Persistent storage is critical to any non-trivial production application. Just as Amazon's EBS has proven to be extremely valuable, SmartCloud Provisioning persistent volumes are equally powerful, offering off-instance storage that persists independently of the life of an instance. Users can create arbitrary numbers of arbitrarily sized persistent volumes. The volumes can be dynamically attached to any VM in the cloud, as long as only one instance is attached at any time.
Once attached, a persistent volume appears to the guest OS like any other raw, unformatted block device.
Each persistent volume is assigned a UUID, which the cloud user can use to track it.
RAID sets can easily be created, ensuring that each member volume is hosted on a separate physical host or device.
Multiple block devices are then exposed to the guest OS, which can build its own RAID meta-devices from them using tools like mdadm.
Behind the scenes, these block devices are very similar to the primary boot disk of a non-persistent VM. However, they are read-write iSCSI devices, directly attached to the instance without leveraging copy-on-write. Note that persistent block storage is hosted on the same storage cluster used for master images.
Similarly to the static IP addresses, the persistent volumes are associated with your account, not with a particular instance, and you control them until you choose to explicitly delete them.
Persistent volumes allow you to keep your data separate from the OS, offering the possibility of moving them from one VM to another whenever you need. Moreover, they offer a valid mechanism for keeping your data safe when dealing with VMs that do not have dedicated persistent storage (the non-persistent VMs managed by SmartCloud Provisioning).
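The lifecycle described above (account-owned volumes that outlive instances, with at most one attachment at a time) can be modeled with a toy sketch. Class and method names here are invented for illustration; this is not the SmartCloud Provisioning API.

```python
class PersistentVolume:
    """Toy model of a persistent volume: it outlives any single VM and
    may be attached to at most one VM at a time."""

    def __init__(self, volume_uuid, size_gb):
        self.uuid = volume_uuid
        self.size_gb = size_gb
        self.attached_to = None  # VM id, or None when detached

    def attach(self, vm_id):
        # Enforce the single-attachment invariant described in the text.
        if self.attached_to is not None:
            raise RuntimeError(
                f"volume {self.uuid} already attached to {self.attached_to}"
            )
        self.attached_to = vm_id

    def detach(self):
        # Detaching keeps the volume (and its data); only the link is removed,
        # so the same volume can be re-attached to a replacement VM.
        self.attached_to = None
```

The key design point the sketch captures is that deleting or replacing a VM never touches the volume object itself: the data survives until the owner explicitly deletes the volume.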
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
Harini_Jagannathan 270004KU7K
When you modify an existing out-of-the-box report, how do you replace the IBM logo with your company logo? It’s as simple as copying your image to a couple of locations and pointing an image URL within Report Studio to your image. Let’s see how you can do this with TCR 2.1.
For TCR 2.1, images need to be stored in two locations:
<TIP_HOME>\profiles\TIPProfile\installedApps\TIPCell\IBM Cognos 8.ear\p2pd.war\tivoli\<product>\images\
The first directory listed above is used for HTML reports displayed within TCR. The second directory is used for Excel, PDF, and emailed HTML.
We're pleased to make Service Health for IBM SmartCloud Provisioning available as a beta. As this is a beta, we welcome any and all feedback.
Service Health (Beta) for IBM® SmartCloud Provisioning provides prebuilt integrations between IBM SmartCloud Provisioning and IBM SmartCloud Monitoring. This solution allows you to easily monitor your IBM SmartCloud Provisioning infrastructure to identify and react to issues in your environment.
This solution is available via the IBM Integrated Service Management Library (ISML). You can find it here -> Service Health for IBM SmartCloud Provisioning. Please use the "Comment or Review" link on that page to post feedback, or the "Contact Provider" link.
AHUP_Gianluca_Bernardini 120000AHUP Tags: provisioning cloud bot decentralized smartcloud scale-out hslt p2p
SmartCloud Provisioning is designed to minimize the use of a centralized “command and control” approach, in favor of scale out management, where endpoints can participate in management activities and do not depend on a single configuration management database.
This allows SmartCloud Provisioning to handle multiple provisioning tasks in parallel, across an unlimited number of servers.
Cloud users can request deployments of virtual machines and have access to the provisioned systems in just a few seconds, thanks to the parallel and distributed processing that happens transparently under the covers.
Let’s drill down into the details of this distributed management approach.
SmartCloud Provisioning internally uses a peer-to-peer (P2P) messaging infrastructure to pass provisioning and management messages between agents, which contributes to the decentralized control.
Agents are installed on the compute nodes (i.e. the hypervisors) as well as on the storage nodes, where images and volumes reside.
The P2P connections between agents not only allow self-monitoring of their health in order to implement a low-touch management infrastructure, but also allow orchestrating the communications to achieve an effective load distribution and decentralized management of the requests performed by cloud users.
The P2P communication overlay is backed by a distributed lock service, which is based on ZooKeeper.
ZooKeeper is a distributed, open-source coordination service for distributed applications, which exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming. It is designed to be easy to program, and uses a data model styled after the familiar directory tree structure of file systems.
Like the distributed processes it coordinates, ZooKeeper itself is intended to be replicated over a set of servers that must all know about each other. They maintain an in-memory image of state, along with transaction logs and snapshots in a persistent store.
SmartCloud Provisioning agents each connect to a single ZooKeeper server at a time. Each agent maintains a TCP connection with the ZooKeeper server, through which it sends requests, gets responses, gets watch events, and sends heartbeats. If the TCP connection to the server breaks, the agent connects to a different server.
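The reconnect behavior just described can be sketched as a small simulation. This models only the failover pattern, not the real ZooKeeper client library, and the server names are invented placeholders.

```python
import random

class Agent:
    """Minimal sketch of the client-side behavior described above: an agent
    holds one connection to a single server from a ZooKeeper ensemble and,
    when that connection breaks, fails over to a different server."""

    def __init__(self, ensemble):
        self.ensemble = list(ensemble)
        self.server = random.choice(self.ensemble)  # initial connection

    def on_connection_loss(self):
        # Reconnect to any ensemble member other than the one that failed.
        candidates = [s for s in self.ensemble if s != self.server]
        self.server = random.choice(candidates)
        return self.server
```

Because every ensemble member holds the same replicated state, the agent's session can continue against whichever server it reconnects to.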
When a deployment request is received by SmartCloud Provisioning, the request is processed by the Web Services layer, passed to the management infrastructure, and managed by the agents and the ZooKeeper services.
The following steps describe the internal communications in more detail, as depicted in figure 1 below.
This processing is transparent to the end user, who just sees the deployment request being served in a few seconds.
As I said, this processing happens under the covers very quickly, and the user does not have to worry about any of the steps above.
This enables high levels of parallelism and decentralized management, as well as scale-out capability that can be extended easily by increasing the number of servers.
If you're interested in trying the SmartCloud Provisioning distributed management capabilities, you can download a trial version from the following link:
Jacques.Fontignie 270000UHFU Tags: provenance versioning cloud virtualization image-library image library
Cloud systems have made a huge improvement in terms of tracking and performance. In the “Rapid deployments with IBM SmartCloud Provisioning” blog, we showed that virtual machines or appliances can be started and configured in a matter of seconds. It has never been so easy to create a virtual machine (VM), install software, and configure middleware. However, with great power comes great responsibility: it is now easy to create a VM, but what is its lifecycle? Will it be destroyed after being used? Is the starting image deprecated, or is there a better starting image given the required configuration and software?
IBM SmartCloud Provisioning provides a component called IBM Virtual Image Library (also known as IVIL) to solve common issues that arise in large-scale virtualized environments:
IVIL can be integrated simply into your virtualization infrastructure; the only requirement to start using IVIL is the credentials needed to contact the virtualization infrastructure. No changes to your current virtualization environment are required. After credentials are provided, IVIL can automatically determine the provenance, state, and content of each virtual image or virtual machine in the virtualization environment. After the environment is registered, you will have a clear picture of your various images: their content, history, and similarity with one another. More important, as soon as IVIL is used in the infrastructure, it can be used to move images from one hypervisor vendor to another and keep track of these migrations. To summarize, IVIL not only keeps track of the changes of an image on one hypervisor, but continues to do so when images move across a heterogeneous environment.
A common solution for tracking the contents and versioning of images is a naming convention: for example, a name such as RHEL_6.1_WebSphere7.1_v2.1 implies that the image is Red Hat Linux 6.1 with WebSphere 7.1 installed, and that this is version 2.1 of the image. This approach is feasible with a small number of images but becomes cumbersome and confusing with anything but small examples. Basic information that such names typically attempt to convey includes:
Using an image naming convention can work in some cases and provide some of the needed information but it does not scale beyond a small number of simple images. To solve this, IVIL provides versioning and provenance control to understand where an image comes from:
What is provenance? Simply put, provenance tracks the history of an image as it evolves in the virtual environment: how the bits that make up the image came to be, through IVIL checkout operations, image clone operations, image copy operations, and so on. It describes the lineage of an image from the perspective of the virtual system, which may or may not match how the user of IVIL thinks about the image.
For example, assume that you have an image called "A". If you start this image on multiple instances of IBM SmartCloud Provisioning, or clone it, possibly multiple times, IVIL keeps track of the relation between all the resulting images and instances. If a security flaw is later found in A, you can immediately infer that the associated images and instances are likely affected as well. IVIL provides this functionality not only within a single virtual environment, but across heterogeneous virtual environments.
What is versioning? Versioning is the logical, user-defined lineage of an image or virtual appliance; it is how a user thinks about the evolution of the image's functionality, for example "this is version 2 of my AccountsPayableService virtual image." When an image ships a particular application version, the underlying OS and libraries are often unimportant; the application is what matters. Knowing which template the image came from is not necessarily relevant, but knowing the application version is, along with whether a newer version of the image exists or whether a new image with the latest security patches has been released. This is the versioning system in IVIL: it helps you understand whether other versions of an application exist in the infrastructure, and whether a given application includes a patch or not.
To summarize, provenance is
oriented to infrastructure administration whereas versioning is more oriented
towards applications and workloads.
For example, assume that we want to provide version 1.0 of software S as an image. Users can start instances of image A to use software S. At some point, version 1.0 is deprecated and we must upgrade software S to version 1.1; unfortunately, the OS distribution must be upgraded as well. One solution is to reinstall the OS from scratch and install S version 1.1 on it, producing a new image, B. Images A and B share no lineage from a provenance perspective, but their content has a logical lineage to the user: image A is the parent of image B from a versioning perspective.
It is important to understand that an image can have only one provenance parent but can have multiple version parents. The latter makes sense because an image may have multiple applications installed, and each one may be associated with a logical version lineage of its own.
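The distinction between the two lineages can be illustrated with a small Python sketch; the class and function names here are hypothetical, chosen for illustration, and are not the actual IVIL API:

```python
# Hypothetical sketch of IVIL-style lineage tracking. Each image has at
# most one provenance parent (how its bits were produced) and possibly
# several version parents (the logical predecessors of its applications).

class Image:
    def __init__(self, name, provenance_parent=None, version_parents=()):
        self.name = name
        self.provenance_parent = provenance_parent    # single parent or None
        self.version_parents = list(version_parents)  # may be several

def provenance_descendants(root, images):
    """Images whose bits derive from `root` (clones, checkouts, copies)."""
    affected = set()
    frontier = [root]
    while frontier:
        current = frontier.pop()
        for img in images:
            if img.provenance_parent is current and img not in affected:
                affected.add(img)
                frontier.append(img)
    return affected

# A flaw found in "A" propagates to its clone, but not to "B", which was
# reinstalled from scratch and is only a *version* child of "A".
a = Image("A")
a_clone = Image("A-clone", provenance_parent=a)
b = Image("B", version_parents=[a])

catalog = [a, a_clone, b]
assert provenance_descendants(a, catalog) == {a_clone}
```

This mirrors the software S example: B carries A in its version lineage, but a provenance query for A's descendants correctly excludes it.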
This concludes the introduction of Virtual Image Library component in IBM SmartCloud Provisioning. Next time, I will introduce the concept of similarity between images and the power that it provides in terms of debugging, infrastructure consolidation, licensing cost, and more.
I've been impressed by the speed of provisioning a set of virtual machines with IBM SmartCloud Provisioning: in most cases you can get a running virtual machine in less than one minute, often in just a few tens of seconds.
The SmartCloud Provisioning technology has been devised and optimized in particular for managing the following cloud infrastructure scenarios:
Many other workloads can be deployed and easily automated on top of SmartCloud Provisioning; for example, traditional stateful applications can be deployed for simple HA solutions. However, you get the maximum performance from SmartCloud Provisioning when operating in the context of the above scenarios.
To achieve such high performance, SmartCloud Provisioning was designed around an optimized virtualization infrastructure based on OS streaming: there is no need to copy large image files over the network when provisioning.
Image copying is the single biggest bottleneck in VM provisioning today, in terms of CPU, memory, I/O, and bandwidth usage. In traditional cloud provisioning approaches, all of this is pure overhead: nobody builds a cloud in order to provision systems. Provisioning is merely what is required to obtain systems on which business workloads run, and any overhead competes with those workloads.
The key element of this infrastructure is the so-called ephemeral instance, a virtual machine with no persistent state: once it is terminated, all of its data is deleted as well. Ephemeral instances are clones of a master image, and each clone has an ephemeral primary virtual disk. When the instance goes, so does its ephemeral storage (mechanisms exist in SmartCloud Provisioning to provide persistence where a scenario requires it).
When creating a new instance, since master images are read-only resources replicated across the storage cluster, SmartCloud Provisioning uses copy-on-write (CoW) technology and the iSCSI protocol to stream them, avoiding expensive copying. Each iSCSI session results in a valid block device being created in the host OS. Of course, each guest OS (corresponding to a given instance) requires a writable block device representing the main disk of the system. All supported hypervisors have a storage virtualization layer that includes copy-on-write technology: KVM's qcow2 files, for example, can be configured to implement CoW by referencing a backing storage device, and VMware's redo files effectively do the same thing. In each case, the hypervisor can natively use a CoW file referencing the iSCSI block device to expose a virtual block device to the virtual machine; depending on the hypervisor and guest OS, this device shows up as something like /dev/sda or C:\. The CoW files are stored locally on the hypervisor's file system. When the instance is terminated, the SmartCloud Provisioning agent simply discards the CoW file and checks whether any other instances are using the same iSCSI device; if the device is no longer in use, the agent tears it down as well.
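The copy-on-write behaviour described above can be sketched in a few lines of Python. This is a conceptual model only; the real mechanism lives in the hypervisor's block layer (qcow2 backing files, redo files), not in application code:

```python
# Minimal sketch of copy-on-write semantics: a shared read-only base
# image plus a per-instance writable overlay.

class CowDisk:
    """A writable overlay on top of a shared, read-only base image."""
    def __init__(self, base):
        self.base = base      # shared read-only blocks (streamed via iSCSI)
        self.overlay = {}     # per-instance writes, stored locally

    def read(self, block):
        # Reads hit the local overlay first, then fall back to the base.
        return self.overlay.get(block, self.base.get(block))

    def write(self, block, data):
        # Writes never touch the base; they land in the instance's overlay.
        self.overlay[block] = data

    def discard(self):
        # Terminating an ephemeral instance just drops the overlay.
        self.overlay.clear()

base_image = {0: "kernel", 1: "rootfs"}   # master image, never modified
vm1 = CowDisk(base_image)
vm2 = CowDisk(base_image)

vm1.write(1, "rootfs-patched")
assert vm1.read(1) == "rootfs-patched"    # vm1 sees its own write
assert vm2.read(1) == "rootfs"            # vm2 still sees the base
assert base_image[1] == "rootfs"          # the master image is untouched
```

This is why provisioning avoids image copying entirely: every instance shares the same base blocks, and only its own writes consume local storage.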
Thanks to this infrastructure, provisioning a new virtual machine is a very fast and reliable process: individual systems can be created in tens of seconds, and peak request rates of thousands of systems per hour can be sustained.
If you're interested in trying the Smart Cloud Provisioning product, you can download a trial version from the following link:
AHUP_Gianluca_Bernardini 120000AHUP Tags:  mttr mttf p2p recovery-oriented-computi... provisioning roc smartcloud availability virtualization 5,722 Views
Modern cloud infrastructures are built on thousands of highly distributed servers, used to provide services directly to customers over the Internet. The service provider has two extremely important objectives which are, unfortunately, to some degree in conflict: a) ensuring continuous availability of the cloud service, and b) containing the cost of the infrastructure and its administration (CAPEX and OPEX).
There are several factors that have an impact on the availability of services, mostly related to infrastructure failures. Failures are not only related to unrecoverable hardware outages, but also to recoverable OS or middleware failures.
Not so long ago, the most common approach to high availability was to deploy infrastructures with the highest Mean Time To Failure (MTTF) possible, which required expensive systems and assumed the ability to write error-safe software. It was also assumed that some degree of downtime was acceptable, with vendors boasting of the number of 9's that they could support (e.g. 99.999% availability). In today's always-on Internet, any downtime of a major service becomes headline news. The traditional approach is no longer applicable, and a new one has to be considered.
Given the requirement to reduce infrastructure costs, service providers are using commodity hardware. Given also the requirement to reduce operational costs, hardware failures are commonly dealt with by directly replacing the failed component rather than by manual debugging and recovery by skilled (and expensive) administrators. Thus, to maintain continuous availability of the service, the cloud system must be built to expect failure of the underlying infrastructure, and not only for temporary periods: it must assume that components will disappear forever. Nor can this be limited to hardware components; no matter how well a software element is tested, unexpected edge conditions will appear at some point in time. So, to guarantee continuous availability, a cloud solution must expect its own components to fail too.
Given that we are forced to expect failure, the high MTTF approach is no longer valid, and instead we have to increase availability by flipping the approach to minimizing Mean Time To Recovery (MTTR). The quicker the system can recover from failure, the higher the availability of the service will be. Given however that even a tiny percentage of downtime is no longer acceptable, we also need a means to maintain service availability during the recovery process. One way of doing this is through providing redundancy of all critical services within the Cloud solution.
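The argument can be made concrete with the standard availability formula, availability = MTTF / (MTTF + MTTR). The figures below are illustrative, not product measurements:

```python
# Back-of-the-envelope availability figures. For a fixed MTTF, shrinking
# MTTR drives availability toward 100% -- the recovery-oriented argument
# in a nutshell.

def availability(mttf_hours, mttr_hours):
    return mttf_hours / (mttf_hours + mttr_hours)

# Same commodity hardware (MTTF = 1000 h), very different recovery times:
slow_recovery = availability(1000, 10)     # manual debugging: ~99.0%
fast_recovery = availability(1000, 0.01)   # automated restart: ~99.999%

assert round(slow_recovery, 3) == 0.990
assert fast_recovery > 0.99999
```

The same five-nines figure that once demanded expensive high-MTTF hardware becomes reachable on commodity gear if recovery is automated and fast.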
SmartCloud Provisioning is designed according to these Recovery-Oriented Computing (ROC) principles: it is based on a highly distributed, redundant, and robust infrastructure with near-zero downtime and automated recovery across heterogeneous platforms, and it does not require expensive systems but can run on relatively low-cost commodity infrastructure.
The key factors that allow SmartCloud Provisioning to be a low-touch and robust cloud infrastructure are the following:
Let's consider some typical failure scenarios that can happen in a real environment and let's see how the SmartCloud Provisioning is designed to tolerate them and react appropriately.
First example is related to the management agents that are used by SmartCloud Provisioning to perform the standard provisioning operations.
Management agents are deployed on both the compute nodes and the storage nodes and are organized in dynamic hierarchies, where a leader (manager) is dynamically elected. The leader is just the entry point for distributing requests across the infrastructure and the coordinator of any operation, but the role does not imply any special information being associated with the agent itself (the infrastructure is stateless): any agent can be a leader.
All agents have a watch-dog mechanism that is used to prevent, detect, and correct failures; they also monitor each other in their neighborhood and can take simple actions to fix other agents' issues.
So, if an agent fails, the watch-dog mechanism tries to restart it. If the watch-dog cannot restart the agent, its neighbours attempt some simple recovery actions. If the agent still cannot be restarted, the system keeps working without that node, thanks to the redundant infrastructure.
If the failing agent was a leader and cannot be restarted, the remaining agents dynamically re-elect their leader, without losing any information.
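The recovery and re-election behaviour described above can be sketched as follows. This is an illustrative model, not the actual agent protocol; in particular, electing the lowest-id live agent is just one simple deterministic election rule that the stateless design makes possible:

```python
# Illustrative sketch of escalating watch-dog recovery and stateless
# leader re-election. Because agents hold no special state, any
# surviving agent can take over as leader.

class Agent:
    def __init__(self, agent_id):
        self.id = agent_id
        self.alive = True
        self.restartable = True  # whether a restart attempt can succeed

    def try_restart(self):
        if self.restartable:
            self.alive = True
        return self.alive

def recover(agent):
    """Local watch-dog first, then a neighbour retry; give up if neither works."""
    if agent.try_restart():          # local watch-dog restart
        return True
    return agent.try_restart()       # neighbour-initiated retry

def elect_leader(agents):
    """Any live agent can lead; pick the lowest id deterministically."""
    live = [a for a in agents if a.alive]
    return min(live, key=lambda a: a.id) if live else None

cluster = [Agent(1), Agent(2), Agent(3)]
leader = elect_leader(cluster)        # agent 1 leads initially

leader.alive = False
leader.restartable = False            # the node is gone for good
if not recover(leader):
    leader = elect_leader(cluster)    # remaining agents re-elect

assert leader.id == 2                 # cluster keeps working
```

The key point the sketch captures is that losing the leader costs nothing but an election: no state has to be rebuilt or replicated to the new leader.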
Another example is related to failures either in a storage node or in a compute node.
If a storage node fails, the deployment of VMs can continue without issues thanks to the redundant deployment and the multiple copies of each image available in the storage cluster, while the leader agent tries to restart the failing node.
If a compute node fails, the leader detects the failure and stops sending requests to that node. Moreover, it tries to restart the node, forcing a fresh copy of the compute node to be re-deployed via PXE boot.
If you're interested in trying the SmartCloud Provisioning product, you can download a trial version from the following link:
cynthyap 110000GC4C Tags:  cloud-computing computing cloud provisioning automation virtualization 5,605 Views
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple: you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations that are still trying to leverage the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
- Rapidly scalable deployment designed to meet business growth
- Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
- Reduced complexity through ease of use and improved time to value
- Reduced IT labor resources with self-service requesting and highly automated operations
- Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we've seen customers get a cloud up and running in just hours, realizing immediate time to value. It's fast: administrators have been able to go from bare metal to ready-for-work in under five minutes, start a single VM and load its OS in under 10 seconds, or scale up to 50,000 VMs in an hour (on 50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
A very interesting Cloud Computing case study of the Capgemini Infrastructure as a Service delivery platform project has been recently published on the Web:
The case study shows how one of the world’s leading infrastructure outsourcing providers has seen the business opportunity of offering to its clients a cloud-based solution that combines the benefits of a high-value infrastructure service provider with the cost advantages of Cloud computing. Capgemini focused the new cloud based services on delivering to their clients Infrastructure as a Service capabilities with much higher flexibility and substantial cost-efficiency.
In partnership with IBM, Capgemini built a fully integrated cloud delivery platform for clients in the UK and USA, leveraging the Tivoli Service Delivery Manager solution, which includes the IBM Tivoli Service Automation Manager, Tivoli Monitoring, and Tivoli Usage and Accounting Manager products, on top of IBM BladeCenter HS22V hardware and XIV Storage System technologies.
The key aspects of the solution built by Capgemini have been:
New extensions released for TSAM 7.2.2 extend core capabilities and offer customers:
• More secure customer networks
• Faster response, effectiveness and adaptation to Cloud users
• Lower storage costs managing shared virtual file systems
Network Extension for Juniper
Additional Virtual Disk Extension
Power is supported within Tivoli Service Automation Manager 7.2.2
Load Balancer Extension
Load balancing is one of the key values of any cloud project. The load balancer extension enables the definition of rules to automatically distribute the workload amongst VMs in the project whilst providing a single entry point (Virtual-IP) to external users (i.e. it presents itself as a single powerful machine to the user). Key Features include:
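The Virtual-IP idea can be sketched as a simple round-robin dispatcher. This is an illustration of the concept only, not the extension's actual implementation, which configures a real load balancer through rules:

```python
# Minimal sketch of a Virtual-IP: one entry point that spreads incoming
# requests over the project's VMs, so external users see what looks like
# a single powerful machine.

from itertools import cycle

class VirtualIP:
    def __init__(self, backends):
        self._backends = cycle(backends)   # rotate through the VM pool

    def route(self, request):
        backend = next(self._backends)     # pick the next VM in turn
        return backend, request

vip = VirtualIP(["vm-a", "vm-b", "vm-c"])
targets = [vip.route(f"req-{i}")[0] for i in range(6)]
assert targets == ["vm-a", "vm-b", "vm-c"] * 2
```

Round-robin is only the simplest possible distribution rule; the point is that clients address one Virtual-IP while the work lands evenly across the project's VMs.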
Two more TSAM extensions, for NetApp Storage and Costing Preview, will be available in December 2011.
The TSAM 7.2.2 extensions are available free of charge and can be downloaded from the IBM Service Management Library using the links below.
Network Extension for Juniper – Download here
Additional Virtual Disk Extension – Download here